Dataset columns:
- query_id: string (32 characters)
- query: string (5 to 5.38k characters)
- positive_passages: list (1 to 23 passages)
- negative_passages: list (7 to 25 passages)
- subset: string (5 distinct values)
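The example rows below follow this schema, so a minimal sketch of how such rows might be consumed is given here. It assumes the data is stored as JSON Lines with one row per line; the file name `scidocsrr.jsonl` and the helper names are illustrative assumptions, not part of the dataset itself. Each passage entry carries `docid`, `text`, and `title` fields, and each row flattens into one or more positive and several negative (query, passage) pairs, which is the usual shape for training or evaluating a passage reranker.

```python
import json

def load_rows(path):
    """Yield reranking rows with the schema above from a JSON Lines file.

    Each row is expected to carry: query_id, query, positive_passages and
    negative_passages (lists of {"docid", "text", "title"} dicts), and subset.
    """
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

def to_training_pairs(row):
    """Flatten one row into (query, passage_text, label) pairs for a reranker."""
    pairs = []
    for passage in row["positive_passages"]:
        pairs.append((row["query"], passage["text"], 1))
    for passage in row["negative_passages"]:
        pairs.append((row["query"], passage["text"], 0))
    return pairs

if __name__ == "__main__":
    # "scidocsrr.jsonl" is a hypothetical local file name for this data.
    for row in load_rows("scidocsrr.jsonl"):
        for query, text, label in to_training_pairs(row):
            print(label, query[:40], text[:60])
        break  # preview only the first row
```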
a8f3360c7be5cfacb5d0ef790526247a
Formalizing a Systematic Review Updating Process
[ { "docid": "e79777797fa3cc1ef4650480a7344c40", "text": "Synopsis A framework is presented which assists requirements engineers to choose methods for requirements acquisition. Practitioners are often unaware of the range of methods available. Even when practitioners are aware, most do not foresee the need to use several methods to acquire complete and accurate requirements. One reason for this is the lack of guidelines for method selection. The ACRE framework sets out to overcome these limitations. Method selection is achieved using questions driven from a set of facets which define the strengths and weaknesses of each method. The framework is presented as guidelines for requirements engineering practitioners. It has undergone some evaluation through its presentation to highly-experienced requirements engineers. Some results from this evaluation have been incorporated into the version of ACRE presented in the paper.", "title": "" } ]
[ { "docid": "ba7fe17912c942690c44bc81ce772c22", "text": "[1] We present here a new InSAR persistent scatterer (PS) method for analyzing episodic crustal deformation in non-urban environments, with application to volcanic settings. Our method for identifying PS pixels in a series of interferograms is based primarily on phase characteristics and finds low-amplitude pixels with phase stability that are not identified by the existing amplitude-based algorithm. Our method also uses the spatial correlation of the phases rather than a well-defined phase history so that we can observe temporally-variable processes, e.g., volcanic deformation. The algorithm involves removing the residual topographic component of flattened interferogram phase for each PS, then unwrapping the PS phases both spatially and temporally. Our method finds scatterers with stable phase characteristics independent of amplitudes associated with man-made objects, and is applicable to areas where conventional InSAR fails due to complete decorrelation of the majority of scatterers, yet a few stable scatterers are present.", "title": "" }, { "docid": "2536596ecba0498e7dbcb097695171b0", "text": "How can we effectively encode evolving information over dynamic graphs into low-dimensional representations? In this paper, we propose DyRep – an inductive deep representation learning framework that learns a set of functions to efficiently produce low-dimensional node embeddings that evolves over time. The learned embeddings drive the dynamics of two key processes namely, communication and association between nodes in dynamic graphs. These processes exhibit complex nonlinear dynamics that evolve at different time scales and subsequently contribute to the update of node embeddings. We employ a time-scale dependent multivariate point process model to capture these dynamics. We devise an efficient unsupervised learning procedure and demonstrate that our approach significantly outperforms representative baselines on two real-world datasets for the problem of dynamic link prediction and event time prediction.", "title": "" }, { "docid": "4b03aeb6c56cc25ce57282279756d1ff", "text": "Weighted signed networks (WSNs) are networks in which edges are labeled with positive and negative weights. WSNs can capture like/dislike, trust/distrust, and other social relationships between people. In this paper, we consider the problem of predicting the weights of edges in such networks. We propose two novel measures of node behavior: the goodness of a node intuitively captures how much this node is liked/trusted by other nodes, while the fairness of a node captures how fair the node is in rating other nodes' likeability or trust level. We provide axioms that these two notions need to satisfy and show that past work does not meet these requirements for WSNs. We provide a mutually recursive definition of these two concepts and prove that they converge to a unique solution in linear time. We use the two measures to predict the edge weight in WSNs. Furthermore, we show that when compared against several individual algorithms from both the signed and unsigned social network literature, our fairness and goodness metrics almost always have the best predictive power. We then use these as features in different multiple regression models and show that we can predict edge weights on 2 Bitcoin WSNs, an Epinions WSN, 2 WSNs derived from Wikipedia, and a WSN derived from Twitter with more accurate results than past work. 
Moreover, fairness and goodness metrics form the most significant feature for prediction in most (but not all) cases.", "title": "" }, { "docid": "cf7b17b690258dc50ec12bfbd9de232d", "text": "In this paper, we propose a novel method for visual object tracking called HMMTxD. The method fuses observations from complementary out-of-the box trackers and a detector by utilizing a hidden Markov model whose latent states correspond to a binary vector expressing the failure of individual trackers. The Markov model is trained in an unsupervised way, relying on an online learned detector to provide a source of tracker-independent information for a modified BaumWelch algorithm that updates the model w.r.t. the partially annotated data. We show the effectiveness of the proposed method on combination of two and three tracking algorithms. The performance of HMMTxD is evaluated on two standard benchmarks (CVPR2013 and VOT) and on a rich collection of 77 publicly available sequences. The HMMTxD outperforms the state-of-the-art, often significantly, on all datasets in almost all criteria.", "title": "" }, { "docid": "bdb41d1633c603f4b68dfe0191eb822b", "text": "Concepts are the elementary units of reason and linguistic meaning. They are conventional and relatively stable. As such, they must somehow be the result of neural activity in the brain. The questions are: Where? and How? A common philosophical position is that all concepts-even concepts about action and perception-are symbolic and abstract, and therefore must be implemented outside the brain's sensory-motor system. We will argue against this position using (1) neuroscientific evidence; (2) results from neural computation; and (3) results about the nature of concepts from cognitive linguistics. We will propose that the sensory-motor system has the right kind of structure to characterise both sensory-motor and more abstract concepts. Central to this picture are the neural theory of language and the theory of cogs, according to which, brain structures in the sensory-motor regions are exploited to characterise the so-called \"abstract\" concepts that constitute the meanings of grammatical constructions and general inference patterns.", "title": "" }, { "docid": "07817eb2722fb434b1b8565d936197cf", "text": "We recently have witnessed many ground-breaking results in machine learning and computer vision, generated by using deep convolutional neural networks (CNN). While the success mainly stems from the large volume of training data and the deep network architectures, the vector processing hardware (e.g. GPU) undisputedly plays a vital role in modern CNN implementations to support massive computation. Though much attention was paid in the extent literature to understand the algorithmic side of deep CNN, little research was dedicated to the vectorization for scaling up CNNs. In this paper, we studied the vectorization process of key building blocks in deep CNNs, in order to better understand and facilitate parallel implementation. Key steps in training and testing deep CNNs are abstracted as matrix and vector operators, upon which parallelism can be easily achieved. We developed and compared six implementations with various degrees of vectorization with which we illustrated the impact of vectorization on the speed of model training and testing. 
Besides, a unified CNN framework for both high-level and low-level vision tasks is provided, along with a vectorized Matlab implementation with state-of-the-art speed performance.", "title": "" }, { "docid": "ba314edceb1b8ac00f94ad0037bd5b8e", "text": "AMS subject classifications: primary 62G10 secondary 62H20 Keywords: dCor dCov Multivariate independence Distance covariance Distance correlation High dimension a b s t r a c t Distance correlation is extended to the problem of testing the independence of random vectors in high dimension. Distance correlation characterizes independence and determines a test of multivariate independence for random vectors in arbitrary dimension. In this work, a modified distance correlation statistic is proposed, such that under independence the distribution of a transformation of the statistic converges to Student t, as dimension tends to infinity. Thus we obtain a distance correlation t-test for independence of random vectors in arbitrarily high dimension, applicable under standard conditions on the coordinates that ensure the validity of certain limit theorems. This new test is based on an unbiased es-timator of distance covariance, and the resulting t-test is unbiased for every sample size greater than three and all significance levels. The transformed statistic is approximately normal under independence for sample size greater than nine, providing an informative sample coefficient that is easily interpretable for high dimensional data. 1. Introduction Many applications in genomics, medicine, engineering, etc. require analysis of high dimensional data. Time series data can also be viewed as high dimensional data. Objects can be represented by their characteristics or features as vectors p. In this work, we consider the extension of distance correlation to the problem of testing independence of random vectors in arbitrarily high, not necessarily equal dimensions, so the dimension p of the feature space of a random vector is typically large. measure all types of dependence between random vectors in arbitrary, not necessarily equal dimensions. (See Section 2 for definitions.) Distance correlation takes values in [0, 1] and is equal to zero if and only if independence holds. It is more general than the classical Pearson product moment correlation, providing a scalar measure of multivariate independence that characterizes independence of random vectors. The distance covariance test of independence is consistent against all dependent alternatives with finite second moments. In practice, however, researchers are often interested in interpreting the numerical value of distance correlation, without a formal test. For example, given an array of distance correlation statistics, what can one learn about the strength of dependence relations from the dCor statistics without a formal test? This is in fact, a difficult question, but a solution is finally available for a large class of problems. The …", "title": "" }, { "docid": "4eeb20c4a5cc259be1355b04813223f6", "text": "Dropout, a simple and effective way to train deep neural networks, has led to a number of impressive empirical successes and spawned many recent theoretical investigations. However, the gap between dropout’s training and inference phases, introduced due to tractability considerations, has largely remained under-appreciated. 
In this work, we first formulate dropout as a tractable approximation of some latent variable model, leading to a clean view of parameter sharing and enabling further theoretical analysis. Then, we introduce (approximate) expectation-linear dropout neural networks, whose inference gap we are able to formally characterize. Algorithmically, we show that our proposed measure of the inference gap can be used to regularize the standard dropout training objective, resulting in an explicit control of the gap. Our method is as simple and efficient as standard dropout. We further prove the upper bounds on the loss in accuracy due to expectation-linearization, describe classes of input distributions that expectation-linearize easily. Experiments on three image classification benchmark datasets demonstrate that reducing the inference gap can indeed improve the performance consistently.", "title": "" }, { "docid": "b1d348e2095bd7054cc11bd84eb8ccdc", "text": "Epidermolysis bullosa (EB) is a group of inherited, mechanobullous disorders caused by mutations in various structural proteins in the skin. There have been several advances in the classification of EB since it was first introduced in the late 19th century. We now recognize four major types of EB, depending on the location of the target proteins and level of the blisters: EB simplex (epidermolytic), junctional EB (lucidolytic), dystrophic EB (dermolytic), and Kindler syndrome (mixed levels of blistering). This contribution will summarize the most recent classification and discuss the molecular basis, target genes, and proteins involved. We have also included new subtypes, such as autosomal dominant junctional EB and autosomal recessive EB due to mutations in the dystonin (DST) gene, which encodes the epithelial isoform of bullouspemphigoid antigen 1. The main laboratory diagnostic techniques-immunofluorescence mapping, transmission electron microscopy, and mutation analysis-will also be discussed. Finally, the clinical characteristics of the different major EB types and subtypes will be reviewed.", "title": "" }, { "docid": "cf374e1d1fa165edaf0b29749f32789c", "text": "Photovoltaic (PV) system performance extremely depends on local insolation and temperature conditions. Under partial shading, P-I characteristics of PV systems are complicated and may have multiple local maxima. Conventional Maximum Power Point Tracking (MPPT) techniques can easily fail to track global maxima and may be trapped in local maxima under partial shading; this can be one of main causes for reduced energy yield for many PV systems. In order to solve this problem, this paper proposes a novel Maximum Power Point tracking algorithm based on Differential Evolution (DE) that is capable of tracking global MPP under partial shaded conditions. The ability of proposed algorithm and its excellent performances are evaluated with conventional and popular algorithm by means of simulation. The proposed algorithm works in conjunction with a Boost (step up) DC-DC converter to track the global peak. Moreover, this paper includes a MATLAB-based modeling and simulation scheme suitable for photovoltaic characteristics under partial shading.", "title": "" }, { "docid": "7bbffa53f71207f0f218a09f18586541", "text": "Myelotoxicity induced by chemotherapy may become life-threatening. Neutropenia may be prevented by granulocyte colony-stimulating factors (GCSF), and epoetin may prevent anemia, but both cause substantial side effects and increased costs. 
According to non-established data, wheat grass juice (WGJ) may prevent myelotoxicity when applied with chemotherapy. In this prospective matched control study, 60 patients with breast carcinoma on chemotherapy were enrolled and assigned to an intervention or control arm. Those in the intervention arm (A) were given 60 cc of WGJ orally daily during the first three cycles of chemotherapy, while those in the control arm (B) received only regular supportive therapy. Premature termination of treatment, dose reduction, and starting GCSF or epoetin were considered as \"censoring events.\" Response rate to chemotherapy was calculated in patients with evaluable disease. Analysis of the results showed that five censoring events occurred in Arm A and 15 in Arm B (P = 0.01). Of the 15 events in Arm B, 11 were related to hematological events. No reduction in response rate was observed in patients who could be assessed for response. Side effects related to WGJ were minimal, including worsening of nausea in six patients, causing cessation of WGJ intake. In conclusion, it was found that WGJ taken during FAC chemotherapy may reduce myelotoxicity, dose reductions, and need for GCSF support, without diminishing efficacy of chemotherapy. These preliminary results need confirmation in a phase III study.", "title": "" }, { "docid": "b9e7fedbc42f815b35351ec9a0c31b33", "text": "Proponents have marketed e-learning by focusing on its adoption as the right thing to do while disregarding, among other things, the concerns of the potential users, the adverse effects on users and the existing research on the use of e-learning or related innovations. In this paper, the e-learning-adoption proponents are referred to as the technopositivists. It is argued that most of the technopositivists in the higher education context are driven by a personal agenda, with the aim of propagating a technopositivist ideology to stakeholders. The technopositivist ideology is defined as a ‘compulsive enthusiasm’ about e-learning in higher education that is being created, propagated and channelled repeatedly by the people who are set to gain without giving the educators the time and opportunity to explore the dangers and rewards of e-learning on teaching and learning. Ten myths on e-learning that the technopositivists have used are presented with the aim of initiating effective and constructive dialogue, rather than merely criticising the efforts being made. Introduction The use of technology, and in particular e-learning, in higher education is becoming increasingly popular. However, Guri-Rosenblit (2005) and Robertson (2003) propose that educational institutions should step back and reflect on critical questions regarding the use of technology in teaching and learning. The focus of Guri-Rosenblit’s article is on diverse issues of e-learning implementation in higher education, while Robertson focuses on the teacher. Both papers show that there is a change in the ‘euphoria towards eLearning’ and that a dose of techno-negativity or techno-scepticism is required so that the gap between rhetoric in the literature (with all the promises) and actual implementation can be bridged for an informed stance towards e-learning adoption. British Journal of Educational Technology Vol 41 No 2 2010 199–212 doi:10.1111/j.1467-8535.2008.00910.x © 2008 The Authors. Journal compilation © 2008 British Educational Communications and Technology Agency. 
Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA. Technology in teaching and learning has been marketed or presented to its intended market with a lot of promises, benefits and opportunities. This technopositivist ideology has denied educators and educational researchers the much needed opportunities to explore the motives, power, rewards and sanctions of information and communication technologies (ICTs), as well as time to study the impacts of the new technologies on learning and teaching. Educational research cannot cope with the speed at which technology is advancing (Guri-Rosenblit, 2005; Robertson, 2003; Van Dusen, 1998; Watson, 2001). Indeed there has been no clear distinction between teaching with and teaching about technology and therefore the relevance of such studies has not been brought to the fore. Much of the focus is on the actual educational technology as it advances, rather than its educational functions or the effects it has on the functions of teaching and learning. The teaching profession has been affected by the implementation and use of ICT through these optimistic views, and the ever-changing teaching and learning culture (Kompf, 2005; Robertson, 2003). It is therefore necessary to pause and ask the question to the technopositivist ideologists: whether in e-learning the focus is on the ‘e’ or on the learning. The opportunities and dangers brought about by the ‘e’ in e-learning should be soberly examined. As Gandolfo (1998, p. 24) suggests: [U]ndoubtedly, there is opportunity; the effective use of technology has the potential to improve and enhance learning. Just as assuredly there is the danger that the wrong headed adoption of various technologies apart from a sound grounding in educational research and practice will result, and indeed in some instances has already resulted, in costly additions to an already expensive enterprise without any value added. That is, technology applications must be consonant with what is known about the nature of learning and must be assessed to ensure that they are indeed enhancing learners’ experiences. Technopositivist ideology is a ‘compulsory enthusiasm’ about technology that is being created, propagated and channelled repeatedly by the people who stand to gain either economically, socially, politically or otherwise in due disregard of the trade-offs associated with the technology to the target audience (Kompf, 2005; Robertson, 2003). In e-learning, the beneficiaries of the technopositivist market are doing so by presenting it with promises that would dismiss the judgement of many. This is aptly illustrated by Robertson (2003, pp. 284–285): Information technology promises to deliver more (and more important) learning for every student accomplished in less time; to ensure ‘individualization’ no matter how large and diverse the class; to obliterate the differences and disadvantages associated with race, gender, and class; to vary and yet standardize the curriculum; to remove subjectivity from student evaluation; to make reporting and record keeping a snap; to keep discipline problems to a minimum; to enhance professional learning and discourse; and to transform the discredited teacher-centered classroom into that paean of pedagogy: the constructivist, student-centered classroom, On her part, Guri-Rosenblit (2005, p. 
14) argues that the proponents and marketers of e-learning present it as offering multiple uses that do not have a clear relationship with a current or future problem. She asks two ironic, vital and relevant questions: ‘If it ain’t broken, why fix it?’ and ‘Technology is the answer—but what are the questions?’ The enthusiasm to use technology for endless possibilities has led to the belief that providing 200 British Journal of Educational Technology Vol 41 No 2 2010 © 2008 The Authors. Journal compilation © 2008 British Educational Communications and Technology Agency. information automatically leads to meaningful knowledge creation; hence blurring and confusing the distinction between information and knowledge. This is one of the many misconceptions that emerged with e-learning. There has been a great deal of confusion both in the marketing of and language used in the advocating of the ICTs in teaching and learning. As an example, Guri-Rosenblit (2005, p. 6) identified a list of 15 words used to describe the environment for teaching and learning with technology from various studies: ‘web-based learning, computermediated instruction, virtual classrooms, online education, e-learning, e-education, computer-driven interactive communication, open and distance learning, I-Campus, borderless education, cyberspace learning environments, distributed learning, flexible learning, blended learning, mobile-learning’. The list could easily be extended with many more words. Presented with this array of words, most educators are not sure of what e-learning is. Could it be synonymous to distance education? Is it just the use of online tools to enhance or enrich the learning experiences? Is it stashing the whole courseware or parts of it online for students to access? Or is it a new form of collaborative or cooperative learning? Clearly, any of these questions could be used to describe an aspect of e-learning and quite often confuse the uninformed educator. These varied words, with as many definitions, show the degree to which e-learning is being used in different cultures and in different organisations. Unfortunately, many of these uses are based on popular assumptions and myths. While the myths that will be discussed in this paper are generic, and hence applicable to e-learning use in most cultures and organisations, the paper’s focus is on higher education, because it forms part of a larger e-learning research project among higher education institutions (HEIs) and also because of the popularity of e-learning use in HEIs. Although there is considerable confusion around the term e-learning, for the purpose of this paper it will be considered as referring to the use of electronic technology and content in teaching and learning. It includes, but is not limited to, the use of the Internet; television; streaming video and video conferencing; online text and multimedia; and mobile technologies. From the nomenclature, also comes the crafting of the language for selling the technologies to the educators. Robertson (2003, p. 280) shows the meticulous choice of words by the marketers where ‘research’ is transformed into a ‘belief system’ and the past tense (used to communicate research findings) is substituted for the present and future tense, for example “Technology ‘can and will’ rather than ‘has and does’ ” in a quote from Apple’s comment: ‘At Apple, we believe the effective integration of technology into classroom instruction can and will result in higher levels of student achievement’. 
Similar quotes are available in the market and vendors of technology products for teaching and learning. This, however, is not limited to the market; some researchers have used similar quotes: ‘It is now conventional wisdom that those countries which fail to move from the industrial to the Information Society will not be able to compete in the globalised market system made possible by the new technologies’ (Mac Keogh, 2001, p. 223). The role of research should be to question the conventional wisdom or common sense and offer plausible answers, rather than dancing to the fine tunes of popular or mass e-Learning myths 201 © 2008 The Authors. Journal compilation © 2008 British Educational Communications and Technology Agency. wisdom. It is also interesting to note that Mac Keogh (2001, p. 233) concludes that ‘[w]hen issues other than costs and performance outcomes are considered, the rationale for introducing ICTs in education is more powerful’. Does this mean that irrespective of whether ICTs ", "title": "" }, { "docid": "af8fbdfbc4c4958f69b3936ff2590767", "text": "Analysis of sedimentary diatom assemblages (10 to 144 ka) form the basis for a detailed reconstruction of the paleohydrography and diatom paleoecology of Lake Malawi. Lake-level fluctuations on the order of hundreds of meters were inferred from dramatic changes in the fossil and sedimentary archives. Many of the fossil diatom assemblages we observed have no analog in modern Lake Malawi. Cyclotelloid diatom species are a major component of fossil assemblages prior to 35 ka, but are not found in significant abundances in the modern diatom communities in Lake Malawi. Salinityand alkalinity-tolerant plankton has not been reported in the modern lake system, but frequently dominant fossil diatom assemblages prior to 85 ka. Large stephanodiscoid species that often dominate the plankton today are rarely present in the fossil record prior to 31 ka. Similarly, prior to 31 ka, common central-basin aulacoseiroid species are replaced by species found in the shallow, well-mixed southern basin. Surprisingly, tychoplankton and periphyton were not common throughout prolonged lowstands, but tended to increase in relative abundance during periods of inferred deeper-lake environments. A high-resolution lake level reconstruction was generated by a principle component analysis of fossil diatom and wetsieved fossil and mineralogical residue records. Prior to 70 ka, fossil assemblages suggest that the central basin was periodically a much shallower, more saline and/or alkaline, well-mixed environment. The most significant reconstructed lowstands are ~ 600 m below the modern lake level and span thousands of years. These conditions contrast starkly with the deep, dilute, dysaerobic environments of the modern central basin. After 70 ka, our reconstruction indicates sustained deeper-water environments were common, marked by a few brief, but significant, lowstands. High amplitude lake-level fluctuations appear related to changes in insolation. Seismic reflection data and additional sediment cores recovered from the northern basin of Lake Malawi provide evidence that supports our reconstruction.", "title": "" }, { "docid": "3d490d7d30dcddc3f1c0833794a0f2df", "text": "Purpose-This study attempts to investigate (1) the effect of meditation experience on employees’ self-directed learning (SDL) readiness and organizational innovative (OI) ability as well as organizational performance (OP), and (2) the relationships among SDL, OI, and OP. 
Design/methodology/approach-This study conducts an empirical study of 15 technological companies (n = 412) in Taiwan, utilizing the collected survey data to test the relationships among the three dimensions. Findings-Results show that: (1) The employees’ meditation experience significantly and positively influenced employees’ SDL readiness, companies’ OI capability and OP; (2) The study found that SDL has a direct and significant impact on OI; and OI has direct and significant influences on OP. Research limitation/implications-The generalization of the present study is constrained by (1) the existence of possible biases of the participants, (2) the variations of length, type and form of meditation demonstrated by the employees in these high tech companies, and (3) the fact that local data collection in Taiwan may present different cultural characteristics which may be quite different from those in other areas or countries. Managerial implications are presented at the end of the work. Practical implications-The findings indicate that SDL can only impact organizational innovation through employees “openness to a challenge”, “inquisitive nature”, self-understanding and acceptance of responsibility for learning. Such finding implies better organizational innovative capability under such conditions, thus organizations may encourage employees to take risks or accept new opportunities through various incentives, such as monetary rewards or public recognitions. More specifically, the present study discovers that while administration innovation is the most important element influencing an organization’s financial performance, market innovation is the key component in an organization’s market performance. Social implications-The present study discovers that meditation experience positively", "title": "" }, { "docid": "c7eb67093a6f00bec0d96607e6384378", "text": "Two primary simulations have been developed and are being updated for the Mars Smart Lander Entry, Descent, and Landing (EDL). The high fidelity engineering end-to-end EDL simulation that is based on NASA Langley’s Program to Optimize Simulated Trajectories (POST) and the end-to-end real-time, hardware-in-the-loop simulation test bed, which is based on NASA JPL’s Dynamics Simulator for Entry, Descent and Surface landing (DSENDS). This paper presents the status of these Mars Smart Lander EDL end-to-end simulations at this time. Various models, capabilities, as well as validation and verification for these simulations are discussed.", "title": "" }, { "docid": "046148901452aefdc5a14357ed89cbd3", "text": "Of late, there has been an avalanche of cross-layer design proposals for wireless networks. A number of researchers have looked at specific aspects of network performance and, approaching cross-layer design via their interpretation of what it implies, have presented several cross-layer design proposals. These proposals involve different layers of the protocol stack, and address both cellular and ad hoc networks. There has also been work relating to the implementation of cross-layer interactions. It is high time that these various individual efforts be put into perspective and a more holistic view be taken. In this article, we take a step in that direction by presenting a survey of the literature in the area of cross-layer design, and by taking stock of the ongoing work. 
We suggest a definition for cross-layer design, discuss the basic types of cross-layer design with examples drawn from the literature, and categorize the initial proposals on how cross-layer interactions may be implemented. We then highlight some open challenges and new opportunities for cross-layer design. Designers presenting cross-layer design proposals can start addressing these as they move ahead.", "title": "" }, { "docid": "5462d51955d2eaaa25fd6ff4d71b3f40", "text": "2 \"Generations of scientists may yet have to come and go before the question of the origin of life is finally solved. That it will be solved eventually is as certain as anything can ever be amid the uncertainties that surround us.\" 1. Introduction How, where and when did life appear on Earth? Although Charles Darwin was reluctant to address these issues in his books, in a letter sent on February 1st, 1871 to his friend Joseph Dalton Hooker he wrote in a now famous paragraph that \"it is often said that all the conditions for the first production of a living being are now present, which could ever have been present. But if (and oh what a big if) we could conceive in some warm little pond with all sort of ammonia and phosphoric salts,-light, heat, electricity present, that a protein compound was chemically formed, ready to undergo still more complex changes, at the present such matter would be instantly devoured, or absorbed, which would not have been the case before living creatures were formed...\" (Darwin, 1871). Darwin's letter summarizes in a nutshell not only his ideas on the emergence of life, but also provides considerable insights on the views on the chemical nature of the basic biological processes that were prevalent at the time in many scientific circles. Although Friedrich Miescher had discovered nucleic acids (he called them nuclein) in 1869 (Dahm, 2005), the deciphering of their central role in genetic processes would remain unknown for almost another a century. In contrast, the roles played by proteins in manifold biological processes had been established. Equally significant, by the time Darwin wrote his letter major advances had been made in the understanding of the material basis of life, which for a long time had been considered to be fundamentally different from inorganic compounds. The experiments of Friedrich Wöhler, Adolph Strecker and Aleksandr Butlerov, who had demonstrated independently the feasibility of the laboratory synthesis of urea, alanine, and sugars, respectively, from simple 3 starting materials were recognized as a demonstration that the chemical gap separating organisms from the non-living was not insurmountable. But how had this gap first been bridged? The idea that life was an emergent feature of nature has been widespread since the nineteenth century. The major breakthrough that transformed the origin of life from pure speculation into workable and testable research models were proposals, suggested independently, in …", "title": "" }, { "docid": "c273620e05cc5131e8c6d58b700a0aab", "text": "Differential evolution has been shown to be an effective methodology for solving optimization problems over continuous space. In this paper, we propose an eigenvector-based crossover operator. The proposed operator utilizes eigenvectors of covariance matrix of individual solutions, which makes the crossover rotationally invariant. 
More specifically, the donor vectors during crossover are modified, by projecting each donor vector onto the eigenvector basis that provides an alternative coordinate system. The proposed operator can be applied to any crossover strategy with minimal changes. The experimental results show that the proposed operator significantly improves DE performance on a set of 54 test functions in CEC 2011, BBOB 2012, and CEC 2013 benchmark sets.", "title": "" }, { "docid": "7a1f409eea5e0ff89b51fe0a26d6db8d", "text": "A multi-agent system consisting of <inline-formula><tex-math notation=\"LaTeX\">$N$</tex-math></inline-formula> agents is considered. The problem of steering each agent from its initial position to a desired goal while avoiding collisions with obstacles and other agents is studied. This problem, referred to as the <italic>multi-agent collision avoidance problem</italic>, is formulated as a differential game. Dynamic feedback strategies that approximate the feedback Nash equilibrium solutions of the differential game are constructed and it is shown that, provided certain assumptions are satisfied, these guarantee that the agents reach their targets while avoiding collisions.", "title": "" }, { "docid": "c68196f826f2afb61c13a0399d921421", "text": "BACKGROUND\nIndividuals with mild cognitive impairment (MCI) have a substantially increased risk of developing dementia due to Alzheimer's disease (AD). In this study, we developed a multivariate prognostic model for predicting MCI-to-dementia progression at the individual patient level.\n\n\nMETHODS\nUsing baseline data from 259 MCI patients and a probabilistic, kernel-based pattern classification approach, we trained a classifier to distinguish between patients who progressed to AD-type dementia (n = 139) and those who did not (n = 120) during a three-year follow-up period. More than 750 variables across four data sources were considered as potential predictors of progression. These data sources included risk factors, cognitive and functional assessments, structural magnetic resonance imaging (MRI) data, and plasma proteomic data. Predictive utility was assessed using a rigorous cross-validation framework.\n\n\nRESULTS\nCognitive and functional markers were most predictive of progression, while plasma proteomic markers had limited predictive utility. The best performing model incorporated a combination of cognitive/functional markers and morphometric MRI measures and predicted progression with 80% accuracy (83% sensitivity, 76% specificity, AUC = 0.87). Predictors of progression included scores on the Alzheimer's Disease Assessment Scale, Rey Auditory Verbal Learning Test, and Functional Activities Questionnaire, as well as volume/cortical thickness of three brain regions (left hippocampus, middle temporal gyrus, and inferior parietal cortex). Calibration analysis revealed that the model is capable of generating probabilistic predictions that reliably reflect the actual risk of progression. Finally, we found that the predictive accuracy of the model varied with patient demographic, genetic, and clinical characteristics and could be further improved by taking into account the confidence of the predictions.\n\n\nCONCLUSIONS\nWe developed an accurate prognostic model for predicting MCI-to-dementia progression over a three-year period. The model utilizes widely available, cost-effective, non-invasive markers and can be used to improve patient selection in clinical trials and identify high-risk MCI patients for early treatment.", "title": "" } ]
scidocsrr
5355b9be7a88b959ad05750fb5aa10ba
Supervised Learning with Quantum-Inspired Tensor Networks
[ { "docid": "5d247482bb06e837bf04c04582f4bfa2", "text": "This paper provides an introduction to support vector machines, kernel Fisher discriminant analysis, and kernel principal component analysis, as examples for successful kernel-based learning methods. We first give a short background about Vapnik-Chervonenkis theory and kernel feature spaces and then proceed to kernel based learning in supervised and unsupervised scenarios including practical and algorithmic considerations. We illustrate the usefulness of kernel algorithms by discussing applications such as optical character recognition and DNA analysis.", "title": "" } ]
[ { "docid": "4995bb31547a98adbe98c7a9f2bfa947", "text": "This paper describes our proposed solutions designed for a STS core track within the SemEval 2016 English Semantic Textual Similarity (STS) task. Our method of similarity detection combines recursive autoencoders with a WordNet award-penalty system that accounts for semantic relatedness, and an SVM classifier, which produces the final score from similarity matrices. This solution is further supported by an ensemble classifier, combining an aligner with a bi-directional Gated Recurrent Neural Network and additional features, which then performs Linear Support Vector Regression to determine another set of scores.", "title": "" }, { "docid": "69a11f89a92051631e1c07f2af475843", "text": "Animal-assisted therapy (AAT) has been practiced for many years and there is now increasing interest in demonstrating its efficacy through research. To date, no known quantitative review of AAT studies has been published; our study sought to fill this gap. We conducted a comprehensive search of articles reporting on AAT in which we reviewed 250 studies, 49 of which met our inclusion criteria and were submitted to meta-analytic procedures. Overall, AAT was associated with moderate effect sizes in improving outcomes in four areas: Autism-spectrum symptoms, medical difficulties, behavioral problems, and emotional well-being. Contrary to expectations, characteristics of participants and studies did not produce differential outcomes. AAT shows promise as an additive to established interventions and future research should investigate the conditions under which AAT can be most helpful.", "title": "" }, { "docid": "34f94f47de9329595f6b4a49139310a9", "text": "The powerful data storage and data processing abilities of cloud computing (CC) and the ubiquitous data gathering capability of wireless sensor network (WSN) complement each other in CC-WSN integration, which is attracting growing interest from both academia and industry. However, job scheduling for CC integrated with WSN is a critical and unexplored topic. To fill this gap, this paper first analyzes the characteristics of job scheduling with respect to CC-WSN integration and then studies two traditional and popular job scheduling algorithms (i.e., Min-Min and Max-Min). Further, two novel job scheduling algorithms, namely priority-based two phase Min-Min (PTMM) and priority-based two phase Max-Min (PTAM), are proposed for CC integrated with WSN. Extensive experimental results show that PTMM and PTAM achieve shorter expected completion time than Min-Min and Max-Min, for CC integrated with WSN.", "title": "" }, { "docid": "35225f6ca92daf5b17bdd2a5395b83ca", "text": "A neural network with a single layer of hidden units of gaussian type is proved to be a universal approximator for real-valued maps defined on convex, compact sets of Rn.", "title": "" }, { "docid": "0f7f8557ffa238a529f28f9474559cc4", "text": "Fast incipient machine fault diagnosis is becoming one of the key requirements for economical and optimal process operation management. Artificial neural networks have been used to detect machine faults for a number of years and shown to be highly successful in this application area. This paper presents a novel test technique for machine fault detection and classification in electro-mechanical machinery from vibration measurements using one-class support vector machines (SVMs). 
In order to evaluate one-class SVMs, this paper examines the performance of the proposed method by comparing it with that of multilayer perception, one of the artificial neural network techniques, based on real benchmarking data. q 2005 Published by Elsevier Ltd.", "title": "" }, { "docid": "21916d34fb470601fb6376c4bcd0839a", "text": "BACKGROUND\nCutibacterium (Propionibacterium) acnes is assumed to play an important role in the pathogenesis of acne.\n\n\nOBJECTIVES\nTo examine if clones with distinct virulence properties are associated with acne.\n\n\nMETHODS\nMultiple C. acnes isolates from follicles and surface skin of patients with moderate to severe acne and healthy controls were characterized by multilocus sequence typing. To determine if CC18 isolates from acne patients differ from those of controls in the possession of virulence genes or lack of genes conducive to a harmonious coexistence the full genomes of dominating CC18 follicular clones from six patients and five controls were sequenced.\n\n\nRESULTS\nIndividuals carried one to ten clones simultaneously. The dominating C. acnes clones in follicles from acne patients were exclusively from the phylogenetic clade I-1a and all belonged to clonal complex CC18 with the exception of one patient dominated by the worldwide-disseminated and often antibiotic resistant clone ST3. The clonal composition of healthy follicles showed a more heterogeneous pattern with follicles dominated by clones representing the phylogenetic clades I-1a, I-1b, I-2 and II. Comparison of follicular CC18 gene contents, allelic versions of putative virulence genes and their promoter regions, and 54 variable-length intragenic and inter-genic homopolymeric tracts showed extensive conservation and no difference associated with the clinical origin of isolates.\n\n\nCONCLUSIONS\nThe study supports that C. acnes strains from clonal complex CC18 and the often antibiotic resistant clone ST3 are associated with acne and suggests that susceptibility of the host rather than differences within these clones may determine the clinical outcome of colonization.", "title": "" }, { "docid": "b82805187bdfd14a4dd5efc6faf70f10", "text": "8 Cloud computing has gained tremendous popularity in recent years. By outsourcing computation and 9 storage requirements to public providers and paying for the services used, customers can relish upon the 10 advantages of this new paradigm. Cloud computing provides with a comparably lower-cost, scalable, a 11 location-independent platform for managing clients’ data. Compared to a traditional model of computing, 12 which uses dedicated in-house infrastructure, cloud computing provides unprecedented benefits regarding 13 cost and reliability. Cloud storage is a new cost-effective paradigm that aims at providing high 14 availability, reliability, massive scalability and data sharing. However, outsourcing data to a cloud service 15 provider introduces new challenges from the perspectives of data correctness and security. Over the years, 16 many data integrity schemes have been proposed for protecting outsourced data. This paper aims to 17 enhance the understanding of security issues associated with cloud storage and highlights the importance 18 of data integrity schemes for outsourced data. In this paper, we have presented a taxonomy of existing 19 data integrity schemes use for cloud storage. A comparative analysis of existing schemes is also provided 20 along with a detailed discussion on possible security attacks and their mitigations. 
Additionally, we have 21 discussed design challenges such as computational efficiency, storage efficiency, communication 22 efficiency, and reduced I/O in these schemes. Furthermore; we have highlighted future trends and open 23 issues, for future research in cloud storage security. 24", "title": "" }, { "docid": "cad2d29b9f51bbd146c5b683208cf3fa", "text": "The stereotype content model (SCM) defines two fundamental dimensions of social perception, warmth and competence, predicted respectively by perceived competition and status. Combinations of warmth and competence generate distinct emotions of admiration, contempt, envy, and pity. From these intergroup emotions and stereotypes, the behavior from intergroup affect and stereotypes (BIAS) map predicts distinct behaviors: active and passive, facilitative and harmful. After defining warmth/communion and competence/agency, the chapter integrates converging work documenting the centrality of these dimensions in interpersonal as well as intergroup perception. Structural origins of warmth and competence perceptions result from competitors judged as not warm, and allies judged as warm; high status confers competence and low status incompetence. Warmth and competence judgments support systematic patterns of cognitive, emotional, and behavioral reactions, including ambivalent prejudices. Past views of prejudice as a univalent antipathy have obscured the unique responses toward groups stereotyped as competent but not warm or warm but not competent. Finally, the chapter addresses unresolved issues and future research directions.", "title": "" }, { "docid": "76d22feb7da3dbc14688b0d999631169", "text": "Guilt proneness is a personality trait indicative of a predisposition to experience negative feelings about personal wrongdoing, even when the wrongdoing is private. It is characterized by the anticipation of feeling bad about committing transgressions rather than by guilty feelings in a particular moment or generalized guilty feelings that occur without an eliciting event. Our research has revealed that guilt proneness is an important character trait because knowing a person’s level of guilt proneness helps us to predict the likelihood that they will behave unethically. For example, online studies of adults across the U.S. have shown that people who score high in guilt proneness (compared to low scorers) make fewer unethical business decisions, commit fewer delinquent behaviors, and behave more honestly when they make economic decisions. In the workplace, guilt-prone employees are less likely to engage in counterproductive behaviors that harm their organization.", "title": "" }, { "docid": "4e91d37de7701e4a03c506c602ef3455", "text": "This paper presents the design of Glow, a machine learning compiler for heterogeneous hardware. It is a pragmatic approach to compilation that enables the generation of highly optimized code for multiple targets. Glow lowers the traditional neural network dataflow graph into a two-phase strongly-typed intermediate representation. The high-level intermediate representation allows the optimizer to perform domain-specific optimizations. The lower-level instruction-based address-only intermediate representation allows the compiler to perform memory-related optimizations, such as instruction scheduling, static memory allocation and copy elimination. At the lowest level, the optimizer performs machine-specific code generation to take advantage of specialized hardware features. 
Glow features a lowering phase which enables the compiler to support a high number of input operators as well as a large number of hardware targets by eliminating the need to implement all operators on all targets. The lowering phase is designed to reduce the input space and allow new hardware backends to focus on a small number of linear algebra primitives.", "title": "" }, { "docid": "80f098f2cee2f0cef196c946ba93cb99", "text": "In this paper we propose a new approach to incrementally initialize a manifold surface for automatic 3D reconstruction from images. More precisely we focus on the automatic initialization of a 3D mesh as close as possible to the final solution; indeed many approaches require a good initial solution for further refinement via multi-view stereo techniques. Our novel algorithm automatically estimates an initial manifold mesh for surface evolving multi-view stereo algorithms, where the manifold property needs to be enforced. It bootstraps from 3D points extracted via Structure from Motion, then iterates between a state-of-the-art manifold reconstruction step and a novel mesh sweeping algorithm that looks for new 3D points in the neighborhood of the reconstructed manifold to be added in the manifold reconstruction. The experimental results show quantitatively that the mesh sweeping improves the resolution and the accuracy of the manifold reconstruction, allowing a better convergence of state-of-the-art surface evolution multi-view stereo algorithms.", "title": "" }, { "docid": "60ea79b98eade6b3a7bcd786484aa063", "text": "This paper analyses the effect of adding Bitcoin, to the portfolio (stocks, bonds, Baltic index, MXEF, gold, real estate and crude oil) of an international investor by using daily data available from 2nd of July, 2010 to 2nd of August, 2016. We conclude that adding Bitcoin to portfolio, over the course of the considered period, always yielded a higher Sharpe ratio. This means that Bitcoin’s returns offset its high volatility. This paper, recognizing the fact that Bitcoin is a relatively new asset class, gives the readers a basic idea about the working of the virtual currency, the increasing number developments in the financial industry revolving around it, its unique features and the detailed look into its continuously growing acceptance across different fronts (Banks, Merchants and Countries) globally. We also construct optimal portfolios to reflect the highly lucrative and largely unexplored opportunities associated with investment in Bitcoin. Keywords—Portfolio management, Bitcoin, optimization, Sharpe ratio.", "title": "" }, { "docid": "c023633ca0fe1cfc78b1d579d1ae157b", "text": "A model is proposed that specifies the conditions under which individuals will become internally motivated to perform effectively on their jobs. The model focuses on the interaction among three classes of variables: (a) the psychological states of employees that must be present for internally motivated work behavior to develop; (b) the characteristics of jobs that can create these psychological states; and (c) the attributes of individuals that determine how positively a person will respond to a complex and challenging job. The model was tested for 658 employees who work on 62 different jobs in seven organizations, and results support its validity. 
A number of special features of the model are discussed (including its use as a basis for the diagnosis of jobs and the evaluation of job redesign projects), and the model is compared to other theories of job design.", "title": "" }, { "docid": "324d5709b5638a06170a703e88732458", "text": "Finding the most influential people is an NP-hard problem that has attracted many researchers in the field of social networks. The problem is also known as influence maximization and aims to find a number of people that are able to maximize the spread of influence through a target social network. In this paper, a new algorithm based on the linear threshold model of influence maximization is proposed. The main benefit of the algorithm is that it reduces the number of investigated nodes without loss of quality to decrease its execution time. Our experimental results based on two well-known datasets show that the proposed algorithm is much faster and at the same time more efficient than the state of the art", "title": "" }, { "docid": "f6e8bda7c3915fa023f1b0f88f101f46", "text": "This paper presents a formulation to the obstacle avoidance problem for semi-autonomous ground vehicles. The planning and tracking problems have been divided into a two-level hierarchical controller. The high level solves a nonlinear model predictive control problem to generate a feasible and obstacle free path. It uses a nonlinear vehicle model and utilizes a coordinate transformation which uses vehicle position along a path as the independent variable. The low level uses a higher fidelity model and solves the MPC problem with a sequential quadratic programming approach to track the planned path. Simulations show the method’s ability to safely avoid multiple obstacles while tracking the lane centerline. Experimental tests on a semi-autonomous passenger vehicle driving at high speed on ice show the effectiveness of the approach.", "title": "" }, { "docid": "490d63de99f1973d5bab4c1a90633d18", "text": "Flows transported across mobile ad hoc wireless networks suffer from route breakups caused by nodal mobility. In a network that aims to support critical interactive real-time data transactions, to provide for the uninterrupted execution of a transaction, or for the rapid transport of a high value file, it is essential to identify robust routes across which such transactions are transported. Noting that route failures can induce long re-routing delays that may be highly interruptive for many applications and message/stream transactions, it is beneficial to configure the routing scheme to send a flow across a route whose lifetime is longer, with sufficiently high probability, than the estimated duration of the activity that it is selected to carry. We evaluate the ability of a mobile ad hoc wireless network to distribute flows across robust routes by introducing the robust throughput measure as a performance metric. The utility gained by the delivery of flow messages is based on the level of interruption experienced by the underlying transaction. As a special case, for certain applications only transactions that are completed without being prematurely interrupted may convey data to their intended users that is of acceptable utility. We describe the mathematical calculation of a network’s robust throughput measure, as well as its robust throughput capacity. 
We introduce the robust flow admission and routing algorithm (RFAR) to provide for the timely and robust transport of flow transactions across mobile ad hoc wireless net-", "title": "" }, { "docid": "e3a412a62d5e6a253158e2eba9b0fd05", "text": "Colorectal cancer (CRC) is one of the most common cancers in the western world and is characterised by deregulation of the Wnt signalling pathway. Mutation of the adenomatous polyposis coli (APC) tumour suppressor gene, which encodes a protein that negatively regulates this pathway, occurs in almost 80% of CRC cases. The progression of this cancer from an early adenoma to carcinoma is accompanied by a well-characterised set of mutations including KRAS, SMAD4 and TP53. Using elegant genetic models the current paradigm is that the intestinal stem cell is the origin of CRC. However, human histology and recent studies, showing marked plasticity within the intestinal epithelium, may point to other cells of origin. Here we will review these latest studies and place these in context to provide an up-to-date view of the cell of origin of CRC.", "title": "" }, { "docid": "cc12bd6dcd844c49c55f4292703a241b", "text": "Eleven cases of sudden death of men restrained in a prone position by police officers are reported. Nine of the men were hogtied, one was tied to a hospital gurney, and one was manually held prone. All subjects were in an excited delirious state when restrained. Three were psychotic, whereas the others were acutely delirious from drugs (six from cocaine, one from methamphetamine, and one from LSD). Two were shocked with stun guns shortly before death. The literature is reviewed and mechanisms of death are discussed.", "title": "" }, { "docid": "75e794b731685064820c79f4d68ed79b", "text": "Graph visualizations encode relationships between objects. Abstracting the objects into group structures provides an overview of the data. Groups can be disjoint or overlapping, and might be organized hierarchically. However, the underlying graph still needs to be represented for analyzing the data in more depth. This work surveys research in visualizing group structures as part of graph diagrams. A particular focus is the explicit visual encoding of groups, rather than only using graph layout to implicitly indicate groups. We introduce a taxonomy of visualization techniques structuring the field into four main categories: visual node attributes vary properties of the node representation to encode the grouping, juxtaposed approaches use two separate visualizations, superimposed techniques work with two aligned visual layers, and embedded visualizations tightly integrate group and graph representation. We discuss results from evaluations of those techniques as well as main areas of application. Finally, we report future challenges based on interviews we conducted with leading researchers of the field.", "title": "" }, { "docid": "f5c4c25286eb419eb8f7100702062180", "text": "The primary objective of this investigation was to quantitatively identify which training variables result in the greatest strength and hypertrophy outcomes with lower body low intensity training with blood flow restriction (LI-BFR). Searches were performed for published studies with certain criteria. First, the primary focus of the study must have compared the effects of low intensity endurance or resistance training alone to low intensity exercise with some form of blood flow restriction. 
Second, subject populations had to have similar baseline characteristics so that valid outcome measures could be made. Finally, outcome measures had to include at least one measure of muscle hypertrophy. All studies included in the analysis utilized MRI except for two which reported changes via ultrasound. The mean overall effect size (ES) for muscle strength for LI-BFR was 0.58 [95% CI: 0.40, 0.76], and 0.00 [95% CI: −0.18, 0.17] for low intensity training. The mean overall ES for muscle hypertrophy for LI-BFR training was 0.39 [95% CI: 0.35, 0.43], and −0.01 [95% CI: −0.05, 0.03] for low intensity training. Blood flow restriction resulted in significantly greater gains in strength and hypertrophy when performed with resistance training than with walking. In addition, performing LI-BFR 2–3 days per week resulted in the greatest ES compared to 4–5 days per week. Significant correlations were found between ES for strength development and weeks of duration, but not for muscle hypertrophy. This meta-analysis provides insight into the impact of different variables on muscular strength and hypertrophy to LI-BFR training.", "title": "" } ]
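The blood-flow-restriction meta-analysis summarized in the record above reports pooled effect sizes with 95% confidence intervals (for example, 0.58 [95% CI: 0.40, 0.76] for strength). As a minimal illustration of how such pooled values are typically computed, the Python sketch below applies standard fixed-effect (inverse-variance) pooling; the record does not say which pooling model its authors used, and the per-study values here are invented for demonstration.

import math

# Hypothetical per-study effect sizes and variances (illustrative only; not
# taken from the studies summarized in the record above).
studies = [
    (0.62, 0.04),  # (effect size, variance)
    (0.48, 0.09),
    (0.71, 0.06),
    (0.39, 0.05),
]

# Fixed-effect (inverse-variance) pooling: weight each study by 1 / variance.
weights = [1.0 / var for _, var in studies]
pooled_es = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# 95% confidence interval under a normal approximation.
low, high = pooled_es - 1.96 * pooled_se, pooled_es + 1.96 * pooled_se
print(f"pooled ES = {pooled_es:.2f} [95% CI: {low:.2f}, {high:.2f}]")

A random-effects model would add a between-study variance term, which matters when training protocols differ as much as they do across the BFR studies the record covers.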
scidocsrr
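One record in the row that ends here describes an influence-maximization algorithm built on the linear threshold model. The Python sketch below simulates a single linear-threshold diffusion on a toy directed graph; it illustrates the generic propagation model only, not the specific node-reduction algorithm that record proposes, and the graph, edge weights, and seed set are invented.

import random

def linear_threshold_spread(nodes, weights, seeds, seed=0):
    """One linear-threshold diffusion: a node activates once the summed
    weights of its already-active in-neighbours reach its random threshold."""
    rng = random.Random(seed)
    thresholds = {v: rng.random() for v in nodes}
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v in nodes:
            if v in active:
                continue
            pull = sum(w for (u, t), w in weights.items() if t == v and u in active)
            if pull >= thresholds[v]:
                active.add(v)
                changed = True
    return active

# Toy directed graph given as edge weights (all values invented).
weights = {("a", "b"): 0.6, ("a", "c"): 0.4, ("b", "c"): 0.5, ("c", "d"): 0.9}
nodes = {"a", "b", "c", "d"}
print(linear_threshold_spread(nodes, weights, seeds={"a"}))

Influence maximization then asks which fixed-size seed set maximizes the expected number of activated nodes, which is the NP-hard selection problem the record refers to.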
ba56c243a11d96c06eb434c155b3da59
The Evolution of the Platform Concept: A Systematic Review
[ { "docid": "4bfb389e1ae2433f797458ff3fe89807", "text": "Many if not most markets with network externalities are two-sided. To succeed, platforms in industries such as software, portals and media, payment systems and the Internet, must “get both sides of the market on board ”. Accordingly, platforms devote much attention to their business model, that is to how they court each side while making money overall. The paper builds a model of platform competition with two-sided markets. It unveils the determinants of price allocation and enduser surplus for different governance structures (profit-maximizing platforms and not-for-profit joint undertakings), and compares the outcomes with those under an integrated monopolist and a Ramsey planner.", "title": "" }, { "docid": "4ab8913fff86d8a737ed62c56fe2b39d", "text": "This paper draws on the social and behavioral sciences in an endeavor to specify the nature and microfoundations of the capabilities necessary to sustain superior enterprise performance in an open economy with rapid innovation and globally dispersed sources of invention, innovation, and manufacturing capability. Dynamic capabilities enable business enterprises to create, deploy, and protect the intangible assets that support superior longrun business performance. The microfoundations of dynamic capabilities—the distinct skills, processes, procedures, organizational structures, decision rules, and disciplines—which undergird enterprise-level sensing, seizing, and reconfiguring capacities are difficult to develop and deploy. Enterprises with strong dynamic capabilities are intensely entrepreneurial. They not only adapt to business ecosystems, but also shape them through innovation and through collaboration with other enterprises, entities, and institutions. The framework advanced can help scholars understand the foundations of long-run enterprise success while helping managers delineate relevant strategic considerations and the priorities they must adopt to enhance enterprise performance and escape the zero profit tendency associated with operating in markets open to global competition. Copyright  2007 John Wiley & Sons, Ltd.", "title": "" } ]
[ { "docid": "597e00855111c6ccb891c96e28f23585", "text": "Global food demand is increasing rapidly, as are the environmental impacts of agricultural expansion. Here, we project global demand for crop production in 2050 and evaluate the environmental impacts of alternative ways that this demand might be met. We find that per capita demand for crops, when measured as caloric or protein content of all crops combined, has been a similarly increasing function of per capita real income since 1960. This relationship forecasts a 100-110% increase in global crop demand from 2005 to 2050. Quantitative assessments show that the environmental impacts of meeting this demand depend on how global agriculture expands. If current trends of greater agricultural intensification in richer nations and greater land clearing (extensification) in poorer nations were to continue, ~1 billion ha of land would be cleared globally by 2050, with CO(2)-C equivalent greenhouse gas emissions reaching ~3 Gt y(-1) and N use ~250 Mt y(-1) by then. In contrast, if 2050 crop demand was met by moderate intensification focused on existing croplands of underyielding nations, adaptation and transfer of high-yielding technologies to these croplands, and global technological improvements, our analyses forecast land clearing of only ~0.2 billion ha, greenhouse gas emissions of ~1 Gt y(-1), and global N use of ~225 Mt y(-1). Efficient management practices could substantially lower nitrogen use. Attainment of high yields on existing croplands of underyielding nations is of great importance if global crop demand is to be met with minimal environmental impacts.", "title": "" }, { "docid": "d0a2c8cf31e1d361a7c2b306dffddc25", "text": "During the first years of the so called fourth industrial revolution, main attempts that tried to define the main ideas and tools behind this new era of manufacturing, always end up referring to the concept of smart machines that would be able to communicate with each and with the environment. In fact, the defined cyber physical systems, connected by the internet of things, take all the attention when referring to the new industry 4.0. But, nevertheless, the new industrial environment will benefit from several tools and applications that complement the real formation of a smart, embedded system that is able to perform autonomous tasks. And most of these revolutionary concepts rest in the same background theory as artificial intelligence does, where the analysis and filtration of huge amounts of incoming information from different types of sensors, assist to the interpretation and suggestion of the most recommended course of action. For that reason, artificial intelligence science suit perfectly with the challenges that arise in the consolidation of the fourth industrial revolution.", "title": "" }, { "docid": "d2abcdcdb6650c30838507ec1521b263", "text": "Deep neural networks (DNNs) have achieved great success in solving a variety of machine learning (ML) problems, especially in the domain of image recognition. However, recent research showed that DNNs can be highly vulnerable to adversarially generated instances, which look seemingly normal to human observers, but completely confuse DNNs. These adversarial samples are crafted by adding small perturbations to normal, benign images. Such perturbations, while imperceptible to the human eye, are picked up by DNNs and cause them to misclassify the manipulated instances with high confidence. 
In this work, we explore and demonstrate how systematic JPEG compression can work as an effective pre-processing step in the classification pipeline to counter adversarial attacks and dramatically reduce their effects (e.g., Fast Gradient Sign Method, DeepFool). An important component of JPEG compression is its ability to remove high frequency signal components, inside square blocks of an image. Such an operation is equivalent to selective blurring of the image, helping remove additive perturbations. Further, we propose an ensemble-based technique that can be constructed quickly from a given well-performing DNN, and empirically show how such an ensemble that leverages JPEG compression can protect a model from multiple types of adversarial attacks, without requiring knowledge about the model.", "title": "" }, { "docid": "6b49441def46e13e7289a49a6a615e8d", "text": "In the present research, the authors investigated the impact of self-regulation resources on confirmatory information processing, that is, the tendency of individuals to systematically prefer standpoint-consistent information to standpoint-inconsistent information in information evaluation and search. In 4 studies with political and economic decision-making scenarios, it was consistently found that individuals with depleted self-regulation resources exhibited a stronger tendency for confirmatory information processing than did individuals with nondepleted self-regulation resources. Alternative explanations based on processes of ego threat, cognitive load, and mood were ruled out. Mediational analyses suggested that individuals with depleted self-regulation resources experienced increased levels of commitment to their own standpoint, which resulted in increased confirmatory information processing. In sum, the impact of ego depletion on confirmatory information search seems to be more motivational than cognitive in nature.", "title": "" }, { "docid": "027681fed6a8932935ea8ef9e49cea13", "text": "Nowadays smartphones are ubiquitous and - to some extent - already used to support sports training, e.g. runners or bikers track their trip with a gps-enabled smartphone. But recent mobile technology has powerful processors that allow even more complex tasks like image or graphics processing. In this work we address the question on how mobile technology can be used for collaborative boulder training. More specifically, we present a mobile augmented reality application to support various parts of boulder training. The proposed approach also incorporates sharing and other social features. Thus our solution supports collaborative training by providing an intuitive way to create, share and define goals and challenges together with friends. Furthermore we propose a novel method of trackable generation for augmented reality. Synthetically generated images of climbing walls are used as trackables for real, existing walls.", "title": "" }, { "docid": "cedc00b6b92dc47d7480e51a146affe8", "text": "We propose a new scheme for detecting and localizing the abnormal crowd behavior in video sequences. The proposed method starts from the assumption that the interaction force, as estimated by the Social Force Model (SFM), is a significant feature to analyze crowd behavior. We step forward this hypothesis by optimizing this force using Particle Swarm Optimization (PSO) to perform the advection of a particle population spread randomly over the image frames. 
The population of particles is drifted towards the areas of the main image motion, driven by the PSO fitness function aimed at minimizing the interaction force, so as to model the most diffused, normal, behavior of the crowd. In this way, anomalies can be detected by checking if some particles (forces) do not fit the estimated distribution, and this is done by a RANSAC-like method followed by a segmentation algorithm to finely localize the abnormal areas. A large set of experiments are carried out on public available datasets, and results show the consistent higher performances of the proposed method as compared to other state-of-the-art algorithms, proving the goodness of the proposed approach.", "title": "" }, { "docid": "6afcc3c2e0c67823348cf89a0dfec9db", "text": "BACKGROUND\nThe consumption of dietary protein is important for resistance-trained individuals. It has been posited that intakes of 1.4 to 2.0 g/kg/day are needed for physically active individuals. Thus, the purpose of this investigation was to determine the effects of a very high protein diet (4.4 g/kg/d) on body composition in resistance-trained men and women.\n\n\nMETHODS\nThirty healthy resistance-trained individuals participated in this study (mean ± SD; age: 24.1 ± 5.6 yr; height: 171.4 ± 8.8 cm; weight: 73.3 ± 11.5 kg). Subjects were randomly assigned to one of the following groups: Control (CON) or high protein (HP). The CON group was instructed to maintain the same training and dietary habits over the course of the 8 week study. The HP group was instructed to consume 4.4 grams of protein per kg body weight daily. They were also instructed to maintain the same training and dietary habits (e.g. maintain the same fat and carbohydrate intake). Body composition (Bod Pod®), training volume (i.e. volume load), and food intake were determined at baseline and over the 8 week treatment period.\n\n\nRESULTS\nThe HP group consumed significantly more protein and calories pre vs post (p < 0.05). Furthermore, the HP group consumed significantly more protein and calories than the CON (p < 0.05). The HP group consumed on average 307 ± 69 grams of protein compared to 138 ± 42 in the CON. When expressed per unit body weight, the HP group consumed 4.4 ± 0.8 g/kg/d of protein versus 1.8 ± 0.4 g/kg/d in the CON. There were no changes in training volume for either group. Moreover, there were no significant changes over time or between groups for body weight, fat mass, fat free mass, or percent body fat.\n\n\nCONCLUSIONS\nConsuming 5.5 times the recommended daily allowance of protein has no effect on body composition in resistance-trained individuals who otherwise maintain the same training regimen. This is the first interventional study to demonstrate that consuming a hypercaloric high protein diet does not result in an increase in body fat.", "title": "" }, { "docid": "5c03be451f3610f39c94043d30314617", "text": "Syphilis is a sexually transmitted disease (STD) produced by Treponema pallidum, which mainly affects humans and is able to invade practically any organ in the body. Its infection facilitates the transmission of other STDs. Since the end of the last decade, successive outbreaks of syphilis have been reported in most western European countries. Like other STDs, syphilis is a notifiable disease in the European Union. 
In Spain, epidemiological information is obtained nationwide via the country's system for recording notifiable diseases (Spanish acronym EDO) and the national microbiological information system (Spanish acronym SIM), which compiles information from a network of 46 sentinel laboratories in twelve Spanish regions. The STDs that are epidemiologically controlled are gonococcal infection, syphilis, and congenital syphilis. The incidence of each of these diseases is recorded weekly. The information compiled indicates an increase in the cases of syphilis and gonococcal infection in Spain in recent years. According to the EDO, in 1999, the number of cases of syphilis per 100,000 inhabitants was recorded to be 1.69, which has risen to 4.38 in 2007. In this article, we review the reappearance and the evolution of this infectious disease in eight European countries, and alert dentists to the importance of a) diagnosing sexually-transmitted diseases and b) notifying the centres that control them.", "title": "" }, { "docid": "199df544c19711fbee2dd49e60956243", "text": "Languages vary strikingly in how they encode motion events. In some languages (e.g. English), manner of motion is typically encoded within the verb, while direction of motion information appears in modifiers. In other languages (e.g. Greek), the verb usually encodes the direction of motion, while the manner information is often omitted, or encoded in modifiers. We designed two studies to investigate whether these language-specific patterns affect speakers' reasoning about motion. We compared the performance of English and Greek children and adults (a) in nonlinguistic (memory and categorization) tasks involving motion events, and (b) in their linguistic descriptions of these same motion events. Even though the two linguistic groups differed significantly in terms of their linguistic preferences, their performance in the nonlinguistic tasks was identical. More surprisingly, the linguistic descriptions given by subjects within language also failed to correlate consistently with their memory and categorization performance in the relevant regards. For the domain studied, these results are consistent with the view that conceptual development and organization are largely independent of language-specific labeling practices. The discussion emphasizes that the necessarily sketchy nature of language use assures that it will be at best a crude index of thought.", "title": "" }, { "docid": "ba29af46fd410829c450eed631aa9280", "text": "We address the problem of dense visual-semantic embedding that maps not only full sentences and whole images but also phrases within sentences and salient regions within images into a multimodal embedding space. Such dense embeddings, when applied to the task of image captioning, enable us to produce several region-oriented and detailed phrases rather than just an overview sentence to describe an image. Specifically, we present a hierarchical structured recurrent neural network (RNN), namely Hierarchical Multimodal LSTM (HM-LSTM). Compared with chain structured RNN, our proposed model exploits the hierarchical relations between sentences and phrases, and between whole images and image regions, to jointly establish their representations. Without the need of any supervised labels, our proposed model automatically learns the fine-grained correspondences between phrases and image regions towards the dense embedding. 
Extensive experiments on several datasets validate the efficacy of our method, which compares favorably with the state-of-the-art methods.", "title": "" }, { "docid": "9584909fc62cca8dc5c9d02db7fa7e5d", "text": "As the nature of many materials handling tasks have begun to change from lifting to pushing and pulling, it is important that one understands the biomechanical nature of the risk to which the lumbar spine is exposed. Most previous assessments of push-pull tasks have employed models that may not be sensitive enough to consider the effects of the antagonistic cocontraction occurring during complex pushing and pulling motions in understanding the risk to the spine and the few that have considered the impact of cocontraction only consider spine load at one lumbar level. This study used an electromyography-assisted biomechanical model sensitive to complex motions to assess spine loadings throughout the lumbar spine as 10 males and 10 females pushed and pulled loads at three different handle heights and of three different load magnitudes. Pulling induced greater spine compressive loads than pushing, whereas the reverse was true for shear loads at the different lumbar levels. The results indicate that, under these conditions, anterior-posterior (A/P) shear loads were of sufficient magnitude to be of concern especially at the upper lumbar levels. Pushing and pulling loads equivalent to 20% of body weight appeared to be the limit of acceptable exertions, while pulling at low and medium handle heights (50% and 65% of stature) minimised A/P shear. These findings provide insight to the nature of spine loads and their potential risk to the low back during modern exertions.", "title": "" }, { "docid": "c6d26dddce25dec91534bf5481f64c28", "text": "We propose a new approach to image segmentation, which exploits the advantages of both conditional random fields (CRFs) and decision trees. In the literature, the potential functions of CRFs are mostly defined as a linear combination of some predefined parametric models, and then, methods, such as structured support vector machines, are applied to learn those linear coefficients. We instead formulate the unary and pairwise potentials as nonparametric forests—ensembles of decision trees, and learn the ensemble parameters and the trees in a unified optimization problem within the large-margin framework. In this fashion, we easily achieve nonlinear learning of potential functions on both unary and pairwise terms in CRFs. Moreover, we learn classwise decision trees for each object that appears in the image. Experimental results on several public segmentation data sets demonstrate the power of the learned nonlinear nonparametric potentials.", "title": "" }, { "docid": "430993dbb8fe6cd6c7acdf613424e608", "text": "Deep learning algorithms have recently produced state-of-the-art accuracy in many classification tasks, but this success is typically dependent on access to many annotated training examples. For domains without such data, an attractive alternative is to train models with light, or distant supervision. In this paper, we introduce a deep neural network for the Learning from Label Proportion (LLP) setting, in which the training data consist of bags of unlabeled instances with associated label distributions for each bag. We introduce a new regularization layer, Batch Averager, that can be appended to the last layer of any deep neural network to convert it from supervised learning to LLP. 
This layer can be implemented readily with existing deep learning packages. To further support domains in which the data consist of two conditionally independent feature views (e.g. image and text), we propose a co-training algorithm that iteratively generates pseudo bags and refits the deep LLP model to improve classification accuracy. We demonstrate our models on demographic attribute classification (gender and race/ethnicity), which has many applications in social media analysis, public health, and marketing. We conduct experiments to predict demographics of Twitter users based on their tweets and profile image, without requiring any user-level annotations for training. We find that the deep LLP approach outperforms baselines for both text and image features separately. Additionally, we find that co-training algorithm improves image and text classification by 4% and 8% absolute F1, respectively. Finally, an ensemble of text and image classifiers further improves the absolute F1 measure by 4% on average.", "title": "" }, { "docid": "7ed693c8f8dfa62842304f4c6783af03", "text": "Indian Sign Language (ISL) or Indo-Pakistani Sign Language is possibly the prevalent sign language variety in South Asia used by at least several hundred deaf signers. It is different in the phonetics, grammar and syntax from other country’s sign languages. Since ISL got standardized only recently, there is very little research work that has happened in ISL recognition. Considering the challenges in ISL gesture recognition, a novel method for recognition of static signs of Indian sign language alphabets and numerals for Human Computer Interaction (HCI) has been proposed in this thesis work. The developed algorithm for the hand gesture recognition system in ISL formulates a vision-based approach, using the Two-Dimensional Discrete Cosine Transform (2D-DCT) for image compression and the Self-Organizing Map (SOM) or Kohonen Self Organizing Feature Map (SOFM) Neural Network for pattern recognition purpose, simulated in MATLAB. To design an efficient and user friendly hand gesture recognition system, a GUI model has been implemented. The main advantage of this algorithm is its high-speed processing capability and low computational requirements, in terms of both speed and memory utilization. KeywordsArtificial Neural Network, Hand Gesture Recognition, Human Computer Interaction (HCI), Indian Sign Language (ISL), Kohonen Self Organizing Feature Map (SOFM), Two-Dimensional Discrete Cosine Transform (2D-", "title": "" }, { "docid": "3c014205609a8bbc2f5e216d7af30b32", "text": "This paper proposes a novel design for variable-flux machines with Alnico magnets. The proposed design uses tangentially magnetized magnets to achieve high air-gap flux density and to avoid demagnetization by the armature field. Barriers are also inserted in the rotor to limit the armature flux and to allow the machine to utilize both reluctance and magnet torque components. An analytical procedure is first applied to obtain the initial machine design parameters. Then, several modifications are applied to the stator and rotor designs through finite-element analysis (FEA) simulations to improve machine efficiency and torque density. 
A prototype of the proposed design is built, and the experimental results are in good correlation with the FEA simulations, confirming the validity of the proposed machine design concept.", "title": "" }, { "docid": "bef0eaf89164e6ffeabc758a6c93840b", "text": "Modern instruction set decoders feature translation of native instructions into internal micro-ops to simplify CPU design and improve instruction-level parallelism. However, this translation is static in most known instances. This work proposes context-sensitive decoding, a technique that enables customization of the micro-op translation at the microsecond or faster granularity, based on the current execution context and/or preset hardware events. While there are many potential applications, this work demonstrates its effectiveness with two use cases: 1) as a novel security defense to thwart instruction/data cache-based side-channel attacks, as demonstrated on commercial implementations of RSA and AES and 2) as a power management technique that performs selective devectorization to enable efficient unit-level power gating. This architecture, first by allowing execution to transition between different translation modes rapidly, defends against a variety of attacks, completely obfuscating code-dependent cache access, only sacrificing 5% in steady-state performance – orders of magnitude less than prior art. By selectively disabling the vector units without disabling vector arithmetic, context-sensitive decoding reduces energy by 12.9% with minimal loss in performance. Both optimizations work with no significant changes to the pipeline or the external ISA.", "title": "" }, { "docid": "6ecf5cb70cca991fbefafb739a0a44c9", "text": "Reasoning about objects, relations, and physics is central to human intelligence, and a key goal of artificial intelligence. Here we introduce the interaction network, a model which can reason about how objects in complex systems interact, supporting dynamical predictions, as well as inferences about the abstract properties of the system. Our model takes graphs as input, performs object- and relation-centric reasoning in a way that is analogous to a simulation, and is implemented using deep neural networks. We evaluate its ability to reason about several challenging physical domains: n-body problems, rigid-body collision, and non-rigid dynamics. Our results show it can be trained to accurately simulate the physical trajectories of dozens of objects over thousands of time steps, estimate abstract quantities such as energy, and generalize automatically to systems with different numbers and configurations of objects and relations. Our interaction network implementation is the first general-purpose, learnable physics engine, and a powerful general framework for reasoning about object and relations in a wide variety of complex real-world domains.", "title": "" }, { "docid": "bf0531b03cc36a69aca1956b21243dc6", "text": "Sound of their breath fades with the light. I think about the loveless fascination, Under the milky way tonight. Lower the curtain down in memphis, Lower the curtain down all right. I got no time for private consultation, Under the milky way tonight. Wish I knew what you were looking for. Might have known what you would find. And it's something quite peculiar, Something thats shimmering and white. &#13;
It leads you here despite your destination, Under the milky way tonight (chorus) Preface This Master's Thesis concludes my studies in Human Aspects of Information Technology (HAIT) at Tilburg University. It describes the development, implementation, and analysis of an automatic mood classifier for music. I would like to thank those who have contributed to and supported the contents of the thesis. Special thanks goes to my supervisor Menno van Zaanen for his dedication and support during the entire process of getting started up to the final results. Moreover, I would like to express my appreciation to Fredrik Mjelle for providing the user-tagged instances exported out of the MOODY database, which was used as the dataset for the experiments. Furthermore, I would like to thank Toine Bogers for pointing me out useful website links regarding music mood classification and sending me papers with citations and references. I would also like to thank Michael Voong for sending me his papers on music mood classification research, Jaap van den Herik for his support and structuring of my writing and thinking. I would like to recognise Eric Postma and Marieke van Erp for their time assessing the thesis as members of the examination committee. Finally, I would like to express my gratitude to my family for their enduring support. Abstract This research presents the outcomes of research into using the lingual part of music for building an automatic mood classification system. Using a database consisting of extracted lyrics and user-tagged mood attachments, we built a classifier based on machine learning techniques. By testing the classification system on various mood frameworks (or dimensions) we examined to what extent it is possible to attach mood tags automatically to songs based on lyrics only. Furthermore, we examined to what extent the linguistic part of music revealed adequate information for assigning a mood category and which aspects of mood can be classified best. Our results show that the use of term frequencies and tf*idf values provide a valuable source of …", "title": "" }, { "docid": "f96eb97ea9300632cfae02084455946e", "text": "A planar folded dipole antenna that exhibits wideband characteristics is proposed. The antenna has simple planar construction without a ground plane and is easy to be assembled. Parameter values are adjusted in order to obtain wideband properties and compactness by using an electromagnetic simulator based on the method of moments. An experimental result centered at 1.7 GHz for 50 impedance matching shows that the antenna has bandwidth over 55% . The gains of the antenna are almost constant (2 dBi) in this frequency band and the radiation patterns are very similar to those of a normal dipole antenna. It is also shown that the antenna has a self-balanced impedance property in this frequency band.", "title": "" }, { "docid": "9131f56c00023a3402b602940be621bb", "text": "Location estimation of a wireless capsule endoscope at 400 MHz MICS band is implemented here using both RSSI and TOA-based techniques and their performance investigated. To improve the RSSI-based location estimation, a maximum likelihood (ML) estimation method is employed. For the TOA-based localization, FDTD coupled with continuous wavelet transform (CWT) is used to estimate the time of arrival and localization is performed using multilateration. 
The performances of the proposed localization algorithms are evaluated using a computational heterogeneous biological tissue phantom in the 402MHz-405MHz MICS band. Our investigations reveal that the accuracy obtained by TOA based method is superior to RSSI based estimates. It has been observed that the ML method substantially improves the accuracy of the RSSI-based location estimation.", "title": "" } ]
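One record in the row that ends here proposes JPEG compression as a pre-processing defense against adversarial perturbations, since JPEG quantizes high-frequency content inside small square blocks. The Python sketch below shows the generic idea only: re-encode the input as JPEG in memory, decode it, and hand the result to whatever classifier follows. The quality setting, file name, and `model` object are placeholders, and this is not the authors' exact pipeline or their ensemble variant.

import io

import numpy as np
from PIL import Image

def jpeg_squeeze(image: Image.Image, quality: int = 75) -> np.ndarray:
    """Re-encode an image as JPEG in memory and decode it again, discarding
    high-frequency detail before the image reaches the classifier."""
    buffer = io.BytesIO()
    image.convert("RGB").save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    return np.asarray(Image.open(buffer), dtype=np.float32) / 255.0

# Usage sketch (the classifier is assumed to exist elsewhere):
# x = jpeg_squeeze(Image.open("input.png"))
# prediction = model.predict(x[None, ...])

Because perturbations from attacks such as FGSM are small and largely high-frequency, this selective blurring often removes much of their effect, which is the intuition the record gives.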
scidocsrr
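Another record in the preceding row introduces a Batch Averager layer that turns an ordinary classifier into a learning-from-label-proportions (LLP) model: per-instance predictions are averaged over a bag and compared against the bag's known label distribution. The PyTorch sketch below is one plausible reading of that idea rather than the authors' implementation; the network architecture, bag size, label proportions, and the choice of KL divergence as the bag-level loss are all assumptions made for illustration.

import torch
import torch.nn as nn

class BatchAverager(nn.Module):
    """Average per-instance class probabilities over a bag so the result can
    be matched against the bag's known label proportions."""
    def forward(self, instance_probs: torch.Tensor) -> torch.Tensor:
        return instance_probs.mean(dim=0)

# One LLP training step for a single bag (all shapes and values illustrative).
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2), nn.Softmax(dim=-1))
averager = BatchAverager()

bag = torch.randn(16, 10)               # 16 unlabeled instances in one bag
bag_props = torch.tensor([0.7, 0.3])    # known label distribution of the bag

pred_props = averager(model(bag))
# KL(bag_props || pred_props) as the bag-level loss.
loss = torch.sum(bag_props * (torch.log(bag_props + 1e-8) - torch.log(pred_props + 1e-8)))
loss.backward()

The co-training procedure the record describes would then alternate between a text-side model like this one and an image-side counterpart, exchanging pseudo-bags between them.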
5bbe97ae81ac959a40146de3a5680d52
Artificial Intelligence and Economic Growth
[ { "docid": "4fa7ee44cdc4b0cd439723e9600131bd", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Please contact the publisher regarding any further use of this work. Publisher contact information may be obtained at http://www.jstor.org/journals/ucpress.html. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission.", "title": "" }, { "docid": "84b8e98e143c0bfba79506c44ea12e6d", "text": "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended. Is such progress avoidable? If not to be avoided, can events be guided so that we may survive? These questions are investigated. Some possible answers (and some further dangers) are presented. _What is The Singularity?_ The acceleration of technological progress has been the central feature of this century. I argue in this paper that we are on the edge of change comparable to the rise of human life on Earth. The precise cause of this change is the imminent creation by technology of entities with greater than human intelligence. There are several means by which science may achieve this breakthrough (and this is another reason for having confidence that the event will occur): o The development of computers that are \"awake\" and superhumanly intelligent. (To date, most controversy in the area of AI relates to whether we can create human equivalence in a machine. But if the answer is \"yes, we can\", then there is little doubt that beings more intelligent can be constructed shortly thereafter. o Large computer networks (and their associated users) may \"wake up\" as a superhumanly intelligent entity. o Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent. o Biological science may find ways to improve upon the natural human intellect. The first three possibilities depend in large part on improvements in computer hardware. Progress in computer hardware has followed an amazingly steady curve in the last few decades [16]. Based largely on this trend, I believe that the creation of greater than human intelligence will occur during the next thirty years. (Charles Platt [19] has pointed out the AI enthusiasts have been making claims like this for the last thirty years. Just so I'm not guilty of a relative-time ambiguity, let me more specific: I'll be surprised if this event occurs before 2005 or after 2030.) What are the consequences of this event? When greater-than-human intelligence drives progress, that progress will be much more rapid. In fact, there seems no reason why progress itself would not involve the creation of still more intelligent entities -on a still-shorter time scale. The best analogy that I see is with the evolutionary past: Animals can adapt to problems and make inventions, but often no faster than natural selection can do its work -the world acts as its own simulator in the case of natural selection. 
We humans have the ability to internalize the world and conduct \"what if's\" in our heads; we can solve many problems thousands of times faster than natural selection. Now, by creating the means to execute those simulations at much higher speeds, we are entering a regime as radically different from our human past as we humans are from the lower animals. From the human point of view this change will be a throwing away of all the previous rules, perhaps in the blink of an eye, an exponential runaway beyond any hope of control. Developments that before were thought might only happen in \"a million years\" (if ever) will likely happen in the next century. (In [4], Greg Bear paints a picture of the major changes happening in a matter of hours.) I think it's fair to call this event a singularity (\"the Singularity\" for the purposes of this paper). It is a point where our models must be discarded and a new reality rules. As we move closer and closer to this point, it will loom vaster and vaster over human affairs till the notion becomes a commonplace. Yet when it finally happens it may still be a great surprise and a greater unknown. In the 1950s there were very few who saw it: Stan Ulam [27] paraphrased John von Neumann as saying: One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue. Von Neumann even uses the term singularity, though it appears he is still thinking of normal progress, not the creation of superhuman intellect. (For me, the superhumanity is the essence of the Singularity. Without that we would get a glut of technical riches, never properly absorbed (see [24]).) In the 1960s there was recognition of some of the implications of superhuman intelligence. I. J. Good wrote [10]: Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an \"intelligence explosion,\" and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the _last_ invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. ... It is more probable than not that, within the twentieth century, an ultraintelligent machine will be built and that it will be the last invention that man need make. Good has captured the essence of the runaway, but does not pursue its most disturbing consequences. Any intelligent machine of the sort he describes would not be humankind's \"tool\" -any more than humans are the tools of rabbits or robins or chimpanzees. Through the '60s and '70s and '80s, recognition of the cataclysm spread [28] [1] [30] [4]. Perhaps it was the science-fiction writers who felt the first concrete impact. After all, the \"hard\" science-fiction writers are the ones who try to write specific stories about all that technology may do for us. More and more, these writers felt an opaque wall across the future. Once, they could put such fantasies millions of years in the future [23]. Now they saw that their most diligent extrapolations resulted in the unknowable ... soon. Once, galactic empires might have seemed a Post-Human domain. 
Now, sadly, even interplanetary ones are. What about the '90s and the '00s and the '10s, as we slide toward the edge? How will the approach of the Singularity spread across the human world view? For a while yet, the general critics of machine sapience will have good press. After all, till we have hardware as powerful as a human brain it is probably foolish to think we'll be able to create human equivalent (or greater) intelligence. (There is the far-fetched possibility that we could make a human equivalent out of less powerful hardware, if were willing to give up speed, if we were willing to settle for an artificial being who was literally slow [29]. But it's much more likely that devising the software will be a tricky process, involving lots of false starts and experimentation. If so, then the arrival of self-aware machines will not happen till after the development of hardware that is substantially more powerful than humans' natural equipment.) But as time passes, we should see more symptoms. The dilemma felt by science fiction writers will be perceived in other creative endeavors. (I have heard thoughtful comic book writers worry about how to have spectacular effects when everything visible can be produced by the technically commonplace.) We will see automation replacing higher and higher level jobs. We have tools right now (symbolic math programs, cad/cam) that release us from most low-level drudgery. Or put another way: The work that is truly productive is the domain of a steadily smaller and more elite fraction of humanity. In the coming of the Singularity, we are seeing the predictions of _true_ technological unemployment finally come true. Another symptom of progress toward the Singularity: ideas themselves should spread ever faster, and even the most radical will quickly become commonplace. When I began writing, it seemed very easy to come up with ideas that took decades to percolate into the cultural consciousness; now the lead time seems more like eighteen months. (Of course, this could just be me losing my imagination as I get old, but I see the effect in others too.) Like the shock in a compressible flow, the Singularity moves closer as we accelerate through the critical speed. And what of the arrival of the Singularity itself? What can be said of its actual appearance? Since it involves an intellectual runaway, it will probably occur faster than any technical revolution seen so far. The precipitating event will likely be unexpected -perhaps even to the researchers involved. (\"But all our previous models were catatonic! We were just tweaking some parameters....\") If networking is widespread enough (into ubiquitous embedded systems), it may seem as if our artifacts as a whole had suddenly wakened. And what happens a month or two (or a day or two) after that? I have only analogies to point to: The rise of humankind. We will be in the Post-Human era. And for all my rampant technological optimism, sometimes I think I'd be more comfortable if I were regarding these transcendental events from one thousand years remove ... instead of twenty. _Can the Singularity be Avoided?_ Well, maybe it won't happen at all: Sometimes I try to imagine the symptoms that we should expect to see if the Singularity is not to develop. There are the widely respected arguments of Penrose [18] and Searle [21] against the practicality of machine sapience. In August of 1992, Thinking Machines Corporation held a workshop to investigate the question \"How We Will Build a Machine that Thinks\" [Thearling]. 
As you might guess from the workshop's title, the participants were not especially supportive of the arguments against machine intelligence. In fact, there was general agreement that minds can exist on nonbiological substrates and that algorithms are of central importance to the existence of minds. However, there was much debate about the raw hardware power that is present in organic brains.", "title": "" } ]
[ { "docid": "248adf4ee726dce737b7d0cbe3334ea3", "text": "People can often find themselves out of their depth when they face knowledge-based problems, such as faulty technology, or medical concerns. This can also happen in everyday domains that users are simply inexperienced with, like cooking. These are common exploratory search conditions, where users don’t quite know enough about the domain to know if they are submitting a good query, nor if the results directly resolve their need or can be translated to do so. In such situations, people turn to their friends for help, or to forums like StackOverflow, so that someone can explain things to them and translate information to their specific need. This short paper describes work-in-progress within a Google-funded project focusing on Search Literacy in these situations, where improved search skills will help users to learn as they search, to search better, and to better comprehend the results. Focusing on the technology-problem domain, we present initial results from a qualitative study of questions asked and answers given in StackOverflow, and present plans for designing search engine support to help searchers learn as they search.", "title": "" }, { "docid": "a7a3966aca3881430cd379ed42828e1b", "text": "From rule-based to data-driven lexical entrainment models in spoken dialog systems José Lopes a,b,∗, Maxine Eskenazi c, Isabel Trancoso a,b a Spoken Language Laboratory, INESC-ID Lisboa, Rua Alves Redol 9, 1000-029 Lisboa, Portugal b Instituto Superior Técnico, Avenida Rovisco Pais 1, 1049-001 Lisboa, Portugal c Language Technologies Institute, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213, USA", "title": "" }, { "docid": "f0432af5265a08ccde0111d2d05b93e2", "text": "Cyber security is a critical issue now a days in various different domains in different disciplines. This paper presents a review analysis of cyber hacking attacks along with its experimental results and proposes a new methodology 3SEMCS named as three step encryption method for cyber security. By utilizing this new designed methodology, security at highest level will be easily provided especially on the time of request submission in the search engine as like google during client server communication. During its working a group of separate encryption algorithms are used. The benefit to utilize this three step encryption is to provide more tighten security by applying three separate encryption algorithms in each phase having different operations. And the additional benefit to utilize this methodology is to run over new designed private browser named as “RR” that is termed as Rim Rocks correspondingly this also help to check the authenticated sites or phishing sites by utilizing the strategy of passing URL address from phishing tank. This may help to block the phisher sites and user will relocate on previous page. The purpose to design this personnel browser is to enhance the level of security by", "title": "" }, { "docid": "4f846635e4f23b7630d0c853559f71dc", "text": "Parkinson's disease, known also as striatal dopamine deficiency syndrome, is a degenerative disorder of the central nervous system characterized by akinesia, muscular rigidity, tremor at rest, and postural abnormalities. In early stages of parkinsonism, there appears to be a compensatory increase in the number of dopamine receptors to accommodate the initial loss of dopamine neurons. 
As the disease progresses, the number of dopamine receptors decreases, apparently due to the concomitant degeneration of dopamine target sites on striatal neurons. The loss of dopaminergic neurons in Parkinson's disease results in enhanced metabolism of dopamine, augmenting the formation of H2O2, thus leading to generation of highly neurotoxic hydroxyl radicals (OH.). The generation of free radicals can also be produced by 6-hydroxydopamine or MPTP which destroys striatal dopaminergic neurons causing parkinsonism in experimental animals as well as human beings. Studies of the substantia nigra after death in Parkinson's disease have suggested the presence of oxidative stress and depletion of reduced glutathione; a high level of total iron with reduced level of ferritin; and deficiency of mitochondrial complex I. New approaches designed to attenuate the effects of oxidative stress and to provide neuroprotection of striatal dopaminergic neurons in Parkinson's disease include blocking dopamine transporter by mazindol, blocking NMDA receptors by dizocilpine maleate, enhancing the survival of neurons by giving brain-derived neurotrophic factors, providing antioxidants such as vitamin E, or inhibiting monoamine oxidase B (MAO-B) by selegiline. Among all of these experimental therapeutic refinements, the use of selegiline has been most successful in that it has been shown that selegiline may have a neurotrophic factor-like action rescuing striatal neurons and prolonging the survival of patients with Parkinson's disease.", "title": "" }, { "docid": "86cdce8b04818cc07e1003d85305bd40", "text": "Balanced graph partitioning is a well known NP-complete problem with a wide range of applications. These applications include many large-scale distributed problems including the optimal storage of large sets of graph-structured data over several hosts-a key problem in today's Cloud infrastructure. However, in very large-scale distributed scenarios, state-of-the-art algorithms are not directly applicable, because they typically involve frequent global operations over the entire graph. In this paper, we propose a fully distributed algorithm, called JA-BE-JA, that uses local search and simulated annealing techniques for graph partitioning. The algorithm is massively parallel: there is no central coordination, each node is processed independently, and only the direct neighbors of the node, and a small subset of random nodes in the graph need to be known locally. Strict synchronization is not required. These features allow JA-BE-JA to be easily adapted to any distributed graph-processing system from data centers to fully distributed networks. We perform a thorough experimental analysis, which shows that the minimal edge-cut value achieved by JA-BE-JA is comparable to state-of-the-art centralized algorithms such as METIS. In particular, on large social networks JA-BEJA outperforms METIS, which makes JA-BE-JA-a bottom-up, self-organizing algorithm-a highly competitive practical solution for graph partitioning.", "title": "" }, { "docid": "92fb94c947ec85ef7fe44be24e0e2c34", "text": "This paper describes the Microsoft submission to the WMT2018 news translation shared task. We participated in one language direction – English-German. Our system follows current best-practice and combines state-of-theart models with new data filtering (dual conditional cross-entropy filtering) and sentence weighting methods. 
We trained fairly standard Transformer-big models with an updated version of Edinburgh’s training scheme for WMT2017 and experimented with different filtering schemes for Paracrawl. According to automatic metrics (BLEU) we reached the highest score for this subtask with a nearly 2 BLEU point margin over the next strongest system. Based on human evaluation we ranked first among constrained systems. We believe this is mostly caused by our data filtering/weighting regime.", "title": "" }, { "docid": "39a4914ad4f793d8ce412aa169736e75", "text": "We present a metamaterial that acts as a strongly resonant absorber at terahertz frequencies. Our design consists of a bilayer unit cell which allows for maximization of the absorption through independent tuning of the electrical permittivity and magnetic permeability. An experimental absorptivity of 70% at 1.3 terahertz is demonstrated. We utilize only a single unit cell in the propagation direction, thus achieving an absorption coefficient alpha = 2000 cm(-1). These metamaterials are promising candidates as absorbing elements for thermally based THz imaging, due to their relatively low volume, low density, and narrow band response.", "title": "" }, { "docid": "4b9fe62a497ffe0fe6e669542843292d", "text": "Autonomous robot navigation through unknown, cluttered environments at high-speeds is still an open problem. Quadrotor platforms with this capability have only begun to emerge with the advancements in light-weight, small form factor sensing and computing. Many of the existing platforms, however, require excessive computation time to perform collision avoidance, which ultimately limits the vehicle's top speed. This work presents an efficient perception and planning approach that significantly reduces the computation time by using instantaneous perception data for collision avoidance. Minimum-time, state and input constrained motion primitives are generated by sampling terminal states until a collision-free path is found. The worst case performance of the Triple Integrator Planner (TIP) is nearly an order of magnitude faster than the state-of-the-art. Experimental results demonstrate the algorithm's ability to plan and execute aggressive collision avoidance maneuvers in highly cluttered environments.", "title": "" }, { "docid": "0d6960b2817f98924f7de3b7d7774912", "text": "Visual textures have played a key role in image understanding because they convey important semantics of images, and because texture representations that pool local image descriptors in an orderless manner have had a tremendous impact in diverse applications. In this paper we make several contributions to texture understanding. First, instead of focusing on texture instance and material category recognition, we propose a human-interpretable vocabulary of texture attributes to describe common texture patterns, complemented by a new describable texture dataset for benchmarking. Second, we look at the problem of recognizing materials and texture attributes in realistic imaging conditions, including when textures appear in clutter, developing corresponding benchmarks on top of the recently proposed OpenSurfaces dataset. Third, we revisit classic texture represenations, including bag-of-visual-words and the Fisher vectors, in the context of deep learning and show that these have excellent efficiency and generalization properties if the convolutional layers of a deep model are used as filter banks. 
We obtain in this manner state-of-the-art performance in numerous datasets well beyond textures, an efficient method to apply deep features to image regions, as well as benefit in transferring features from one domain to another.", "title": "" }, { "docid": "f8cc1cf257711c83464a98b3d9167c94", "text": "A Software Repository is a collection of library files and function codes. Programmers and Engineers design develop and build software libraries in a continuous process. Selecting suitable function code from one among many in the repository is quite challenging and cumbersome as we need to analyze semantic issues in function codes or components. Clustering and Mining Software Components for efficient reuse is the current topic of interest among researchers in Software Reuse Engineering and Information Retrieval. A relatively less research work is contributed in this field and has a good scope in the future. In this paper, the main idea is to cluster the software components and form a subset of libraries from the available repository. These clusters thus help in choosing the required component with high cohesion and low coupling quickly and efficiently. We define a similarity function and use the same for the process of clustering the software components and for estimating the cost of new project. The approach carried out is a feature vector based approach. © 2014 The Authors. Published by Elsevier B.V. Selection and/or peer-review under responsibility of the organizers of ITQM 2014", "title": "" }, { "docid": "7e6c95fbaa356dfa5c95e370f23c8c92", "text": "Volume II of the subject guide for 2910227, Interactive Multimedia introduced the very basics of metadata-and content-based mutimedia information retrieval. This chapter expands on that introduction in the specific context of music, giving an overview of the field of music information retrieval, some currently existing systems (whether research prototypes or commercially-deployed) and how they work, and some examples of problems yet unsolved. Figure 1.1 enumerates a number of tasks commonly attempted in the field of Music Information Retrieval, arranged by 'specificity', which can be thought of as how discriminating a particular task is, or how clear is the demarcation between relevant and non-relevant (or 'right' and 'wrong') retrieval results. As will become clear through the course of this chapter, these and other tasks in Music Information Retrieval have applications in domains as varied digital libraries, consumer digital devices, content delivery and musical performance. specificity high low music identification rights management, plagiarism detection multiple version handling melody extraction and retrieval performer or composer identification recommender systems style, mood, genre detection music-speech segmentation Figure 1.1: An enumeration of some tasks in the general field of Music Information Retrieval, arranged on a scale of 'specificity' after Byrd (2008); Casey et al. (2008). The specificity of a retrieval task relates to how much acoustic and musical material a retrieved result must share with a query to be considered relevant, and how many documents in total could be considered relevant retrieval results. This chapter includes a number of references to the published scientific literature.", "title": "" }, { "docid": "ee2c37fd2ebc3fd783bfe53213e7470e", "text": "Mind-body interventions are beneficial in stress-related mental and physical disorders. 
Current research is finding associations between emotional disorders and vagal tone as indicated by heart rate variability. A neurophysiologic model of yogic breathing proposes to integrate research on yoga with polyvagal theory, vagal stimulation, hyperventilation, and clinical observations. Yogic breathing is a unique method for balancing the autonomic nervous system and influencing psychologic and stress-related disorders. Many studies demonstrate effects of yogic breathing on brain function and physiologic parameters, but the mechanisms have not been clarified. Sudarshan Kriya yoga (SKY), a sequence of specific breathing techniques (ujjayi, bhastrika, and Sudarshan Kriya) can alleviate anxiety, depression, everyday stress, post-traumatic stress, and stress-related medical illnesses. Mechanisms contributing to a state of calm alertness include increased parasympathetic drive, calming of stress response systems, neuroendocrine release of hormones, and thalamic generators. This model has heuristic value, research implications, and clinical applications.", "title": "" }, { "docid": "4592c8f5758ccf20430dbec02644c931", "text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.", "title": "" }, { "docid": "cf31b8eb971e89d4521c4a70cf181bc3", "text": "In this paper we address the problem of scalable, native and adaptive query processing over Linked Stream Data integrated with Linked Data. Linked Stream Data consists of data generated by stream sources, e.g., sensors, enriched with semantic descriptions, following the standards proposed for Linked Data. This enables the integration of stream data with Linked Data collections and facilitates a wide range of novel applications. Currently available systems use a “black box” approach which delegates the processing to other engines such as stream/event processing engines and SPARQL query processors by translating to their provided languages. As the experimental results described in this paper show, the need for query translation and data transformation, as well as the lack of full control over the query execution, pose major drawbacks in terms of efficiency. To remedy these drawbacks, we present CQELS (Continuous Query Evaluation over Linked Streams), a native and adaptive query processor for unified query processing over Linked Stream Data and Linked Data. In contrast to the existing systems, CQELS uses a “white box” approach and implements the required query operators natively to avoid the overhead and limitations of closed system regimes. CQELS provides a flexible query execution framework with the query processor dynamically adapting to the changes in the input data. 
During query execution, it continuously reorders operators according to some heuristics to achieve improved query execution in terms of delay and complexity. Moreover, external disk access on large Linked Data collections is reduced with the use of data encoding and caching of intermediate query results. To demonstrate the efficiency of our approach, we present extensive experimental performance evaluations in terms of query execution time, under varied query types, dataset sizes, and number of parallel queries. These results show that CQELS outperforms related approaches by orders of magnitude.", "title": "" }, { "docid": "914f9bf7d24d0a0ee8c42e1263a04646", "text": "With the rapid growth in the usage of social networks worldwide, uploading and sharing of user-generated content, both text and visual, has become increasingly prevalent. An analysis of the content a user shares and engages with can provide valuable insights into an individual's preferences and lifestyle. In this paper, we present a system to automatically infer a user's interests by analysing the content of the photos they share online. We propose a way to leverage web image search engines for detecting high-level semantic concepts, such as interests, in images, without relying on a large set of labeled images. We demonstrate the effectiveness of our system through quantitative and qualitative results on data collected from Instagram.", "title": "" }, { "docid": "ca932a0b6b71f009f95bad6f2f3f8a38", "text": "Page 13 Supply chain management is increasingly being recognized as the integration of key business processes across the supply chain. For example, Hammer argues that now that companies have implemented processes within the firm, they need to integrate them between firms: Streamlining cross-company processes is the next great frontier for reducing costs, enhancing quality, and speeding operations. It is where this decade’s productivity wars will be fought. The victors will be those companies that are able to take a new approach to business, working closely with partners to design and manage processes that extend across traditional corporate boundaries. They will be the ones that make the leap from efficiency to super efficiency [1]. Monczka and Morgan also focus on the importance of process integration in supply chain management [2]. The piece that seems to be missing from the literature is a comprehensive definition of the processes that constitute supply chain management. How can companies achieve supply chain integration if there is not a common understanding of the key business processes? It seems that in order to build links between supply chain members it is necessary for companies to implement a standard set of supply chain processes. Practitioners and educators need a common definition of supply chain management, and a shared understanding of the processes. We recommend the definition of supply chain management developed and used by The Global Supply Chain Forum: Supply Chain Management is the integration of key business processes from end user through original suppliers that provides products, services, and information that add value for customers and other stakeholders [3]. The Forum members identified eight key processes that need to be implemented within and across firms in the supply chain. To date, The Supply Chain Management Processes", "title": "" }, { "docid": "c98bdc262bbc53b5858bea7598f85b6c", "text": "Parallel corpora have driven great progress in the field of Text Simplification. 
However, most sentence alignment algorithms either offer a limited range of alignment types supported, or simply ignore valuable clues present in comparable documents. We address this problem by introducing a new set of flexible vicinity-driven paragraph and sentence alignment algorithms that support 1-N, N-1, N-N and long distance null alignments without the need for hard-to-replicate supervised models.", "title": "" }, { "docid": "9290ca06a925f8e52f445feb3f0a257a", "text": "Multi-task learning is a promising approach for efficiently and effectively addressing multiple mutually related recognition tasks. Many scene understanding tasks such as semantic segmentation and depth prediction can be framed as cross-modal encoding/decoding, and hence most of the prior work used multi-modal datasets for multi-task learning. However, the inter-modal commonalities, such as one across image, depth, and semantic labels, have not been fully exploited. We propose multi-modal encoder-decoder networks to harness the multi-modal nature of multi-task scene recognition. In addition to the shared latent representation among encoder-decoder pairs, our model also has shared skip connections from different encoders. By combining these two representation sharing mechanisms, the proposed method efficiently learns a shared feature representation among all modalities in the training data. Experiments using two public datasets show the advantage of our method over baseline methods that are based on encoder-decoder networks and multi-modal auto-encoders.", "title": "" }, { "docid": "5c444fcd85dd89280eee016fd1cbd175", "text": "Over the last years, object detection has become a more and more active field of research in robotics. An important problem in object detection is the need for sufficient labeled training data to learn good classifiers. In this paper we show how to significantly reduce the need for manually labeled training data by leveraging data sets available on the World Wide Web. Specifically, we show how to use objects from Google’s 3D Warehouse to train an object detection system for 3D point clouds collected by robots navigating through both urban and indoor environments. In order to deal with the different characteristics of the web data and the real robot data, we additionally use a small set of labeled point clouds and perform domain adaptation. Our experiments demonstrate that additional data taken from the 3D Warehouse along with our domain adaptation greatly improves the classification accuracy on real-world environments.", "title": "" }, { "docid": "2da9ad29e0b10a8dc8b01a8faf35bb1a", "text": "Face recognition is a challenging task which involves determining the identity of facial images. With the availability of a massive amount of labeled facial images gathered from the Internet, deep convolutional neural networks (DCNNs) have achieved great success in face recognition tasks. Those images are gathered from unconstrained environments, which contain people with different ethnicity, age, gender and so on. However, in the actual application scenario, the target face database may be gathered under different conditions compared with the source training dataset, e.g. different ethnicity, different age distribution, disparate shooting environment. These factors increase the domain discrepancy between the source training database and the target application database, which makes the learnt model degenerate in the target database. 
Meanwhile, for the target database where labeled data are lacking or unavailable, directly using target data to fine-tune the pre-learnt model becomes intractable and impractical. In this paper, we adopt unsupervised transfer learning methods to address this issue. To alleviate the discrepancy between the source and target face databases and ensure the generalization ability of the model, we constrain the maximum mean discrepancy (MMD) between the source database and the target database and utilize the massive amount of labeled facial images of the source database to train the deep neural network at the same time. We evaluate our method on two face recognition benchmarks and significantly enhance the performance without utilizing the target label.", "title": "" } ]
scidocsrr
82d15811583e63a67c7ba60179cfd8bb
Marble MLFQ: An educational visualization tool for the multilevel feedback queue algorithm
[ { "docid": "b43118e150870aab96af1a7b32515202", "text": "Algorithm visualization (AV) technology graphically illustrates how algorithms work. Despite the intuitive appeal of the technology, it has failed to catch on in mainstream computer science education. Some have attributed this failure to the mixed results of experimental studies designed to substantiate AV technology’s educational effectiveness. However, while several integrative reviews of AV technology have appeared, none has focused specifically on the software’s effectiveness by analyzing this body of experimental studies as a whole. In order to better understand the effectiveness of AV technology, we present a systematic metastudy of 24 experimental studies. We pursue two separate analyses: an analysis of independent variables, in which we tie each study to a particular guiding learning theory in an attempt to determine which guiding theory has had the most predictive success; and an analysis of dependent variables, which enables us to determine which measurement techniques have been most sensitive to the learning benefits of AV technology. Our most significant finding is that how students use AV technology has a greater impact on effectiveness than what AV technology shows them. Based on our findings, we formulate an agenda for future research into AV effectiveness. A META-STUDY OF ALGORITHM VISUALIZATION EFFECTIVENESS 3", "title": "" } ]
[ { "docid": "d4ac0d6890cc89e2525b9537376cce39", "text": "Unsupervised over-segmentation of an image into regions of perceptually similar pixels, known as super pixels, is a widely used preprocessing step in segmentation algorithms. Super pixel methods reduce the number of regions that must be considered later by more computationally expensive algorithms, with a minimal loss of information. Nevertheless, as some information is inevitably lost, it is vital that super pixels not cross object boundaries, as such errors will propagate through later steps. Existing methods make use of projected color or depth information, but do not consider three dimensional geometric relationships between observed data points which can be used to prevent super pixels from crossing regions of empty space. We propose a novel over-segmentation algorithm which uses voxel relationships to produce over-segmentations which are fully consistent with the spatial geometry of the scene in three dimensional, rather than projective, space. Enforcing the constraint that segmented regions must have spatial connectivity prevents label flow across semantic object boundaries which might otherwise be violated. Additionally, as the algorithm works directly in 3D space, observations from several calibrated RGB+D cameras can be segmented jointly. Experiments on a large data set of human annotated RGB+D images demonstrate a significant reduction in occurrence of clusters crossing object boundaries, while maintaining speeds comparable to state-of-the-art 2D methods.", "title": "" }, { "docid": "8344bc2e3165bd2a3a426d3c3699257f", "text": "We present a methodology for designing and implementing interactive intelligences. The Constructionist Design Methodology (CDM) – so called because it advocates modular building blocks and incorporation of prior work – addresses factors that we see as key to future advances in A.I., including interdisciplinary collaboration support, coordination of teams and large-scale systems integration. We test the methodology by building an interactive multi-functional system with a real-time perception-action loop. The system, whose construction relied entirely on the methodology, consists of an embodied virtual agent that can perceive both real and virtual objects in an augmented-reality room and interact with a user through coordinated gestures and speech. Wireless tracking technologies give the agent awareness of the environment and the user’s speech and communicative acts. User and agent can communicate about things in the environment, their placement and function, as well as more abstract topics such as current news, through situated multimodal dialog. The results demonstrate CDM’s strength in simplifying the modeling of complex, multi-functional systems requiring architectural experimentation and exploration of unclear sub-system boundaries, undefined variables, and tangled data flow and control hierarchies. Introduction The creation of embodied humanoids and broad A.I. systems requires integration of a large number of functionalities that must be carefully coordinated to achieve coherent system behavior. We are working on formalizing a methodology that can help in this process. The architectural foundation we have chosen for the approach is based on the concept of a network of interacting modules, communicating via messages. To test the design methodology we chose a system with a human user that interacts in real-time with a simulated human, in an augmented-reality environment. 
In this paper we present the design methodology and describe the system that we built to test it. Newell [1992] urged for the search of unified theories of cognition, and recent work in A.I. has increasingly focused on integration of multiple systems (cf. [Simmons et al. 2003, McCarthy et al. 2002, Bischoff et al. 1999]). (While Newell’s architecture Soar is based on a small set of general principles, intended to explain a wide range of cognitive phenomena, Newell makes it very clear in his book [Newell 1992] that he does not consider Soar to be the unified theory of cognition. We read his call for unification not in the narrow sense to mean the particular premises he chose for Soar, but rather in the more broad sense to refer to the general breadth of cognitive models.) Unified theories necessarily mean integration of many functionalities, but our prior experience in building systems that integrate multiple features from artificial intelligence and computer graphics [Bryson & Thórisson 2000, Lucente 2000, Thórisson 1999] has made it very clear that such integration can be a challenge, even for a team of experienced developers. In addition to basic technical issues – connecting everything together can be prohibitive in terms of time – it can be difficult to get people with different backgrounds, such as computer graphics, hardware, and artificial intelligence, to communicate effectively. Coordinating such an effort can thus be a management task of a tall order; keeping all parties synchronized takes skill and time. On top of this comes the challenge of deciding the scope of the system: What seems simple to a computer graphics expert may in fact be a long-standing dream of the A.I. person, and vice versa. Several factors motivate our work. First, a much-needed move towards building on prior work in A.I., to promote incremental accumulation of knowledge in creating intelligent systems, is long overdue. The relatively small group who is working on broad models of mind, bridging across disciplines, needs better ways to share results and work together, and to work with others outside their field. To this end our principles foster re-usable software components, through a common middleware specification, and mechanisms for defining interfaces between components. Second, by focusing on the re-use of existing work we are able to support the construction of more powerful systems than otherwise possible, speeding up the path towards useful, deployable systems. Third, we believe that to study mental mechanisms they need to be embedded in a larger cognitive model with significant breadth, to contextualize their operation and enable their testing under boundary conditions. This calls for an increased focus on supporting large-scale integration and experimentation. Fourth, by bridging across multiple functionalities in a single, unified system, researchers’ familiarity and breadth of experience with the various models of thought to date – as well as new ones – increases. This is important – as are in fact all of the above points – when the goal is to develop unified theories of cognition. Inspired to a degree by the classic LEGO bricks, our methodology – which we call a Constructionist Approach to A.I. – puts modularity at its center: Functionalities of the system are broken into individual software modules, which are typically larger than software classes (i.e.
objects and methods) in object-oriented programming, but smaller than the typical enterprise application. The role of each module is determined in part by specifying the message types and information content that needs to flow between the various functional parts of the system. Using this functional outline we then define and develop, or select, components for perception, knowledge representation, planning, animation, and other desired functionalities. Behind this work lies the conjecture that the mind can be modeled through the adequate combination of interacting, functional machines (modules). Of course, this is still debated in the research community and not all researchers are convinced of its merits. However, this claim is in its essence simply a combination of two less radical ones. First, that a divide-and-conquer methodology will be fruitful in studying the mind as a system. Since practically all scientific results since the Greek philosophers are based on this, it is hard to argue against it. In contrast to the search for unified theories in physics, we see the search for unified theories of cognition in the same way as articulated in Minsky’s [1986] theory, that the mind is a multitude of interacting components, and his (perhaps whimsical but fundamental) claim that the brain is a hack (personal communication, 1994). In other words, we expect a working model of the mind to incorporate, and coherently address, what at first seems a tangle of control hierarchies and data paths. Which relates to another important theoretical stance: The need to model more than a single or a handful of the mind’s mechanisms in isolation in order to understand the working mind. In a system of many modules with rich interaction, only a model incorporating a rich spectrum of (animal or human) mental functioning will give us a correct picture of the broad principles underlying intelligence. Figure 1: Our embodied agent Mirage is situated in the lab. Here we see how he appears to the user through the head-mounted glasses. (Image has been enhanced for clarity.) There is essentially nothing in the Constructionist approach to A.I. that lends it more naturally to behavior-based A.I. [c.f. Brooks 1991] or “classical” A.I. – its principles sit beside both. In fact, since CDM is intended to address the integration problem of very broad cognitive systems, it must be able to encompass all variants and approaches to date. We think it unlikely that any of the principles we present will be found objectionable, or even completely novel for that matter, by a seasoned software engineer. But these principles are custom-tailored to guide the construction of large cognitive systems, and we hope it will be used, extended and improved by many others over time. To test the power of a new methodology, a novel problem is preferred over one that has a known solution. The system we chose to develop presented us with a unique scope and unsolved integration issues: An augmented reality setting inhabited by an embodied virtual character; the character would be visible via a see-through stereoscopic display that the user wears, and would help them navigate the real-world environment. The character, called Mirage, should appear as a transparent, ghost-like
stereoscopic 3-D graphic superimposed on the user’s real world view (Figure 1). This system served as a test-bed for our methodology; it is presented in sufficient detail here to demonstrate the application of the methodology and to show its modular philosophy, which it mirrors closely.", "title": "" }, { "docid": "697580dda38c9847e9ad7c6a14ad6cd0", "text": "Background: This paper describes an analysis that was conducted on a newly collected repository with 92 versions of 38 proprietary, open-source and academic projects. A preliminary study performed before showed the need for a further in-depth analysis in order to identify project clusters.\n Aims: The goal of this research is to perform clustering on software projects in order to identify groups of software projects with similar characteristics from the defect prediction point of view. One defect prediction model should work well for all projects that belong to such a group. The existence of those groups was investigated with statistical tests and by comparing the mean value of prediction efficiency.\n Method: Hierarchical and k-means clustering, as well as Kohonen's neural network were used to find groups of similar projects. The obtained clusters were investigated with the discriminant analysis. For each of the identified groups a statistical analysis has been conducted in order to distinguish whether this group really exists. Two defect prediction models were created for each of the identified groups. The first one was based on the projects that belong to a given group, and the second one - on all the projects. Then, both models were applied to all versions of projects from the investigated group. If the predictions from the model based on projects that belong to the identified group are significantly better than the all-projects model (the mean values were compared and statistical tests were used), we conclude that the group really exists.\n Results: Six different clusters were identified and the existence of two of them was statistically proven: 1) cluster proprietary B -- T=19, p=0.035, r=0.40; 2) cluster proprietary/open - t(17)=3.18, p=0.05, r=0.59. The obtained effect sizes (r) represent large effects according to Cohen's benchmark, which is a substantial finding.\n Conclusions: The two identified clusters were described and compared with results obtained by other researchers. The results of this work make a next step towards defining formal methods of reusing defect prediction models by identifying groups of projects within which the same defect prediction model may be used. Furthermore, a method of clustering was suggested and applied.", "title": "" }, { "docid": "39ad1394bc419f70c830b1ad9c90664f", "text": "Building on a recent work of Harrison, Armstrong, Harrison, Iverson and Lange which suggested that Wechsler Adult Intelligence Scale–Fourth Edition (WAIS-IV) scores might systematically overestimate the severity of intellectual impairments if Canadian norms are used, the present study examined differences between Canadian and American derived WAIS-IV scores from 861 postsecondary students attending school across the province of Ontario, Canada. This broader data set confirmed a trend whereby individuals’ raw scores systematically produced lower standardized scores through the use of Canadian as opposed to American norms. The differences do not appear to be due to cultural, educational, or population differences, as participants acted as their own controls. 
The ramifications of utilizing the different norms were examined with regard to psychoeducational assessments and educational placement decisions particularly with respect to the diagnoses of Learning Disability and Intellectual Disability.", "title": "" }, { "docid": "1a6a6c6721073e3664c6a0a2fdd20cfc", "text": "This paper presents a new control strategy for a doubly fed induction generator (DFIG) under unbalanced network voltage conditions. Coordinated control of the grid- and rotor-side converters (GSC and RSC, respectively) during voltage unbalance is proposed. Under an unbalanced supply voltage, the RSC is controlled to eliminate the torque pulsation at double supply frequency. The oscillation of the stator output active power is then compensated by the active power output from the GSC, to ensure constant active power output from the overall DFIG generation system. In order to provide precise control of the positive- and negative-sequence currents of the GSC and RSC, a current control scheme consisting of a proportional integral (PI) controller and a resonant (R) compensator is presented. The PI plus R current regulator is implemented in the positive synchronous reference frame without the need to decompose the positive- and negative-sequence components. Simulations on a 1.5-MW DFIG system and experimental tests on a 1.5-kW prototype validate the proposed strategy. Precise control of both positive- and negative-sequence currents and simultaneous elimination of torque and total active power oscillations have been achieved.", "title": "" }, { "docid": "b27224825bb28b9b8d0eea37f8900d42", "text": "The use of Convolutional Neural Networks (CNN) in natural image classification systems has produced very impressive results. Combined with the inherent nature of medical images that make them ideal for deep-learning, further application of such systems to medical image classification holds much promise. However, the usefulness and potential impact of such a system can be completely negated if it does not reach a target accuracy. In this paper, we present a study on determining the optimum size of the training data set necessary to achieve high classification accuracy with low variance in medical image classification systems. The CNN was applied to classify axial Computed Tomography (CT) images into six anatomical classes. We trained the CNN using six different sizes of training data set (5, 10, 20, 50, 100, and 200) and then tested the resulting system with a total of 6000 CT images. All images were acquired from the Massachusetts General Hospital (MGH) Picture Archiving and Communication System (PACS). Using this data, we employ the learning curve approach to predict classification accuracy at a given training sample size. Our research will present a general methodology for determining the training data set size necessary to achieve a certain target classification accuracy that can be easily applied to other problems within such systems.", "title": "" }, { "docid": "703acc0a9c73c7c2b3ca68c635fec82f", "text": "Purpose – Using 12 case studies, the purpose of this paper is to investigate the use of business analysis techniques in BPR. Some techniques are used more than others depending on the fit between the technique and the problem. Other techniques are preferred due to their versatility, ease of use, and flexibility. Some are difficult to use requiring skills that analysts do not possess. 
Problem analysis, and business process analysis and activity elimination techniques are preferred for process improvement projects, and technology analysis for technology problems. Root cause analysis (RCA) and activitybased costing (ABC) are seldom used. RCA requires specific skills and ABC is only applicable for discrete business activities. Design/methodology/approach – This is an exploratory case study analysis. The author analyzed 12 existing business reengineering (BR) case studies from the MIS literature. Cases include, but not limited to IBM Credit Union, Chase Manhattan Bank, Honeywell Corporation, and Cigna. Findings – The author identified eight business analysis techniques used in business process reengineering. The author found that some techniques are preferred over others. Some possible reasons are related to the fit between the analysis technique and the problem situation, the ease of useof-use of the chosen technique, and the versatility of the technique. Some BR projects require the use of several techniques, while others require just one. It appears that the problem complexity is correlated with the number of techniques required or used. Research limitations/implications – Small sample sizes are often subject to criticism about replication and generalizability of results. However, this research is a good starting point for expanding the sample to allowmore generalizable results. Future research may investigate the deeper connections between reengineering and analysis techniques and the risks of using various techniques to diagnose problems in multiple dimensions. An investigation of fit between problems and techniques could be explored. Practical implications – The author have a better idea which techniques are used more, which are more versatile, and which are difficult to use and why. Practitioners and academicians have a better understanding of the fit between technique and problem and how best to align them. It guides the selection of choosing a technique, and exposes potential problems. For example RCA requires knowledge of fishbone diagram construction and interpreting results. Unfamiliarity with the technique results in disaster and increases project risk. Understanding the issues helps to reduce project risk and increase project success, benefiting project teams, practitioners, and organizations. Originality/value –Many aspects of BR have been studied but the contribution of this research is to investigate relationships between business analysis techniques and business areas, referred to as BR dimensions. The author try to find answers to the following questions: first, are business analysis techniques used for BR project, and is there evidence that BR affects one or more areas of the business? Second, are BR projects limited to a single dimension? Third, are some techniques better suited for diagnosing problems in specific dimensions and are some techniques more difficult to use than others, if so why?; are some techniques used more than others, if so why?", "title": "" }, { "docid": "f18dc5d572f60da7c85d50e6a42de2c9", "text": "Recent developments in remote sensing are offering a promising opportunity to rethink conventional control strategies of wind turbines. With technologies such as LIDAR, the information about the incoming wind field - the main disturbance to the system - can be made available ahead of time. Feedforward control can be easily combined with traditional collective pitch feedback controllers and has been successfully tested on real systems. 
Nonlinear model predictive controllers adjusting both collective pitch and generator torque can further reduce structural loads in simulations but have higher computational times compared to feedforward or linear model predictive controller. This paper compares a linear and a commercial nonlinear model predictive controller to a baseline controller. On the one hand simulations show that both controller have significant improvements if used along with the preview of the rotor effective wind speed. On the other hand the nonlinear model predictive controller can achieve better results compared to the linear model close to the rated wind speed.", "title": "" }, { "docid": "ed9b027bafedfa9305d11dca49ecc930", "text": "This paper announces and discusses the experimental results from the Noisy Iris Challenge Evaluation (NICE), an iris biometric evaluation initiative that received worldwide participation and whose main innovation is the use of heavily degraded data acquired in the visible wavelength and uncontrolled setups, with subjects moving and at widely varying distances. The NICE contest included two separate phases: 1) the NICE.I evaluated iris segmentation and noise detection techniques and 2) the NICE:II evaluated encoding and matching strategies for biometric signatures. Further, we give the performance values observed when fusing recognition methods at the score level, which was observed to outperform any isolated recognition strategy. These results provide an objective estimate of the potential of such recognition systems and should be regarded as reference values for further improvements of this technology, which-if successful-may significantly broaden the applicability of iris biometric systems to domains where the subjects cannot be expected to cooperate.", "title": "" }, { "docid": "577f373477f6b8a8bee6a694dab6d3c9", "text": "The YouTube-8M video classification challenge requires teams to classify 0.7 million videos into one or more of 4,716 classes. In this Kaggle competition, we placed in the top 3% out of 650 participants using released video and audio features . Beyond that, we extend the original competition by including text information in the classification, making this a truly multi-modal approach with vision, audio and text. The newly introduced text data is termed as YouTube-8M-Text. We present a classification framework for the joint use of text, visual and audio features, and conduct an extensive set of experiments to quantify the benefit that this additional mode brings. The inclusion of text yields state-of-the-art results, e.g. 86.7% GAP on the YouTube-8M-Text validation dataset.", "title": "" }, { "docid": "bd335c2fd0f866a8af83eab1458c0a4a", "text": "Agile methodologies, in particular the framework SCRUM, are popular in software development companies. Most of the time, however, it is not feasible for these companies to apply every characteristic of the framework. This paper presents a hybrid application of verbal decision analysis methodologies in order to select some of the most relevant SCRUM approaches to be applied by a company. A questionnaire was developed and a group of experienced ScrumMasters was selected to answer it, aiming at characterizing every SCRUM approach into criteria values. The hybrid application consists in dividing the SCRUM practices into groups (stage supported by the ORCLASS method application), using the ORCLASSWEB tool. 
Then, the rank of the preferred practices will be generated by the application of the ZAPROS-LM method.", "title": "" }, { "docid": "e090bb879e35dbabc5b3c77c98cd6832", "text": "Immunity of analog circuit blocks is becoming a major design risk. This paper presents an automated methodology to simulate the susceptibility of a circuit during the design phase. More specifically, we propose a CAD tool which determines the fail/pass criteria of a signal under direct power injection (DPI). This contribution describes the function of the tool which is validated by a LDO regulator.", "title": "" }, { "docid": "db8cbcc8a7d233d404a18a54cb9fedae", "text": "Edge preserving filters preserve the edges and its information while blurring an image. In other words they are used to smooth an image, while reducing the edge blurring effects across the edge like halos, phantom etc. They are nonlinear in nature. Examples are bilateral filter, anisotropic diffusion filter, guided filter, trilateral filter etc. Hence these family of filters are very useful in reducing the noise in an image making it very demanding in computer vision and computational photography applications like denoising, video abstraction, demosaicing, optical-flow estimation, stereo matching, tone mapping, style transfer, relighting etc. This paper provides a concrete introduction to edge preserving filters starting from the heat diffusion equation in olden to recent eras, an overview of its numerous applications, as well as mathematical analysis, various efficient and optimized ways of implementation and their interrelationships, keeping focus on preserving the boundaries, spikes and canyons in presence of noise. Furthermore it provides a realistic notion for efficient implementation with a research scope for hardware realization for further acceleration.", "title": "" }, { "docid": "b8c0e4b41334790155203533105a4d0d", "text": "In our previous work, we have proposed the extended Karnaugh map representation (EKMR) scheme for multidimensional array representation. In this paper, we propose two data compression schemes, EKMR Compressed Row/ Column Storage (ECRS/ECCS), for multidimensional sparse arrays based on the EKMR scheme. To evaluate the proposed schemes, we compare them to the CRS/CCS schemes. Both theoretical analysis and experimental tests were conducted. In the theoretical analysis, we analyze the CRS/CCS and the ECRS/ ECCS schemes in terms of the time complexity, the space complexity, and the range of their usability for practical applications. In experimental tests, we compare the compressing time of sparse arrays and the execution time of matrixmatrix addition and matrix-matrix multiplication based on the CRS/CCS and the ECRS/ECCS schemes. The theoretical analysis and experimental results show that the ECRS/ECCS schemes are superior to the CRS/CCS schemes for all the evaluated criteria, except the space complexity in some cases.", "title": "" }, { "docid": "f6193fa2ac2ea17c7710241a42d34a33", "text": "BACKGROUND\nThe most common microcytic and hypochromic anemias are iron deficiency anemia and thalassemia trait. Several indices to discriminate iron deficiency anemia from thalassemia trait have been proposed as simple diagnostic tools. However, some of the best discriminative indices use parameters in the formulas that are only measured in modern counters and are not always available in small laboratories. 
The development of an index with good diagnostic accuracy based only on parameters derived from the blood cell count obtained using simple counters would be useful in the clinical routine. Thus, the aim of this study was to develop and validate a discriminative index to differentiate iron deficiency anemia from thalassemia trait.\n\n\nMETHODS\nTo develop and to validate the new formula, blood count data from 106 (thalassemia trait: 23 and iron deficiency: 83) and 185 patients (thalassemia trait: 30 and iron deficiency: 155) were used, respectively. Iron deficiency, β-thalassemia trait and α-thalassemia trait were confirmed by gold standard tests (low serum ferritin for iron deficiency anemia, HbA2>3.5% for β-thalassemia trait and using molecular biology for the α-thalassemia trait).\n\n\nRESULTS\nThe sensitivity, specificity, efficiency, Youden's Index, area under receiver operating characteristic curve and Kappa coefficient of the new formula, called the Matos & Carvalho Index were 99.3%, 76.7%, 95.7%, 76.0, 0.95 and 0.83, respectively.\n\n\nCONCLUSION\nThe performance of this index was excellent with the advantage of being solely dependent on the mean corpuscular hemoglobin concentration and red blood cell count obtained from simple automatic counters and thus may be of great value in underdeveloped and developing countries.", "title": "" }, { "docid": "a53935e12b0a18d6555315149fdb4563", "text": "With the prevalence of mobile devices such as smartphones and tablets, the ways people access to the Internet have changed enormously. In addition to the information that can be recorded by traditional Web-based e-commerce like frequent online shopping stores and browsing histories, mobile devices are capable of tracking sophisticated browsing behavior. The aim of this study is to utilize users' browsing behavior of reading hotel reviews on mobile devices and subsequently apply text-mining techniques to construct user interest profiles to make personalized hotel recommendations. Specifically, we design and implement an app where the user can search hotels and browse hotel reviews, and every gesture the user has performed on the touch screen when reading the hotel reviews is recorded. We then identify the paragraphs of hotel reviews that a user has shown interests based on the gestures the user has performed. Text mining techniques are applied to construct the interest profile of the user according to the review content the user has seriously read. We collect more than 5,000 reviews of hotels in Taipei, the largest metropolitan area of Taiwan, and recruit 18 users to participate in the experiment. Experimental results demonstrate that the recommendations made by our system better match the user's hotel selections than previous approaches.", "title": "" }, { "docid": "316e4984bf6eef57a7f823b5303164f1", "text": "Recent technical and infrastructural developments posit flipped (or inverted) classroom approaches ripe for exploration. Flipped classroom approaches have students use technology to access the lecture and other instructional resources outside the classroom in order to engage them in active learning during in-class time. Scholars and educators have reported a variety of outcomes of a flipped approach to instruction; however, the lack of a summary from these empirical studies prevents stakeholders from having a clear view of the benefits and challenges of this style of instruction. 
The purpose of this article is to provide a review of the flipped classroom approach in order to summarize the findings, to guide future studies, and to reflect the major achievements in the area of Computer Science (CS) education. 32 peer-reviewed articles were collected from a systematic literature search and analyzed based on a categorization of their main elements. The results of this survey show the direction of flipped classroom research during recent years and summarize the benefits and challenges of adopting a flipped approach in the classroom. Suggestions for future research include: describing in-detail the flipped approach; performing controlled experiments; and triangulating data from diverse sources. These future research efforts will reveal which aspects of a flipped classroom work better and under which circumstances and student groups. The findings will ultimately allow us to form best practices and a unified framework for guiding/assisting educators who want to adopt this teaching style.", "title": "" }, { "docid": "4646848b959a356bb4d7c0ef14d53c2c", "text": "Consumerization of IT (CoIT) is a key trend affecting society at large, including organizations of all kinds. A consensus about the defining aspects of CoIT has not yet been reached. Some refer to CoIT as employees bringing their own devices and technologies to work, while others highlight different aspects. While the debate about the nature and consequences of CoIT is still ongoing, many definitions have already been proposed. In this paper, we review these definitions and what is known about CoIT thus far. To guide future empirical research in this emerging area, we also review several established theories that have not yet been applied to CoIT but in our opinion have the potential to shed a deeper understanding on CoIT and its consequences. We discuss which elements of the reviewed theories are particularly relevant for understanding CoIT and thereby provide targeted guidance for future empirical research employing these theories. Overall, our paper may provide a useful starting point for addressing the lack of theorization in the emerging CoIT literature stream and stimulate discussion about theorizing CoIT.", "title": "" }, { "docid": "50af85ca1f0c642cd74e713182f5ef58", "text": "Commentators suggest that between 30 and 60% of large US firms have adopted the Balanced Scorecard, first described by Bob Kaplan and David Norton in their seminal Harvard Business Review paper of 1992 (Kaplan and Norton, 1992; Neely and Marr, 2003). Empirical evidence that explores the performance impact of the balanced scorecard, however, is extremely rare and much that is available is anecdotal at best. This paper reports a study that set out to explore the performance impact of the balanced scorecard by employing a quasiexperimental design. Up to three years worth of financial data were collected from two sister divisions of an electrical wholesale chain based in the UK, one of which had implemented the balanced scorecard and one of which had not. The relative performance improvements of matched pairs of branches were compared to establish what, if any, performance differentials existed between the branches that had implemented the balanced scorecard and those that had not. 
The key findings of the study include: (i) when analyzing just the data from Electrical – the business that implemented the balanced scorecard – it appears that implementation of the balanced scorecard might have had a positive impact on sales, gross profit and net profit; but (ii) when comparing Electrical’s performance with its sister company, these findings can be questioned. Clearly further work on this important topic is required in similar settings where natural experiments occur.", "title": "" }, { "docid": "e2f2961ab8c527914c3d23f8aa03e4bf", "text": "Pedestrian detection based on the combination of convolutional neural network (CNN) and traditional handcrafted features (i.e., HOG+LUV) has achieved great success. In general, HOG+LUV are used to generate the candidate proposals and then CNN classifies these proposals. Despite its success, there is still room for improvement. For example, CNN classifies these proposals by the fully connected layer features, while proposal scores and the features in the inner-layers of CNN are ignored. In this paper, we propose a unifying framework called multi-layer channel features (MCF) to overcome the drawback. It first integrates HOG+LUV with each layer of CNN into multi-layer image channels. Based on the multi-layer image channels, a multi-stage cascade AdaBoost is then learned. The weak classifiers in each stage of the multi-stage cascade are learned from the image channels of the corresponding layer. Experiments on Caltech data set, INRIA data set, ETH data set, TUD-Brussels data set, and KITTI data set are conducted. With more abundant features, an MCF achieves the state of the art on Caltech pedestrian data set (i.e., 10.40% miss rate). Using new and accurate annotations, an MCF achieves 7.98% miss rate. As many non-pedestrian detection windows can be quickly rejected by the first few stages, it accelerates detection speed by 1.43 times. By eliminating the highly overlapped detection windows with lower scores after the first stage, it is 4.07 times faster with negligible performance loss.", "title": "" } ]
scidocsrr
e27668bcb0ad5e7e56f08b9ec04f2b97
Cauchy Graph Embedding
[ { "docid": "6228f059be27fa5f909f58fb60b2f063", "text": "We propose a unified manifold learning framework for semi-supervised and unsupervised dimension reduction by employing a simple but effective linear regression function to map the new data points. For semi-supervised dimension reduction, we aim to find the optimal prediction labels F for all the training samples X, the linear regression function h(X) and the regression residue F0 = F - h(X) simultaneously. Our new objective function integrates two terms related to label fitness and manifold smoothness as well as a flexible penalty term defined on the residue F0. Our Semi-Supervised learning framework, referred to as flexible manifold embedding (FME), can effectively utilize label information from labeled data as well as a manifold structure from both labeled and unlabeled data. By modeling the mismatch between h(X) and F, we show that FME relaxes the hard linear constraint F = h(X) in manifold regularization (MR), making it better cope with the data sampled from a nonlinear manifold. In addition, we propose a simplified version (referred to as FME/U) for unsupervised dimension reduction. We also show that our proposed framework provides a unified view to explain and understand many semi-supervised, supervised and unsupervised dimension reduction techniques. Comprehensive experiments on several benchmark databases demonstrate the significant improvement over existing dimension reduction algorithms.", "title": "" }, { "docid": "da168a94f6642ee92454f2ea5380c7f3", "text": "One of the central problems in machine learning and pattern recognition is to develop appropriate representations for complex data. We consider the problem of constructing a representation for data lying on a low-dimensional manifold embedded in a high-dimensional space. Drawing on the correspondence between the graph Laplacian, the Laplace Beltrami operator on the manifold, and the connections to the heat equation, we propose a geometrically motivated algorithm for representing the high-dimensional data. The algorithm provides a computationally efficient approach to nonlinear dimensionality reduction that has locality-preserving properties and a natural connection to clustering. Some potential applications and illustrative examples are discussed.", "title": "" } ]
[ { "docid": "c5822fd932e29193a11e749a2d10df0b", "text": "Online deception is disrupting our daily life, organizational process, and even national security. Existing approaches to online deception detection follow a traditional paradigm by using a set of cues as antecedents for deception detection, which may be hindered by ineffective cue identification. Motivated by the strength of statistical language models (SLMs) in capturing the dependency of words in text without explicit feature extraction, we developed SLMs to detect online deception. We also addressed the data sparsity problem in building SLMs in general and in deception detection in specific using smoothing and vocabulary pruning techniques. The developed SLMs were evaluated empirically with diverse datasets. The results showed that the proposed SLM approach to deception detection outperformed a state-of-the-art text categorization method as well as traditional feature-based methods.", "title": "" }, { "docid": "d09e4f8c58f9ff0760addfe1e313d5f6", "text": "Currently, color image encryption is important to ensure its confidentiality during its transmission on insecure networks or its storage. The fact that chaotic properties are related with cryptography properties in confusion, diffusion, pseudorandom, etc., researchers around the world have presented several image (gray and color) encryption algorithms based on chaos, but almost all them with serious security problems have been broken with the powerful chosen/known plain image attack. In this work, we present a color image encryption algorithm based on total plain image characteristics (to resist a chosen/known plain image attack), and 1D logistic map with optimized distribution (for fast encryption process) based on Murillo-Escobar's algorithm (Murillo-Escobar et al. (2014) [38]). The security analysis confirms that the RGB image encryption is fast and secure against several known attacks; therefore, it can be implemented in real-time applications where a high security is required. & 2014 Published by Elsevier B.V.", "title": "" }, { "docid": "0f555a4c2415b6a5995905f1594871d4", "text": "With the ultimate intent of improving the quality of life, identification of human's affective states on the collected electroencephalogram (EEG) has attracted lots of attention recently. In this domain, the existing methods usually use only a few labeled samples to classify affective states consisting of over thousands of features. Therefore, important information may not be well utilized and performance is lowered due to the randomness caused by the small sample problem. However, this issue has rarely been discussed in the previous studies. Besides, many EEG channels are irrelevant to the specific learning tasks, which introduce lots of noise to the systems and further lower the performance in the recognition of affective states. To address these two challenges, in this paper, we propose a novel Deep Belief Networks (DBN) based model for affective state recognition from EEG signals. Specifically, signals from each EEG channel are firstly processed with a DBN for effectively extracting critical information from the over thousands of features. The extracted low dimensional characteristics are then utilized in the learning to avoid the small sample problem. For the noisy channel problem, a novel stimulus-response model is proposed. The optimal channel set is obtained according to the response rate of each channel. 
Finally, a supervised Restricted Boltzmann Machine (RBM) is applied on the combined low dimensional characteristics from the optimal EEG channels. To evaluate the performance of the proposed Supervised DBN based Affective State Recognition (SDA) model, we implement it on the Deap Dataset and compare it with five baselines. Extensive experimental results show that the proposed algorithm can successfully handle the aforementioned two challenges and significantly outperform the baselines by 11.5% to 24.4%, which validates the effectiveness of the proposed algorithm in the task of affective state recognition.", "title": "" }, { "docid": "8fd049da24568dea2227483415532f9b", "text": "The notion of “semiotic scaffolding”, introduced into the semiotic discussions by Jesper Hoffmeyer in December of 2000, is proving to be one of the single most important concepts for the development of semiotics as we seek to understand the full extent of semiosis and the dependence of evolution, particularly in the living world, thereon. I say “particularly in the living world”, because there has been from the first a stubborn resistance among semioticians to seeing how a semiosis prior to and/or independent of living beings is possible. Yet the universe began in a state not only lifeless but incapable of supporting life, and somehow “moved” from there in the direction of being able to sustain life and finally of actually doing so. Wherever dyadic interactions result indirectly in a new condition that either moves the universe closer to being able to sustain life, or moves life itself in the direction not merely of sustaining itself but opening the way to new forms of life, we encounter a “thirdness” in nature of exactly the sort that semiosic triadicity alone can explain. This is the process, both within and without the living world, that requires scaffolding. This essay argues that a fuller understanding of this concept shows why “semiosis” says clearly what “evolution” says obscurely.", "title": "" }, { "docid": "6f9186944cdeab30da7a530a942a5b3d", "text": "In this work, we perform a comparative analysis of the impact of substrate technologies on the performance of 28 GHz antennas for 5G applications. For this purpose, we model, simulate, analyze and compare 2×2 patch antenna arrays on five substrate technologies typically used for manufacturing integrated antennas. The impact of these substrates on the impedance bandwidth, efficiency and gain of the antennas is quantified. Finally, the antennas are fabricated and measured. Excellent correlation is obtained between measurement and simulation results.", "title": "" }, { "docid": "b8dcf30712528af93cb43c5960435464", "text": "The first clinical description of Parkinson's disease (PD) will embrace its two century anniversary in 2017. For the past 30 years, mitochondrial dysfunction has been hypothesized to play a central role in the pathobiology of this devastating neurodegenerative disease. The identifications of mutations in genes encoding PINK1 (PTEN-induced kinase 1) and Parkin (E3 ubiquitin ligase) in familial PD and their functional association with mitochondrial quality control provided further support to this hypothesis. Recent research focused mainly on their key involvement in the clearance of damaged mitochondria, a process known as mitophagy. It has become evident that there are many other aspects of this complex regulated, multifaceted pathway that provides neuroprotection. 
As such, numerous additional factors that impact PINK1/Parkin have already been identified including genes involved in other forms of PD. A great pathogenic overlap amongst different forms of familial, environmental and even sporadic disease is emerging that potentially converges at the level of mitochondrial quality control. Tremendous efforts now seek to further detail the roles and exploit PINK1 and Parkin, their upstream regulators and downstream signaling pathways for future translation. This review summarizes the latest findings on PINK1/Parkin-directed mitochondrial quality control, its integration and cross-talk with other disease factors and pathways as well as the implications for idiopathic PD. In addition, we highlight novel avenues for the development of biomarkers and disease-modifying therapies that are based on a detailed understanding of the PINK1/Parkin pathway.", "title": "" }, { "docid": "219a90eb2fd03cd6cc5d89fda740d409", "text": "The general problem of computing poste rior probabilities in Bayesian networks is NP hard Cooper However e cient algorithms are often possible for particular applications by exploiting problem struc tures It is well understood that the key to the materialization of such a possibil ity is to make use of conditional indepen dence and work with factorizations of joint probabilities rather than joint probabilities themselves Di erent exact approaches can be characterized in terms of their choices of factorizations We propose a new approach which adopts a straightforward way for fac torizing joint probabilities In comparison with the clique tree propagation approach our approach is very simple It allows the pruning of irrelevant variables it accommo dates changes to the knowledge base more easily it is easier to implement More importantly it can be adapted to utilize both intercausal independence and condi tional independence in one uniform frame work On the other hand clique tree prop agation is better in terms of facilitating pre computations", "title": "" }, { "docid": "579db3cec4e49d53090ee13f35385c35", "text": "In cloud computing environments, multiple tenants are often co-located on the same multi-processor system. Thus, preventing information leakage between tenants is crucial. While the hypervisor enforces software isolation, shared hardware, such as the CPU cache or memory bus, can leak sensitive information. For security reasons, shared memory between tenants is typically disabled. Furthermore, tenants often do not share a physical CPU. In this setting, cache attacks do not work and only a slow cross-CPU covert channel over the memory bus is known. In contrast, we demonstrate a high-speed covert channel as well as the first side-channel attack working across processors and without any shared memory. To build these attacks, we use the undocumented DRAM address mappings. We present two methods to reverse engineer the mapping of memory addresses to DRAM channels, ranks, and banks. One uses physical probing of the memory bus, the other runs entirely in software and is fully automated. Using this mapping, we introduce DRAMA attacks, a novel class of attacks that exploit the DRAM row buffer that is shared, even in multi-processor systems. Thus, our attacks work in the most restrictive environments. First, we build a covert channel with a capacity of up to 2 Mbps, which is three to four orders of magnitude faster than memory-bus-based channels. 
Second, we build a side-channel template attack that can automatically locate and monitor memory accesses. Third, we show how using the DRAM mappings improves existing attacks and in particular enables practical Rowhammer attacks on DDR4.", "title": "" }, { "docid": "5571389dcc25cbcd9c68517934adce1d", "text": "The polysaccharide-containing extracellular fractions (EFs) of the edible mushroom Pleurotus ostreatus have immunomodulating effects. Being aware of these therapeutic effects of mushroom extracts, we have investigated the synergistic relations between these extracts and BIAVAC and BIAROMVAC vaccines. These vaccines target the stimulation of the immune system in commercial poultry, which are extremely vulnerable in the first days of their lives. By administrating EF with polysaccharides from P. ostreatus to unvaccinated broilers we have noticed slow stimulation of maternal antibodies against infectious bursal disease (IBD) starting from four weeks post hatching. For the broilers vaccinated with BIAVAC and BIAROMVAC vaccines a low to almost complete lack of IBD maternal antibodies has been recorded. By adding 5% and 15% EF in the water intake, as compared to the reaction of the immune system in the previous experiment, the level of IBD antibodies was increased. This has led us to believe that by using this combination of BIAVAC and BIAROMVAC vaccine and EF from P. ostreatus we can obtain good results in stimulating the production of IBD antibodies in the period of the chicken first days of life, which are critical to broilers' survival. This can be rationalized by the newly proposed reactivity biological activity (ReBiAc) principles by examining the parabolic relationship between EF administration and recorded biological activity.", "title": "" }, { "docid": "a702269cd9fce037f2f74f895595d573", "text": "This paper tackles the reduction of redundant repeating generation that is often observed in RNN-based encoder-decoder models. Our basic idea is to jointly estimate the upper-bound frequency of each target vocabulary in the encoder and control the output words based on the estimation in the decoder. Our method shows significant improvement over a strong RNN-based encoder-decoder baseline and achieved its best results on an abstractive summarization benchmark.", "title": "" }, { "docid": "b789785d7e9cdde760af1d65faccfa60", "text": "The use of an expired product may cause harm to its designated target. If the product is for human consumption, e.g. medicine, the result can be fatal. While most people can check the expiration date easily before using the product, it is very difficult for a visually impaired or a totally blind person to do so independently. This paper therefore proposes a solution that helps the visually impaired to identify a product and subsequently `read' the expiration date on a product using a handheld Smartphone. While there are a few commercial barcode decoder and text recognition applications for the mobile phone, they require the user to point the phone to the correct location - which is extremely hard for the visually impaired. We thus focus our research on helping the blind user to locate the barcode and the expiration date on a product package. After that, existing barcode decoding and OCR algorithms can be utilized to obtain the required information. 
A field trial with several blindfolded/totally-blind participants is conducted and shows that the proposed solution is effective in guiding a visually impaired user towards the barcode and expiry information, although some issues remain with the reliability of the off-the-shelf decoding algorithms on low-resolution videos.", "title": "" }, { "docid": "599fb363d80fd1a7a6faaccbde3ecbb5", "text": "In this survey a new application paradigm, life and safety for critical operations and missions using wearable Wireless Body Area Networks (WBANs) technology, is introduced. This paradigm has a vast scope of applications, including disaster management, worker safety in harsh environments such as roadside and building work, mobile health monitoring, ambient assisted living and many more. It is often the case that during critical operations and in the targeted conditions, the existing infrastructure is either absent, damaged or overcrowded. In this context, it is envisioned that WBANs will enable the quick deployment of ad-hoc/on-the-fly communication networks to help save many lives and ensure people's safety. However, to understand the applications more deeply and their specific characteristics and requirements, this survey presents a comprehensive study on the application scenarios, their context and specific requirements. It explores details of the key enabling standards, existing state-of-the-art research studies, and projects to understand their limitations before realizing the aforementioned applications. Application-specific challenges and issues are discussed comprehensively from various perspectives and future research and development directions are highlighted as an inspiration for new innovative solutions. To conclude, this survey opens up a good opportunity for companies and research centers to investigate old but still new problems in the realm of wearable technologies, which are increasingly evolving and getting more and more attention recently.", "title": "" }, { "docid": "748996944ebd52a7d82c5ca19b90656b", "text": "The experiment was conducted with three biofloc treatments and one control in triplicate in 500 L capacity indoor tanks. Biofloc tanks, filled with 350 L of water, were fed with sugarcane molasses (BFTS), tapioca flour (BFTT), wheat flour (BFTW) and clean water as control without biofloc and allowed to stand for 30 days. The postlarvae of Litopenaeus vannamei (Boone, 1931) with an average body weight of 0.15 ± 0.02 g were stocked at the rate of 130 PL m−2 and cultured for a period of 60 days, fed with pelleted feed at the rate of 1.5% of biomass. The total suspended solids (TSS) level was maintained at around 500 mg L−1 in BFT tanks. The addition of carbohydrate significantly reduced the total ammonia-N (TAN), nitrite-N and nitrate-N in water and it significantly increased the total heterotrophic bacteria (THB) population in the biofloc treatments. There was a significant difference in the final average body weight (8.49 ± 0.09 g) in the wheat flour treatment (BFTW) compared with the other treatments and the control group of the shrimp. Survival of the shrimps was not affected by the treatments and ranged between 82.02% and 90.3%. The proximate and chemical composition of biofloc and proximate composition of the shrimp was significantly different between the biofloc treatments and control. Tintinids, ciliates, copepods, cyanobacteria and nematodes were identified in all the biofloc treatments, nematodes being the most dominant group of organisms in the biofloc.
It could be concluded that the use of wheat flour (BFTW) effectively enhanced the biofloc production and contributed towards better water quality which resulted in higher production of shrimp.", "title": "" }, { "docid": "f65e55d992bff2ce881aaf197a734adf", "text": "hypervisor as a nondeterministic sequential program  prove invariant properties of individual ϋobjects and compose them 14 Phase1 Startup Phase2 Intercept Phase3 Exception Proofs HW initiated concurrent execution Concurrent execution HW initiated sequential execution Sequential execution  Intro.  Motivating. Ex.  Impl.  Verif. Results  Perf.  Concl.  Architecture", "title": "" }, { "docid": "302a838f1a94596d37693363abcf1978", "text": "In this paper we present a method for organizing and indexing logo digital libraries like the ones of the patent and trademark offices. We propose an efficient queried-by-example retrieval system which is able to retrieve logos by similarity from large databases of logo images. Logos are compactly described by a variant of the shape context descriptor. These descriptors are then indexed by a locality-sensitive hashing data structure aiming to perform approximate k-NN search in high dimensional spaces in sub-linear time. The experiments demonstrate the effectiveness and efficiency of this system on realistic datasets as the Tobacco-800 logo database.", "title": "" }, { "docid": "6aee06316a24005ee2f8f4f1906e2692", "text": "Sir, The origin of vestibular papillomatosis (VP) is controversial. VP describes the condition of multiple papillae that may cover the entire surface of the vestibule (1). Our literature search for vestibular papillomatosis revealed 13 reports in gynaecological journals and only one in a dermatological journal. Furthermore, searching for vulvar squamous papillomatosis revealed 6 reports in gynaecological journals and again only one in a dermatological journal. We therefore conclude that it is worthwhile drawing the attention of dermatologists to this entity.", "title": "" }, { "docid": "a6acba54f34d1d101f4abb00f4fe4675", "text": "We study the potential flow of information in interaction networks, that is, networks in which the interactions between the nodes are being recorded. The central notion in our study is that of an information channel. An information channel is a sequence of interactions between nodes forming a path in the network which respects the time order. As such, an information channel represents a potential way information could have flown in the interaction network. We propose algorithms to estimate information channels of limited time span from every node to other nodes in the network. We present one exact and one more efficient approximate algorithm. Both algorithms are onepass algorithms. The approximation algorithm is based on an adaptation of the HyperLogLog sketch, which allows easily combining the sketches of individual nodes in order to get estimates of how many unique nodes can be reached from groups of nodes as well. We show how the results of our algorithm can be used to build efficient influence oracles for solving the Influence maximization problem which deals with finding top k seed nodes such that the information spread from these nodes is maximized. 
Experiments show that the use of information channels is an interesting data-driven and model-independent way to find top k influential nodes in interaction networks.", "title": "" }, { "docid": "9a9dc194e0ca7d1bb825e8aed5c9b4fe", "text": "In this paper we show how to divide data <italic>D</italic> into <italic>n</italic> pieces in such a way that <italic>D</italic> is easily reconstructable from any <italic>k</italic> pieces, but even complete knowledge of <italic>k</italic> - 1 pieces reveals absolutely no information about <italic>D</italic>. This technique enables the construction of robust key management schemes for cryptographic systems that can function securely and reliably even when misfortunes destroy half the pieces and security breaches expose all but one of the remaining pieces.", "title": "" }, { "docid": "bd32bda2e79d28122f424ec4966cde15", "text": "This paper holds a survey on plant leaf diseases classification using image processing. Digital image processing has three basic steps: image processing, analysis and understanding. Image processing contains the preprocessing of the plant leaf as segmentation, color extraction, diseases specific data extraction and filtration of images. Image analysis generally deals with the classification of diseases. Plant leaf can be classified based on their morphological features with the help of various classification techniques such as PCA, SVM, and Neural Network. These classifications can be defined various properties of the plant leaf such as color, intensity, dimensions. Back propagation is most commonly used neural network. It has many learning, training, transfer functions which is used to construct various BP networks. Characteristics features are the performance parameter for image recognition. BP networks shows very good results in classification of the grapes leaf diseases. This paper provides an overview on different image processing techniques along with BP Networks used in leaf disease classification.", "title": "" }, { "docid": "5ce93a1c09b4da41f0cc920d5c7e6bdc", "text": "Humanitarian operations comprise a wide variety of activities. These activities differ in temporal and spatial scope, as well as objectives, target population and with respect to the delivered goods and services. Despite a notable variety of agendas of the humanitarian actors, the requirements on the supply chain and supporting logistics activities remain similar to a large extent. This motivates the development of a suitably generic reference model for supply chain processes in the context of humanitarian operations. Reference models have been used in commercial environments for a range of purposes, such as analysis of structural, functional, and behavioural properties of supply chains. Our process reference model aims to support humanitarian organisations when designing appropriately adapted supply chain processes to support their operations, visualising their processes, measuring their performance and thus, improving communication and coordination of organisations. A top-down approach is followed in which modular process elements are developed sequentially and relevant performance measures are identified. This contribution is conceptual in nature and intends to lay the foundation for future research.", "title": "" } ]
scidocsrr
838ee5257f993f5488cf7c0c65ebeb2c
Measuring User Credibility in Social Media
[ { "docid": "51d950dfb9f71b9c8948198c147b9884", "text": "Collaborative filtering is the most popular approach to build recommender systems and has been successfully employed in many applications. However, it cannot make recommendations for so-called cold start users that have rated only a very small number of items. In addition, these methods do not know how confident they are in their recommendations. Trust-based recommendation methods assume the additional knowledge of a trust network among users and can better deal with cold start users, since users only need to be simply connected to the trust network. On the other hand, the sparsity of the user item ratings forces the trust-based approach to consider ratings of indirect neighbors that are only weakly trusted, which may decrease its precision. In order to find a good trade-off, we propose a random walk model combining the trust-based and the collaborative filtering approach for recommendation. The random walk model allows us to define and to measure the confidence of a recommendation. We performed an evaluation on the Epinions dataset and compared our model with existing trust-based and collaborative filtering methods.", "title": "" } ]
[ { "docid": "4a741431c708cd92a250bcb91e4f1638", "text": "PURPOSE\nIn today's workplace, nurses are highly skilled professionals possessing expertise in both information technology and nursing. Nursing informatics competencies are recognized as an important capability of nurses. No established guidelines existed for nurses in Asia. This study focused on identifying the nursing informatics competencies required of nurses in Taiwan.\n\n\nMETHODS\nA modified Web-based Delphi method was used for two expert groups in nursing, educators and administrators. Experts responded to 323 items on the Nursing Informatics Competencies Questionnaire, modified from the initial work of Staggers, Gassert and Curran to include 45 additional items. Three Web-based Delphi rounds were conducted. Analysis included detailed item analysis. Competencies that met 60% or greater agreement of item importance and appropriate level of nursing practice were included.\n\n\nRESULTS\nN=32 experts agreed to participate in Round 1, 23 nursing educators and 9 administrators. The participation rates for Rounds 2 and 3=68.8%. By Round 3, 318 of 323 nursing informatics competencies achieved required consensus levels. Of the new competencies, 42 of 45 were validated. A high degree of agreement existed for specific nursing informatics competencies required for nurses in Taiwan (97.8%).\n\n\nCONCLUSIONS\nThis study provides a current master list of nursing informatics competency requirements for nurses at four levels in the U.S. and Taiwan. The results are very similar to the original work of Staggers et al. The results have international relevance because of the global importance of information technology for the nursing profession.", "title": "" }, { "docid": "a7d4881412978a41da17e282f9419bdd", "text": "Recent studies suggest that judgments of facial masculinity reflect more than sexually dimorphic shape. Here, we investigated whether the perception of masculinity is influenced by facial cues to body height and weight. We used the average differences in three-dimensional face shape of forty men and forty women to compute a morphological masculinity score, and derived analogous measures for facial correlates of height and weight based on the average face shape of short and tall, and light and heavy men. We found that facial cues to body height and weight had substantial and independent effects on the perception of masculinity. Our findings suggest that men are perceived as more masculine if they appear taller and heavier, independent of how much their face shape differs from women's. We describe a simple method to quantify how body traits are reflected in the face and to define the physical basis of psychological attributions.", "title": "" }, { "docid": "0ac0f9965376f5547a2dabd3d06b6b96", "text": "A sentence extract summary of a document is a subset of the document's sentences that contains the main ideas in the document. We present an approach to generating such summaries, a hidden Markov model that judges the likelihood that each sentence should be contained in the summary. We compare the results of this method with summaries generated by humans, showing that we obtain significantly higher agreement than do earlier methods.", "title": "" }, { "docid": "dbb087a999a784669d2189e1c9cd92c4", "text": "Home Automation industry is growing rapidly; this is fuelled by provide supporting systems for the elderly and the disabled, especially those who live alone. Coupled with this, the world population is confirmed to be getting older. 
Home automation systems must comply with the household standards and convenience of usage. This paper details the overall design of a wireless home automation system (WHAS) which has been built and implemented. The automation centers on recognition of voice commands and uses low-power RF ZigBee wireless communication modules which are relatively cheap. The home automation system is intended to control all lights and electrical appliances in a home or office using voice commands. The system has been tested and verified. The verification tests included voice recognition response test, indoor ZigBee communication test. The tests involved a mix of 10 male and female subjects with different Indian languages. 7 different voice commands were sent by each person. Thus the test involved sending a total of 70 commands and 80.05% of these commands were recognized correctly. Keywords— Home automation, ZigBee transceivers, voice streaming, HM 2007, voice recognition. ——————————  ——————————", "title": "" }, { "docid": "a66a7210436752b220dc5483c43b03be", "text": "Automated unit tests are an essential software quality assurance measure that is widely used in practice. In many projects, thus, large volumes of test code have co-evolved with the production code throughout development. Like any other code, test code too may contain faults, affecting the effectiveness, reliability and usefulness of the tests. Furthermore, throughout the software system's ongoing development and maintenance phase, the test code too has to be constantly adapted and maintained. To support detecting problems in test code and improving its quality, we implemented 42 static checks for analyzing JUnit tests. These checks encompass best practices for writing unit tests, common issues observed in using xUnit frameworks, and our experiences collected from several years of providing trainings and reviews of test code for industry and in teaching. The checks can be run using the open source analysis tool PMD. In addition to a description of the implemented checks and their rationale, we demonstrate the applicability of using static analysis for test code by analyzing the unit tests of the open source project JFreeChart.", "title": "" }, { "docid": "a81f7c588440797b96d342dcad59aed0", "text": "Radio-frequency identification (RFID) technology has recently attracted significant interest in the realm of body-area applications, including both wearables and implants. The presence of the human body in close proximity to the RFID device creates several challenges in terms of design, fabrication, and testing, while also ushering in a whole new realm of opportunities for health care and other body-area applications. With these factors in mind, this article provides a holistic and critical review of design challenges associated with body-area RFID technologies, including operation frequencies, influence of the surrounding biological tissues, antenna design and miniaturization, and conformance to international safety guidelines. Concurrently, a number of fabrication methods are discussed for realizing flexible, conformal, and robust RFID device prototypes. The article concludes by reviewing transformative RFID-based solutions for wearable and implantable applications and discussing the future opportunities and challenges raised. 
Notably, this is the first time that a comprehensive review has been presented in the area of RFID antennas for body-area applications, addressing challenges specific to on-/in-body RFID operation and spanning a wide range of aspects that include design, fabrication, testing, and, eventually, applications and future directions. As such, the utmost aim of this article is to be a unique point of reference for experts and nonexperts in the field.", "title": "" }, { "docid": "066fdb2deeca1d13218f16ad35fe5f86", "text": "As manga (Japanese comics) have become common content in many countries, it is necessary to search manga by text query or translate them automatically. For these applications, we must first extract texts from manga. In this paper, we develop a method to detect text regions in manga. Taking motivation from methods used in scene text detection, we propose an approach using classifiers for both connected components and regions. We have also developed a text region dataset of manga, which enables learning and detailed evaluations of methods used to detect text regions. Experiments using the dataset showed that our text detection method performs more effectively than existing methods.", "title": "" }, { "docid": "4d31eda0840ac80874a14b0a9fc2439f", "text": "We identified a patient who excreted large amounts of methylmalonic acid and malonic acid. In contrast to other patients who have been described with combined methylmalonic and malonic aciduria, our patient excreted much larger amounts of methylmalonic acid than malonic acid. Since most previous patients with this biochemical phenotype have been reported to have deficiency of malonyl-CoA decarboxylase, we assayed malonyl-CoA decarboxylase activity in skin fibroblasts derived from our patient and found the enzyme activity to be normal. We examined four isocaloric (2000 kcal/day) dietary regimes administered serially over a period of 12 days with 3 days devoted to each dietary regimen. These diets were high in carbohydrate, fat or protein, or enriched with medium-chain triglycerides. Diet-induced changes in malonic and methylmalonic acid excretion became evident 24–36 h after initiating a new diet. Total excretion of malonic and methylmalonic acid was greater (p<0.01) during a high-protein diet than during a high-carbohydrate or high-fat diet. A high-carbohydrate, low-protein diet was associated with the lowest levels of malonic and methylmalonic acid excretion. Perturbations in these metabolites were most marked at night. On all dietary regimes, our patient excreted 3–10 times more methylmalonic acid than malonic acid, a reversal of the ratios reported in patients with malonyl-CoA decarboxylase deficiency. Our data support a previous observation that combined malonicand methylmalonic aciduria has aetiologies other than malonyl-CoA decar-boxylase deficiency. The malonic acid to methylmalonic acid ratio in response to dietary intervention may be useful in identifying a subgroup of patients with normal enzyme activity.", "title": "" }, { "docid": "d7a1985750fe10273c27f7f8121640ac", "text": "The large volumes of data that will be produced by ubiquitous sensors and meters in future smart distribution networks represent an opportunity for the use of data analytics to extract valuable knowledge and, thus, improve Distribution Network Operator (DNO) planning and operation tasks. Indeed, applications ranging from outage management to detection of non-technical losses to asset management can potentially benefit from data analytics. 
However, despite all the benefits, each application presents DNOs with diverse data requirements and the need to define an adequate approach. Consequently, it is critical to understand the different interactions among applications, monitoring infrastructure and approaches involved in the use of data analytics in distribution networks. To assist DNOs in the decision making process, this work presents some of the potential applications where data analytics are likely to improve distribution network performance and the corresponding challenges involved in its implementation.", "title": "" }, { "docid": "ae5fac207e5d3bf51bffbf2ec01fd976", "text": "Deep learning has revolutionized the way sensor data are analyzed and interpreted. The accuracy gains these approaches offer make them attractive for the next generation of mobile, wearable and embedded sensory applications. However, state-of-the-art deep learning algorithms typically require a significant amount of device and processor resources, even just for the inference stages that are used to discriminate high-level classes from low-level data. The limited availability of memory, computation, and energy on mobile and embedded platforms thus pose a significant challenge to the adoption of these powerful learning techniques. In this paper, we propose SparseSep, a new approach that leverages the sparsification of fully connected layers and separation of convolutional kernels to reduce the resource requirements of popular deep learning algorithms. As a result, SparseSep allows large-scale DNNs and CNNs to run efficiently on mobile and embedded hardware with only minimal impact on inference accuracy. We experiment using SparseSep across a variety of common processors such as the Qualcomm Snapdragon 400, ARM Cortex M0 and M3, and Nvidia Tegra K1, and show that it allows inference for various deep models to execute more efficiently; for example, on average requiring 11.3 times less memory and running 13.3 times faster on these representative platforms.", "title": "" }, { "docid": "6e1013e84468c3809742bbe826598f21", "text": "Many-light rendering methods replace multi-bounce light transport with direct lighting from many virtual point light sources to allow for simple and efficient computation of global illumination. Lightcuts build a hierarchy over virtual lights, so that surface points can be shaded with a sublinear number of lights while minimizing error. However, the original algorithm needs to run on every shading point of the rendered image. It is well known that the performance of Lightcuts can be improved by exploiting the coherence between individual cuts. We propose a novel approach where we invest into the initial lightcut creation at representative cache records, and then directly interpolate the input lightcuts themselves as well as per-cluster visibility for neighboring shading points. This allows us to improve upon the performance of the original Lightcuts algorithm by a factor of 4−8 compared to an optimized GPU-implementation of Lightcuts, while introducing only a small additional approximation error. The GPU-implementation of our technique enables us to create previews of Lightcuts-based global illumination renderings.", "title": "" }, { "docid": "9b519ba8a3b32d7b5b8a117b2d4d06ca", "text": "This article reviews the most current practice guidelines in the diagnosis and management of patients born with cleft lip and/or palate. Such patients frequently have multiple medical and social issues that benefit greatly from a team approach. 
Common challenges include feeding difficulty, nutritional deficiency, speech disorders, hearing problems, ear disease, dental anomalies, and both social and developmental delays, among others. Interdisciplinary evaluation and collaboration throughout a patient's development are essential.", "title": "" }, { "docid": "e7e60cc10b156e67bce5c07866c40bc3", "text": "JavaScript-based malware attacks have increased in recent years and currently represent a signicant threat to the use of desktop computers, smartphones, and tablets. While static and runtime methods for malware detection have been proposed in the literature, both on the client side, for just-in-time in-browser detection, as well as offline, crawler-based malware discovery, these approaches encounter the same fundamental limitation. Web-based malware tends to be environment-specific, targeting a particular browser, often attacking specic versions of installed plugins. This targeting occurs because the malware exploits vulnerabilities in specific plugins and fails otherwise. As a result, a fundamental limitation for detecting a piece of malware is that malware is triggered infrequently, only showing itself when the right environment is present. We observe that, using fingerprinting techniques that capture and exploit unique properties of browser configurations, almost all existing malware can be made virtually impssible for malware scanners to detect. This paper proposes Rozzle, a JavaScript multi-execution virtual machine, as a way to explore multiple execution paths within a single execution so that environment-specific malware will reveal itself. Using large-scale experiments, we show that Rozzle increases the detection rate for offline runtime detection by almost seven times. In addition, Rozzle triples the effectiveness of online runtime detection. We show that Rozzle incurs virtually no runtime overhead and allows us to replace multiple VMs running different browser configurations with a single Rozzle-enabled browser, reducing the hardware requirements, network bandwidth, and power consumption.", "title": "" }, { "docid": "23c71e8893fceed8c13bf2fc64452bc2", "text": "Variable stiffness actuators (VSAs) are complex mechatronic devices that are developed to build passively compliant, robust, and dexterous robots. Numerous different hardware designs have been developed in the past two decades to address various demands on their functionality. This review paper gives a guide to the design process from the analysis of the desired tasks identifying the relevant attributes and their influence on the selection of different components such as motors, sensors, and springs. The influence on the performance of different principles to generate the passive compliance and the variation of the stiffness are investigated. Furthermore, the design contradictions during the engineering process are explained in order to find the best suiting solution for the given purpose. With this in mind, the topics of output power, potential energy capacity, stiffness range, efficiency, and accuracy are discussed. Finally, the dependencies of control, models, sensor setup, and sensor quality are addressed.", "title": "" }, { "docid": "046df1ccbc545db05d0d91fe8f73d64a", "text": "Precise models of the robot inverse dynamics allow the design of significantly more accurate, energy-efficient and more compliant robot control. 
However, in some cases the accuracy of rigidbody models does not suffice for sound control performance due to unmodeled nonlinearities arising from hydraulic cable dynamics, complex friction or actuator dynamics. In such cases, estimating the inverse dynamics model from measured data poses an interesting alternative. Nonparametric regression methods, such as Gaussian process regression (GPR) or locally weighted projection regression (LWPR), are not as restrictive as parametric models and, thus, offer a more flexible framework for approximating unknown nonlinearities. In this paper, we propose a local approximation to the standard GPR, called local GPR (LGP), for real-time model online-learning by combining the strengths of both regression methods, i.e., the high accuracy of GPR and the fast speed of LWPR. The approach is shown to have competitive learning performance for high-dimensional data while being sufficiently fast for real-time learning. The effectiveness of LGP is exhibited by a comparison with the state-of-the-art regression techniques, such as GPR, LWPR and ν-SVR. The applicability of the proposed LGP method is demonstrated by real-time online-learning of the inverse dynamics model for robot model-based control on a Barrett WAM robot arm.", "title": "" }, { "docid": "2aa7f98e302bf2e96e16645cd70ff74e", "text": "Membrane potential and permselectivity are critical parameters for a variety of electrochemically-driven separation and energy technologies. An electric potential is developed when a membrane separates electrolyte solutions of different concentrations, and a permselective membrane allows specific species to be transported while restricting the passage of other species. Ion exchange membranes are commonly used in applications that require advanced ionic electrolytes and span technologies such as alkaline batteries to ammonium bicarbonate reverse electrodialysis, but membranes are often only characterized in sodium chloride solutions. Our goal in this work was to better understand membrane behaviour in aqueous ammonium bicarbonate, which is of interest for closed-loop energy generation processes. Here we characterized the permselectivity of four commercial ion exchange membranes in aqueous solutions of sodium chloride, ammonium chloride, sodium bicarbonate, and ammonium bicarbonate. This stepwise approach, using four different ions in aqueous solution, was used to better understand how these specific ions affect ion transport in ion exchange membranes. Characterization of cation and anion exchange membrane permselectivity, using these ions, is discussed from the perspective of the difference in the physical chemistry of the hydrated ions, along with an accompanying re-derivation and examination of the basic equations that describe membrane potential. In general, permselectivity was highest in sodium chloride and lowest in ammonium bicarbonate solutions, and the nature of both the counter- and co-ions appeared to influence measured permselectivity. The counter-ion type influences the binding affinity between counter-ions and polymer fixed charge groups, and higher binding affinity between fixed charge sites and counter-ions within the membrane decreases the effective membrane charge density. As a result permselectivity decreases. 
The charge density and polarizability of the co-ions also appeared to influence permselectivity leading to ion-specific effects; co-ions that are charge dense and have low polarizability tended to result in high membrane permselectivity.", "title": "" }, { "docid": "0742dcc602a216e41d3bfe47bffc7d30", "text": "In this paper we study supervised and semi-supervised classification of e-mails. We consider two tasks: filing e-mails into folders and spam e-mail filtering. Firstly, in a supervised learning setting, we investigate the use of random forest for automatic e-mail filing into folders and spam e-mail filtering. We show that random forest is a good choice for these tasks as it runs fast on large and high dimensional databases, is easy to tune and is highly accurate, outperforming popular algorithms such as decision trees, support vector machines and naïve Bayes. We introduce a new accurate feature selector with linear time complexity. Secondly, we examine the applicability of the semi-supervised co-training paradigm for spam e-mail filtering by employing random forests, support vector machines, decision tree and naïve Bayes as base classifiers. The study shows that a classifier trained on a small set of labelled examples can be successfully boosted using unlabelled examples to accuracy rate of only 5% lower than a classifier trained on all labelled examples. We investigate the performance of co-training with one natural feature split and show that in the domain of spam e-mail filtering it can be as competitive as co-training with two natural feature splits.", "title": "" }, { "docid": "9cdcf6718ace17a768f286c74c0eb11c", "text": "Trapa bispinosa Roxb. which belongs to the family Trapaceae is a small herb well known for its medicinal properties and is widely used worldwide. Trapa bispinosa or Trapa natans is an important plant of Indian Ayurvedic system of medicine which is used in the problems of stomach, genitourinary system, liver, kidney, and spleen. It is bitter, astringent, stomachic, diuretic, febrifuge, and antiseptic. The whole plant is used in gonorrhea, menorrhagia, and other genital affections. It is useful in diarrhea, dysentery, ophthalmopathy, ulcers, and wounds. These are used in the validated conditions in pitta, burning sensation, dipsia, dyspepsia, hemorrhage, hemoptysis, diarrhea, dysentery, strangely, intermittent fever, leprosy, fatigue, inflammation, urethrorrhea, fractures, erysipelas, lumbago, pharyngitis, bronchitis and general debility, and suppressing stomach and heart burning. Maybe it is due to photochemical content of Trapa bispinosa having high quantity of minerals, ions, namely, Ca, K, Na, Zn, and vitamins; saponins, phenols, alkaloids, H-donation, flavonoids are reported in the plants. Nutritional and biochemical analyses of fruits of Trapa bispinosa in 100 g showed 22.30 and 71.55% carbohydrate, protein contents were 4.40% and 10.80%, a percentage of moisture, fiber, ash, and fat contents were 70.35 and 7.30, 2.05 and 6.35, 2.30 and 8.50, and 0.65 and 1.85, mineral contents of the seeds were 32 mg and 102.85 mg calcium, 1.4 and 3.8 mg Iron, and 121 and 325 mg phosphorus in 100 g, and seeds of Trapa bispinosa produced 115.52 and 354.85 Kcal of energy, in fresh and dry fruits, respectively. 
Chemical analysis of the fruit and fresh nuts having considerable water content citric acid and fresh fruit which substantiates its importance as dietary food also reported low crude lipid, and major mineral present with confirming good amount of minerals as an iron and manganese potassium were contained in the fruit. Crude fiber, total protein content of the water chestnut kernel, Trapa bispinosa are reported. In this paper, the recent reports on nutritional, phytochemical, and pharmacological aspects of Trapa bispinosa Roxb, as a medicinal and nutritional food, are reviewed.", "title": "" }, { "docid": "599c2f4205f3a0978d0567658daf8be6", "text": "With increasing audio/video service consumption through unmanaged IP networks, HTTP adaptive streaming techniques have emerged to handle bandwidth limitations and variations. But while it is becoming common to serve multiple clients in one home network, these solutions do not adequately address fine tuned quality arbitration between the multiple streams. While clients compete for bandwidth, the video suffers unstable conditions and/or inappropriate bit-rate levels.\n We hereby experiment a mechanism based on traffic chapping that allow bandwidth arbitration to be implemented in the home gateway, first determining desirable target bit-rates to be reached by each stream and then constraining the clients to stay within their limits. This enables the delivery of optimal quality of experience to the maximum number of users. This approach is validated through experimentation, and results are shown through a set of objective measurement criteria.", "title": "" }, { "docid": "225204d66c371372debb3bb2a37c795b", "text": "We present two novel methods for face verification. Our first method - “attribute” classifiers - uses binary classifiers trained to recognize the presence or absence of describable aspects of visual appearance (e.g., gender, race, and age). Our second method - “simile” classifiers - removes the manual labeling required for attribute classification and instead learns the similarity of faces, or regions of faces, to specific reference people. Neither method requires costly, often brittle, alignment between image pairs; yet, both methods produce compact visual descriptions, and work on real-world images. Furthermore, both the attribute and simile classifiers improve on the current state-of-the-art for the LFW data set, reducing the error rates compared to the current best by 23.92% and 26.34%, respectively, and 31.68% when combined. For further testing across pose, illumination, and expression, we introduce a new data set - termed PubFig - of real-world images of public figures (celebrities and politicians) acquired from the internet. This data set is both larger (60,000 images) and deeper (300 images per individual) than existing data sets of its kind. Finally, we present an evaluation of human performance.", "title": "" } ]
scidocsrr
09e694301e741dd9dbe591b981dec8cb
Capturing Business Model Innovation Driven by the Emergence of New Technologies in Established Firms
[ { "docid": "c936e76e8db97b640a4123e66169d1b8", "text": "Varying philosophical and theoretical orientations to qualitative inquiry remind us that issues of quality and credibility intersect with audience and intended research purposes. This overview examines ways of enhancing the quality and credibility of qualitative analysis by dealing with three distinct but related inquiry concerns: rigorous techniques and methods for gathering and analyzing qualitative data, including attention to validity, reliability, and triangulation; the credibility, competence, and perceived trustworthiness of the qualitative researcher; and the philosophical beliefs of evaluation users about such paradigm-based preferences as objectivity versus subjectivity, truth versus perspective, and generalizations versus extrapolations. Although this overview examines some general approaches to issues of credibility and data quality in qualitative analysis, it is important to acknowledge that particular philosophical underpinnings, specific paradigms, and special purposes for qualitative inquiry will typically include additional or substitute criteria for assuring and judging quality, validity, and credibility. Moreover, the context for these considerations has evolved. In early literature on evaluation methods the debate between qualitative and quantitative methodologists was often strident. In recent years the debate has softened. A consensus has gradually emerged that the important challenge is to match appropriately the methods to empirical questions and issues, and not to universally advocate any single methodological approach for all problems.", "title": "" } ]
[ { "docid": "ce7fdc16d6d909a4e0c3294ed55af51d", "text": "In this work, we perform an empirical comparison among the CTC, RNN-Transducer, and attention-based Seq2Seq models for end-to-end speech recognition. We show that, without any language model, Seq2Seq and RNN-Transducer models both outperform the best reported CTC models with a language model, on the popular Hub5'00 benchmark. On our internal diverse dataset, these trends continue — RNN-Transducer models rescored with a language model after beam search outperform our best CTC models. These results simplify the speech recognition pipeline so that decoding can now be expressed purely as neural network operations. We also study how the choice of encoder architecture affects the performance of the three models — when all encoder layers are forward only, and when encoders downsample the input representation aggressively.", "title": "" }, { "docid": "e946deae6e1d441c152dca6e52268258", "text": "The design of robust and high-performance gaze-tracking systems is one of the most important objectives of the eye-tracking community. In general, a subject calibration procedure is needed to learn system parameters and be able to estimate the gaze direction accurately. In this paper, we attempt to determine if subject calibration can be eliminated. A geometric analysis of a gaze-tracking system is conducted to determine user calibration requirements. The eye model used considers the offset between optical and visual axes, the refraction of the cornea, and Donder's law. This paper demonstrates the minimal number of cameras, light sources, and user calibration points needed to solve for gaze estimation. The underlying geometric model is based on glint positions and pupil ellipse in the image, and the minimal hardware needed for this model is one camera and multiple light-emitting diodes. This paper proves that subject calibration is compulsory for correct gaze estimation and proposes a model based on a single point for subject calibration. The experiments carried out show that, although two glints and one calibration point are sufficient to perform gaze estimation (error ~ 1deg), using more light sources and calibration points can result in lower average errors.", "title": "" }, { "docid": "f268718ceac79dbf8d0dcda2ea6557ca", "text": "0167-8655/$ see front matter 2012 Elsevier B.V. A http://dx.doi.org/10.1016/j.patrec.2012.06.003 ⇑ Corresponding author. E-mail addresses: fred.qi@ieee.org (F. Qi), gmshi@x 1 Principal corresponding author. Depth acquisition becomes inexpensive after the revolutionary invention of Kinect. For computer vision applications, depth maps captured by Kinect require additional processing to fill up missing parts. However, conventional inpainting methods for color images cannot be applied directly to depth maps as there are not enough cues to make accurate inference about scene structures. In this paper, we propose a novel fusion based inpainting method to improve depth maps. The proposed fusion strategy integrates conventional inpainting with the recently developed non-local filtering scheme. The good balance between depth and color information guarantees an accurate inpainting result. Experimental results show the mean absolute error of the proposed method is about 20 mm, which is comparable to the precision of the Kinect sensor. 2012 Elsevier B.V. 
", "title": "" }, { "docid": "0ab1607237e9fd804a23745113e133ef", "text": "One of the key tasks of sentiment analysis of product reviews is to extract product aspects or features that users have expressed opinions on. In this work, we focus on using supervised sequence labeling as the base approach to performing the task. Although several extraction methods using sequence labeling methods such as Conditional Random Fields (CRF) and Hidden Markov Models (HMM) have been proposed, we show that this supervised approach can be significantly improved by exploiting the idea of concept sharing across multiple domains. For example, “screen” is an aspect in iPhone, but not only iPhone has a screen, many electronic devices have screens too. When “screen” appears in a review of a new domain (or product), it is likely to be an aspect too. Knowing this information enables us to do much better extraction in the new domain. This paper proposes a novel extraction method exploiting this idea in the context of supervised sequence labeling. Experimental results show that it produces markedly better results than without using the past information.", "title": "" }, { "docid": "b1fabdbfea2fcffc8071371de8399b69", "text": "Cities across the United States are implementing information communication technologies in an effort to improve government services. One such innovation in e-government is the creation of 311 systems, offering a centralized platform where citizens can request services, report non-emergency concerns, and obtain information about the city via hotline, mobile, or web-based applications. The NYC 311 service request system represents one of the most significant links between citizens and city government, accounting for more than 8,000,000 requests annually. These systems are generating massive amounts of data that, when properly managed, cleaned, and mined, can yield significant insights into the real-time condition of the city. Increasingly, these data are being used to develop predictive models of citizen concerns and problem conditions within the city. However, predictive models trained on these data can suffer from biases in the propensity to make a request that can vary based on socio-economic and demographic characteristics of an area, cultural differences that can affect citizens’ willingness to interact with their government, and differential access to Internet connectivity. Using more than 20,000,000 311 requests together with building violation data from the NYC Department of Buildings and the NYC Department of Housing Preservation and Development; property data from NYC Department of City Planning; and demographic and socioeconomic data from the U.S. Census American Community Survey we develop a two-step methodology to evaluate the propensity to complain: (1) we predict, using a gradient boosting regression model, the likelihood of heating and hot water violations for a given building, and (2) we then compare the actual complaint volume for buildings with predicted violations to quantify discrepancies across the City. Our model predicting service request volumes over time will contribute to the efficiency of the 311 system by informing short- and long-term resource allocation strategy and improving the agency’s performance in responding to requests. For instance, the outcome of our longitudinal pattern analysis allows the city to predict building safety hazards early and take action, leading to anticipatory safety and inspection actions.
Furthermore, findings will provide novel insight into equity and community engagement through 311, and provide the basis for acknowledging and accounting for bias in machine learning applications trained on 311 data.", "title": "" }, { "docid": "44e135418dc6480366bb5679b62bc4f9", "text": "There is growing interest regarding the role of the right inferior frontal gyrus (RIFG) during a particular form of executive control referred to as response inhibition. However, tasks used to examine neural activity at the point of response inhibition have rarely controlled for the potentially confounding effects of attentional demand. In particular, it is unclear whether the RIFG is specifically involved in inhibitory control, or is involved more generally in the detection of salient or task relevant cues. The current fMRI study sought to clarify the role of the RIFG in executive control by holding the stimulus conditions of one of the most popular response inhibition tasks-the Stop Signal Task-constant, whilst varying the response that was required on reception of the stop signal cue. Our results reveal that the RIFG is recruited when important cues are detected, regardless of whether that detection is followed by the inhibition of a motor response, the generation of a motor response, or no external response at all.", "title": "" }, { "docid": "e2a7ff093714cc6a0543816b3d7c08e9", "text": "Microblogs such as Twitter reflect the general public’s reactions to major events. Bursty topics from microblogs reveal what events have attracted the most online attention. Although bursty event detection from text streams has been studied before, previous work may not be suitable for microblogs because compared with other text streams such as news articles and scientific publications, microblog posts are particularly diverse and noisy. To find topics that have bursty patterns on microblogs, we propose a topic model that simultaneously captures two observations: (1) posts published around the same time are more likely to have the same topic, and (2) posts published by the same user are more likely to have the same topic. The former helps find event-driven posts while the latter helps identify and filter out “personal” posts. Our experiments on a large Twitter dataset show that there are more meaningful and unique bursty topics in the top-ranked results returned by our model than an LDA baseline and two degenerate variations of our model. We also show some case studies that demonstrate the importance of considering both the temporal information and users’ personal interests for bursty topic detection from microblogs.", "title": "" }, { "docid": "14e0664fcbc2e29778a1ccf8744f4ca5", "text": "Mobile offloading migrates heavy computation from mobile devices to cloud servers using one or more communication network channels. Communication interfaces vary in speed, energy consumption and degree of availability. We assume two interfaces: WiFi, which is fast with low energy demand but not always present, and cellular, which is slightly slower, has higher energy consumption, but is present at all times. We study two different communication strategies: one that selects the best available interface for each transmitted packet and one that multiplexes data across available communication channels. Since the latter may experience interrupts in the WiFi connection, packets can be delayed.
We call it interrupted strategy as opposed to the uninterrupted strategy that transmits packets only over currently available networks. Two key concerns of mobile offloading are the energy use of the mobile terminal and the response time experienced by the user of the mobile device. In this context, we investigate three different metrics that express the energy-performance tradeoff, the known Energy-Response time Weighted Sum (EWRS), the Energy-Response time Product (ERP) and the Energy-Response time Weighted Product (ERWP) metric. We apply the metrics to the two different offloading strategies and find that the conclusions drawn from the analysis depend on the considered metric. In particular, while an additive metric is not normalised, which implies that the term using smaller scale is always favoured, the ERWP metric, which is new in this paper, allows to assign importance to both aspects without being misled by different scales. It combines the advantages of an additive metric and a product. The interrupted strategy can save energy especially if the focus in the tradeoff metric lies on the energy aspect. In general one can say that the uninterrupted strategy is faster, while the interrupted strategy uses less energy. A fast connection improves the response time much more than the fast repair of a failed connection. In conclusion, a short down-time of the transmission channel can mostly be tolerated.", "title": "" }, { "docid": "104c9347338f4e725e3c1907a4991977", "text": "This paper derives a speech parameter generation algorithm for HMM-based speech synthesis, in which speech parameter sequence is generated from HMMs whose observation vector consists of spectral parameter vector and its dynamic feature vectors. In the algorithm, we assume that the state sequence (state and mixture sequence for the multi-mixture case) or a part of the state sequence is unobservable (i.e., hidden or latent). As a result, the algorithm iterates the forward-backward algorithm and the parameter generation algorithm for the case where state sequence is given. Experimental results show that by using the algorithm, we can reproduce clear formant structure from multi-mixture HMMs as compared with that produced from single-mixture HMMs.", "title": "" }, { "docid": "4d529a33044a1b22a71b4ad2f53f8b65", "text": "Robotic assistants have the potential to greatly improve our quality of life by supporting us in our daily activities. A service robot acting autonomously in an indoor environment is faced with very complex tasks. Consider the problem of pouring a liquid into a cup, the robot should first determine if the cup is empty or partially filled. RGB-D cameras provide noisy depth measurements which depend on the opaqueness and refraction index of the liquid. In this paper, we present a novel probabilistic approach for estimating the fill-level of a liquid in a cup using an RGB-D camera. Our approach does not make any assumptions about the properties of the liquid like its opaqueness or its refraction index. We develop a probabilistic model using features extracted from RGB and depth data. Our experiments demonstrate the robustness of our method and an improvement over the state of the art.", "title": "" }, { "docid": "d76b7b25bce29cdac24015f8fa8ee5bb", "text": "A circularly polarized magnetoelectric dipole antenna with high efficiency based on printed ridge gap waveguide is presented. The antenna gain is improved by using a wideband lens in front of the antennas. 
The lens consists of three layers of dual-polarized mu-near-zero (MNZ) inclusions. Each layer consists of a $3\\times4$ MNZ unit cell. The measured results indicate that the magnitude of $S_{11}$ is below −10 dB in the frequency range of 29.5–37 GHz. The resulting 3-dB axial ratio is over a frequency range of 32.5–35 GHz. The measured realized gain of the antenna is more than 10 dBi over a frequency band of 31–35 GHz achieving a radiation efficiency of 94% at 34 GHz.", "title": "" }, { "docid": "774c7af1abfde7dd7a4fc858b4b8487e", "text": "Poorly designed charts are prevalent in reports, magazines, books and on the Web. Most of these charts are only available as bitmap images; without access to the underlying data it is prohibitively difficult for viewers to create more effective visual representations. In response we present ReVision, a system that automatically redesigns visualizations to improve graphical perception. Given a bitmap image of a chart as input, ReVision applies computer vision and machine learning techniques to identify the chart type (e.g., pie chart, bar chart, scatterplot, etc.). It then extracts the graphical marks and infers the underlying data. Using a corpus of images drawn from the web, ReVision achieves image classification accuracy of 96% across ten chart categories. It also accurately extracts marks from 79% of bar charts and 62% of pie charts, and from these charts it successfully extracts data from 71% of bar charts and 64% of pie charts. ReVision then applies perceptually-based design principles to populate an interactive gallery of redesigned charts. With this interface, users can view alternative chart designs and retarget content to different visual styles.", "title": "" }, { "docid": "bf62cf6deb1b11816fa271bfecde1077", "text": "EASL–EORTC Clinical Practice Guidelines (CPG) on the management of hepatocellular carcinoma (HCC) define the use of surveillance, diagnosis, and therapeutic strategies recommended for patients with this type of cancer. This is the first European joint effort by the European Association for the Study of the Liver (EASL) and the European Organization for Research and Treatment of Cancer (EORTC) to provide common guidelines for the management of hepatocellular carcinoma. These guidelines update the recommendations reported by the EASL panel of experts in HCC published in 2001 [1]. Several clinical and scientific advances have occurred during the past decade and, thus, a modern version of the document is urgently needed. The purpose of this document is to assist physicians, patients, health-care providers, and health-policy makers from Europe and worldwide in the decision-making process according to evidence-based data. Users of these guidelines should be aware that the recommendations are intended to guide clinical practice in circumstances where all possible resources and therapies are available. Thus, they should adapt the recommendations to their local regulations and/or team capacities, infrastructure, and cost–benefit strategies. Finally, this document sets out some recommendations that should be instrumental in advancing the research and knowledge of this disease and ultimately contribute to improve patient care.
The EASL–EORTC CPG on the management of hepatocellular carcinoma provide recommendations based on the level of evi-", "title": "" }, { "docid": "e0b7efd5d3bba071ada037fc5b05a622", "text": "Social exclusion can thwart people's powerful need for social belonging. Whereas prior studies have focused primarily on how social exclusion influences complex and cognitively downstream social outcomes (e.g., memory, overt social judgments and behavior), the current research examined basic, early-in-the-cognitive-stream consequences of exclusion. Across 4 experiments, the threat of exclusion increased selective attention to smiling faces, reflecting an attunement to signs of social acceptance. Compared with nonexcluded participants, participants who experienced the threat of exclusion were faster to identify smiling faces within a \"crowd\" of discrepant faces (Experiment 1), fixated more of their attention on smiling faces in eye-tracking tasks (Experiments 2 and 3), and were slower to disengage their attention from smiling faces in a visual cueing experiment (Experiment 4). These attentional attunements were specific to positive, social targets. Excluded participants did not show heightened attention to faces conveying social disapproval or to positive nonsocial images. The threat of social exclusion motivates people to connect with sources of acceptance, which is manifested not only in \"downstream\" choices and behaviors but also at the level of basic, early-stage perceptual processing.", "title": "" }, { "docid": "7908cc9a1cd6e6f48258a300db37d4a5", "text": "This report describes the algorithms implemented in a Matlab toolbox for change detection and data segmentation. Functions are provided for simulating changes, choosing design parameters and detecting abrupt changes in signals.", "title": "" }, { "docid": "e3a4b77f05ed29b0643a1d699d747415", "text": "This letter develops an optical pixel sensor that is based on hydrogenated amorphous silicon thin-film transistors. Exploiting the photo sensitivity of the photo TFTs and combining different color filters, the proposed sensor can sense an optical input signal of a specified color under high ambient illumination conditions. Measurements indicate that the proposed pixel sensor effectively reacts to the optical input signal under light intensities from 873 to 12,910 lux, proving that the sensor is highly reliable under strong ambient illumination.", "title": "" }, { "docid": "a8f391b630a0261a0693c7038370411a", "text": "In this paper, we address the problem of globally localizing and tracking the pose of a camera-equipped micro aerial vehicle (MAV) flying in urban streets at low altitudes without GPS. An image-based global positioning system is introduced to localize the MAV with respect to the surrounding buildings. We propose a novel airground image-matching algorithm to search the airborne image of the MAV within a ground-level, geotagged image database. Based on the detected matching image features, we infer the global position of the MAV by back-projecting the corresponding image points onto a cadastral three-dimensional city model. Furthermore, we describe an algorithm to track the position of the flying vehicle over several frames and to correct the accumulated drift of the visual odometry whenever a good match is detected between the airborne and the ground-level images. The proposed approach is tested on a 2 km trajectory with a small quadrocopter flying in the streets of Zurich. 
Our vision-based global localization can robustly handle extreme changes in viewpoint, illumination, perceptual aliasing, and over-season variations, thus outperforming conventional visual placerecognition approaches. The dataset is made publicly available to the research community. To the best of our knowledge, this is the first work that studies and demonstrates global localization and position tracking of a drone in urban streets with a single onboard camera. C © 2015 Wiley Periodicals, Inc.", "title": "" }, { "docid": "18e77bde932964655ba7df73b02a3048", "text": "In this paper, we propose a mathematical framework to jointly model related activities with both motion and context information for activity recognition and anomaly detection. This is motivated from observations that activities related in space and time rarely occur independently and can serve as context for each other. The spatial and temporal distribution of different activities provides useful cues for the understanding of these activities. We denote the activities occurring with high frequencies in the database as normal activities. Given training data which contains labeled normal activities, our model aims to automatically capture frequent motion and context patterns for each activity class, as well as each pair of classes, from sets of predefined patterns during the learning process. Then, the learned model is used to generate globally optimum labels for activities in the testing videos. We show how to learn the model parameters via an unconstrained convex optimization problem and how to predict the correct labels for a testing instance consisting of multiple activities. The learned model and generated labels are used to detect anomalies whose motion and context patterns deviate from the learned patterns. We show promising results on the VIRAT Ground Dataset that demonstrates the benefit of joint modeling and recognition of activities in a wide-area scene and the effectiveness of the proposed method in anomaly detection.", "title": "" }, { "docid": "eee9bbc4e57981813a45114061ef01ec", "text": "Although Marx-bank connection of avalanche transistors is widely used in applications requiring high-voltage nanosecond and subnanosecond pulses, the physical mechanisms responsible for the voltage-ramp-initiated switching of a single transistor in the Marx chain remain unclear. It is shown here by detailed comparison of experiments with physical modeling that picosecond switching determined by double avalanche injection in the collector-base diode gives way to formation and shrinkage of the collector field domain typical of avalanche transistors under the second breakdown. The latter regime, characterized by a lower residual voltage, becomes possible despite a short-connected emitter and base, thanks to the 2-D effects.", "title": "" }, { "docid": "424f871e0e2eabf8b1e636f73d0b1c7d", "text": "Simultaneous localization and mapping (SLAM) methods provide real-time estimation of 3-D models from the sole input of a handheld camera, routinely in mobile robotics scenarios. Medical endoscopic sequences mimic a robotic scenario in which a handheld camera (monocular endoscope) moves along an unknown trajectory while observing an unknown cavity. However, the feasibility and accuracy of SLAM methods have not been extensively validated with human in vivo image sequences. 
In this work, we propose a monocular visual SLAM algorithm tailored to deal with medical image sequences in order to provide an up-to-scale 3-D map of the observed cavity and the endoscope trajectory at frame rate. The algorithm is validated over synthetic data and human in vivo sequences corresponding to 15 laparoscopic hernioplasties where accurate ground-truth distances are available. It can be concluded that the proposed procedure is: 1) noninvasive, because only a standard monocular endoscope and a surgical tool are used; 2) convenient, because only a hand-controlled exploratory motion is needed; 3) fast, because the algorithm provides the 3-D map and the trajectory in real time; 4) accurate, because it has been validated with respect to ground-truth; and 5) robust to inter-patient variability, because it has performed successfully over the validation sequences.", "title": "" } ]
scidocsrr
e32ce773290401147c93d3008df65965
A Real Time System for Robust 3D Voxel Reconstruction of Human Motions
[ { "docid": "4f3b91bfaa2304e78ad5cd305fb5d377", "text": "The construction of a three-dimensional object model from a set of images taken from different viewpoints is an important problem in computer vision. One of the simplest ways to do this is to use the silhouettes of the object (the binary classification of images into object and background) to construct a bounding volume for the object. To efficiently represent this volume, we use an octree, which represents the object as a tree of recursively subdivided cubes. We develop a new algorithm for computing the octree bounding volume from multiple silhouettes and apply it to an object rotating on a turntable in front of a stationary camera. The algorithm performs a limited amount of processing for each viewpoint and incrementally builds the volumetric model. The resulting algorithm requires less total computation than previous algorithms, runs in close to real-time, and builds a model whose resolution improves over time. 1993 Academic Press, Inc.", "title": "" } ]
[ { "docid": "5e5fcac49c2ee3f944dbc02fe70461cd", "text": "Microkernels long discarded as unacceptable because of their lower performance compared with monolithic kernels might be making a comeback in operating systems due to their potentially higher reliability, which many researchers now regard as more important than performance. Each of the four different attempts to improve operating system reliability focuses on preventing buggy device drivers from crashing the system. In the Nooks approach, each driver is individually hand wrapped in a software jacket to carefully control its interactions with the rest of the operating system, but it leaves all the drivers in the kernel. The paravirtual machine approach takes this one step further and moves the drivers to one or more machines distinct from the main one, taking away even more power from the drivers. Both of these approaches are intended to improve the reliability of existing (legacy) operating systems. In contrast, two other approaches replace legacy operating systems with more reliable and secure ones. The multiserver approach runs each driver and operating system component in a separate user process and allows them to communicate using the microkernel's IPC mechanism. Finally, Singularity, the most radical approach, uses a type-safe language, a single address space, and formal contracts to carefully limit what each module can do.", "title": "" }, { "docid": "e0d040efd131db568d875b80c6adc111", "text": "Familism is a cultural value that emphasizes interdependent family relationships that are warm, close, and supportive. We theorized that familism values can be beneficial for romantic relationships and tested whether (a) familism would be positively associated with romantic relationship quality and (b) this association would be mediated by less attachment avoidance. Evidence indicates that familism is particularly relevant for U.S. Latinos but is also relevant for non-Latinos. Thus, we expected to observe the hypothesized pattern in Latinos and explored whether the pattern extended to non-Latinos of European and East Asian cultural background. A sample of U.S. participants of Latino (n = 140), European (n = 176), and East Asian (n = 199) cultural background currently in a romantic relationship completed measures of familism, attachment, and two indices of romantic relationship quality, namely, partner support and partner closeness. As predicted, higher familism was associated with higher partner support and partner closeness, and these associations were mediated by lower attachment avoidance in the Latino sample. This pattern was not observed in the European or East Asian background samples. The implications of familism for relationships and psychological processes relevant to relationships in Latinos and non-Latinos are discussed.", "title": "" }, { "docid": "2dad5e4cc93246fd64b576d414fb5a3e", "text": "Intelligent vehicles use advanced driver assistance systems (ADASs) to mitigate driving risks.
There is increasing demand for an ADAS framework that can increase driving safety by detecting dangerous driving behavior from driver, vehicle, and lane attributes. However, because dangerous driving behavior in real-world driving scenarios can be caused by any or a combination of driver, vehicle, and lane attributes, the detection of dangerous driving behavior using conventional approaches that focus on only one type of attribute may not be sufficient to improve driving safety in realistic situations. To facilitate driving safety improvements, the concept of dangerous driving intensity (DDI) is introduced in this paper, and the objective of dangerous driving behavior detection is converted into DDI estimation based on the three attribute types. To this end, we propose a framework, wherein fuzzy sets are optimized using particle swarm optimization for modeling driver, vehicle, and lane attributes and then used to accurately estimate the DDI. The mean opinion scores of experienced drivers are employed to label DDI for a fair comparison with the results of our framework. The experimental results demonstrate that the driver, vehicle, and lane attributes defined in this paper provide useful cues for DDI analysis; furthermore, the results obtained using the framework are in favorable agreement with those obtained in the perception study. The proposed framework can greatly increase driving safety in intelligent vehicles, where most of the driving risk is within the control of the driver.", "title": "" }, { "docid": "19bd7a6c21dd50c5dc8d14d5cfd363ab", "text": "Frontotemporal dementia (FTD) is one of the most common forms of dementia in persons younger than 65 years. Variants include behavioral variant FTD, semantic dementia, and progressive nonfluent aphasia. Behavioral and language manifestations are core features of FTD, and patients have relatively preserved memory, which differs from Alzheimer disease. Common behavioral features include loss of insight, social inappropriateness, and emotional blunting. Common language features are loss of comprehension and object knowledge (semantic dementia), and nonfluent and hesitant speech (progressive nonfluent aphasia). Neuroimaging (magnetic resonance imaging) usually demonstrates focal atrophy in addition to excluding other etiologies. A careful history and physical examination, and judicious use of magnetic resonance imaging, can help distinguish FTD from other common forms of dementia, including Alzheimer disease, dementia with Lewy bodies, and vascular dementia. Although no cure for FTD exists, symptom management with selective serotonin reuptake inhibitors, antipsychotics, and galantamine has been shown to be beneficial. Primary care physicians have a critical role in identifying patients with FTD and assembling an interdisciplinary team to care for patients with FTD, their families, and caregivers.", "title": "" }, { "docid": "61096a0d1e94bb83f7bd067b06d69edd", "text": "A main puzzle of deep neural networks (DNNs) revolves around the apparent absence of “overfitting”, defined in this paper as follows: the expected error does not get worse when increasing the number of neurons or of iterations of gradient descent. This is surprising because of the large capacity demonstrated by DNNs to fit randomly labeled data and the absence of explicit regularization. Recent results by Srebro et al. provide a satisfying solution of the puzzle for linear networks used in binary classification. 
They prove that minimization of loss functions such as the logistic, the cross-entropy and the exp-loss yields asymptotic, “slow” convergence to the maximum margin solution for linearly separable datasets, independently of the initial conditions. Here we prove a similar result for nonlinear multilayer DNNs near zero minima of the empirical loss. The result holds for exponential-type losses but not for the square loss. In particular, we prove that the normalized weight matrix at each layer of a deep network converges to a minimum norm solution (in the separable case). Our analysis of the dynamical system corresponding to gradient descent of a multilayer network suggests a simple criterion for predicting the generalization performance of different zero minimizers of the empirical loss. This material is based upon work supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216.", "title": "" }, { "docid": "ed7a1b09c68876e17679f6e61635bbb8", "text": "Diminished antioxidant defense or increased production of reactive oxygen species in the biological system can result in oxidative stress which may lead to various neurodegenerative diseases including Alzheimer’s disease (AD). Microglial activation also contributes to the progression of AD by producing several proinflammatory cytokines, nitric oxide (NO) and prostaglandin E2 (PGE2). Oxidative stress and inflammation have been reported to be possible pathophysiological mechanisms underlying AD. In addition, the cholinergic hypothesis postulates that memory impairment in patient with AD is also associated with the deficit of cholinergic function in the brain. Although a number of drugs have been approved for the treatment of AD, most of these synthetic drugs have diverse side effects and yield relatively modest benefits. Marine algae have great potential in pharmaceutical and biomedical applications as they are valuable sources of bioactive properties such as anticoagulation, antimicrobial, antioxidative, anticancer and anti-inflammatory. Hence, this study aimed to provide an overview of the properties of Malaysian seaweeds (Padina australis, Sargassum polycystum and Caulerpa racemosa) in inhibiting oxidative stress, neuroinflammation and cholinesterase enzymes. These seaweeds significantly exhibited potent DPPH and moderate superoxide anion radical scavenging ability (P<0.05). Hexane and methanol extracts of S. polycystum exhibited the most potent radical scavenging ability with IC50 values of 0.157±0.004mg/ml and 0.849±0.02mg/ml for DPPH and ABTS assays, respectively. Hexane extract of C. racemosa gave the strongest superoxide radical inhibitory effect (IC50 of 0.386±0.01mg/ml). Most seaweed extracts significantly inhibited the production of cytokine (IL-6, IL-1 β, TNFα) and NO in a concentration-dependent manner without causing significant cytotoxicity to the lipopolysaccharide (LPS)-stimulated microglia cells (P<0.05). All extracts suppressed cytokine and NO level by more than 50% at the concentration of 0.4mg/ml. In addition, C. racemosa and S. polycystum also showed anti-acetylcholinesterase activities with the IC50 values ranging from 0.086-0.115 mg/ml. Moreover, C. racemosa and P.
australis were also found to be active against butyrylcholinesterase with IC50 values ranging from 0.118-0.287 mg/ml. Keywords—Anticholinesterase, antioxidative, neuroinflammation, seaweeds.", "title": "" }, { "docid": "3f2e76d16149b2591262befc0957e4e2", "text": "In order to improve the performance of the high-speed brushless direct current motor drives, a novel high-precision sensorless drive has been developed. It is well known that the inevitable voltage pulses, which are generated during the commutation periods, will impact the rotor position detecting accuracy, and further impact the performance of the overall sensorless drive, especially in the higher speed range or under the heavier load conditions. For this reason, the active compensation method based on the virtual third harmonic back electromotive force incorporating the SFF-SOGI-PLL (synchronic-frequency filter incorporating the second-order generalized integrator based phase-locked loop) is proposed to precise detect the commutation points for sensorless drive. An experimental driveline system used for testing the electrical performance of the developed magnetically suspended motor is built. The mathematical analysis and the comparable experimental results have been shown to validate the effectiveness of the proposed sensorless drive algorithm.", "title": "" }, { "docid": "3f1a546477d02b09016472574a6f3f6a", "text": "The paper mainly focusses on an improved voice activity detection algorithm employing long-term signal processing and maximum spectral component tracking. The benefits of this approach have been analyzed in a previous work (Ramirez, J. et al., Proc. EUROSPEECH 2003, p.3041-4, 2003) with clear improvements in speech/non-speech discriminability and speech recognition performance in noisy environments. Two clear aspects are now considered. The first one, which improves the performance of the VAD in low noise conditions, considers an adaptive length frame window to track the long-term spectral components. The second one reduces misclassification errors in highly noisy environments by using a noise reduction stage before the long-term spectral tracking. Experimental results show clear improvements over different VAD methods in speech/pause discrimination and speech recognition performance. Particularly, improvements in recognition rate were reported when the proposed VAD replaced the VADs of the ETSI advanced front-end (AFE) for distributed speech recognition (DSR).", "title": "" }, { "docid": "358598f23ee536a22e3dc15ba67e095f", "text": "A new mechanism to balance an autonomous unicycle is explored which makes use of a simple pendulum. Mounted laterally on the unicycle chassis, the pendulum provides a means of controlling the unicycle balance in the lateral (left-right) direction. Longitudinal (forward-backward) balance is achieved by controlling the unicycle wheel, a mechanism exactly the same as that of wheeled inverted pendulum.
In this paper, the pendulum-balancing concept is explained and the dynamics model of an autonomous unicycle balanced by such mechanism is derived by Lagrange-Euler formulation. The behavior is analyzed by dynamic simulation in MATLAB. Dynamics comparison with wheeled inverted pendulum and Acrobot is also performed.", "title": "" }, { "docid": "17b8bff80cf87fb7e3c6c729bb41c99e", "text": "Off-policy reinforcement learning enables near-optimal policy from suboptimal experience, thereby provisions opportunity for artificial intelligence applications in healthcare. Previous works have mainly framed patient-clinician interactions as Markov decision processes, while true physiological states are not necessarily fully observable from clinical data. We capture this situation with partially observable Markov decision process, in which an agent optimises its actions in a belief represented as a distribution of patient states inferred from individual history trajectories. A Gaussian mixture model is fitted for the observed data. Moreover, we take into account the fact that nuance in pharmaceutical dosage could presumably result in significantly different effect by modelling a continuous policy through a Gaussian approximator directly in the policy space, i.e. the actor. To address the challenge of infinite number of possible belief states which renders exact value iteration intractable, we evaluate and plan for only every encountered belief, through heuristic search tree by tightly maintaining lower and upper bounds of the true value of belief. We further resort to function approximations to update value bounds estimation, i.e. the critic, so that the tree search can be improved through more compact bounds at the fringe nodes that will be back-propagated to the root. Both actor and critic parameters are learned via gradient-based approaches. Our proposed policy trained from real intensive care unit data is capable of dictating dosing on vasopressors and intravenous fluids for sepsis patients that lead to the best patient outcomes.", "title": "" }, { "docid": "63fef6099108f7990da0a7687e422e14", "text": "The IWSLT 2017 evaluation campaign has organised three tasks. The Multilingual task, which is about training machine translation systems handling many-to-many language directions, including so-called zero-shot directions. The Dialogue task, which calls for the integration of context information in machine translation, in order to resolve anaphoric references that typically occur in human-human dialogue turns. And, finally, the Lecture task, which offers the challenge of automatically transcribing and translating real-life university lectures. Following the tradition of these reports, we will described all tasks in detail and present the results of all runs submitted by their participants.", "title": "" }, { "docid": "7c1b301e45da5af0f5248f04dbf33f75", "text": "[1] We invert 115 differential interferograms derived from 47 synthetic aperture radar (SAR) scenes for a time-dependent deformation signal in the Santa Clara valley, California. The time-dependent deformation is calculated by performing a linear inversion that solves for the incremental range change between SAR scene acquisitions. A nonlinear range change signal is extracted from the ERS InSAR data without imposing a model of the expected deformation. In the Santa Clara valley, cumulative land uplift is observed during the period from 1992 to 2000 with a maximum uplift of 41 ± 18 mm centered north of Sunnyvale. 
Uplift is also observed east of San Jose. Seasonal uplift and subsidence dominate west of the Silver Creek fault near San Jose with a maximum peak-to-trough amplitude of 35 mm. The pattern of seasonal versus long-term uplift provides constraints on the spatial and temporal characteristics of water-bearing units within the aquifer. The Silver Creek fault partitions the uplift behavior of the basin, suggesting that it acts as a hydrologic barrier to groundwater flow. While no tectonic creep is observed along the fault, the development of a low-permeability barrier that bisects the alluvium suggests that the fault has been active since the deposition of Quaternary units.", "title": "" }, { "docid": "6724f1e8a34a6d9f64a30061ce7f67c0", "text": "Mental contrasting with implementation intentions (MCII) has been found to improve self-regulation across many life domains. The present research investigates whether MCII can benefit time management. In Study 1, we asked students to apply MCII to a pressing academic problem and assessed how they scheduled their time for the upcoming week. MCII participants scheduled more time than control participants who in their thoughts either reflected on similar contents using different cognitive procedures (content control group) or applied the same cognitive procedures on different contents (format control group). In Study 2, students were taught MCII as a metacognitive strategy to be used on any upcoming concerns of the subsequent week. As compared to the week prior to the training, students in the MCII (vs. format control) condition improved in self-reported time management. In Study 3, MCII (vs. format control) helped working mothers who enrolled in a vocational business program to attend classes more regularly. The findings suggest that performing MCII on one’s everyday concerns improves time management.", "title": "" }, { "docid": "e84a03caf97b5a7ee1007c0eab78664d", "text": "We study a mini-batch diversification scheme for stochastic gradient descent (SGD). While classical SGD relies on uniformly sampling data points to form a mini-batch, we propose a non-uniform sampling scheme based on the Determinantal Point Process (DPP). The DPP relies on a similarity measure between data points and gives low probabilities to mini-batches which contain redundant data, and higher probabilities to mini-batches with more diverse data. This simultaneously balances the data and leads to stochastic gradients with lower variance. We term this approach Balanced Mini-batch SGD (BM-SGD). We show that regular SGD and stratified sampling emerge as special cases. Furthermore, BM-SGD can be considered a generalization of stratified sampling to cases where no discrete features exist to bin the data into groups. We show experimentally that our method results more interpretable and diverse features in unsupervised setups, and in better classification accuracies in supervised setups.", "title": "" }, { "docid": "69624e1501b897bf1a9f9a5a84132da3", "text": "360° videos and Head-Mounted Displays (HMDs) are getting increasingly popular. However, streaming 360° videos to HMDs is challenging. This is because only video content in viewers’ Field-of-Views (FoVs) is rendered, and thus sending complete 360° videos wastes resources, including network bandwidth, storage space, and processing power. Optimizing the 360° video streaming to HMDs is, however, highly data and viewer dependent, and thus dictates real datasets. However, to our best knowledge, such datasets are not available in the literature.
In this paper, we present our datasets of both content data (such as image saliency maps and motion maps derived from 360° videos) and sensor data (such as viewer head positions and orientations derived from HMD sensors). We put extra efforts to align the content and sensor data using the timestamps in the raw log files. The resulting datasets can be used by researchers, engineers, and hobbyists to either optimize existing 360° video streaming applications (like rate-distortion optimization) and novel applications (like crowd-driven camera movements). We believe that our dataset will stimulate more research activities along this exciting new research direction.", "title": "" }, { "docid": "3fb2635846f2339dbd68839c1359047e", "text": "The objectives were: (i) to present a method for assessing muscle pain during exercise, (ii) to provide reliability and validity data in support of the measurement tool, (iii) to test whether leg muscle pain threshold during exercise was related to a commonly used measure of pain threshold, (iv) to examine the relationship between pain and exertion ratings, (v) to test whether leg muscle pain is related to performance, and (vi) to test whether a large dose of aspirin would delay leg muscle pain threshold and/or reduce pain ratings during exercise. In study 1, seven females and seven males completed three 1-min cycling bouts at three different randomly ordered power outputs. Pain was assessed using a 10-point pain scale. High intraclass correlations (R from 0.88 to 0.98) indicated that pain intensity could be rated reliably using the scale. In study 2, 11 college-aged males (age 21.3 +/- 1.3 yr) performed a ramped (24 W.min-1) maximal cycle ergometry test. A button was depressed when leg muscle pain threshold was reached. Pain threshold occurred near 50% of maximal capacity: 50.3 (+/- 12.9% Wmax), 48.6 (+/- 14.8% VO2max), and 55.8 (+/- 12.9% RPEmax). Pain intensity ratings obtained following pain threshold were positively accelerating function of the relative exercise intensity. Volitional exhaustion was associated with pain ratings of 8.2 (+/- 2.5), a value most closely associated with the verbal anchor \"very strong pain.\" In study 3, participants completed the same maximal exercise test as in study 2 as well as leg cycling at 60 rpm for 8 s at four randomly ordered power outputs (100, 150, 200, and 250 W) on a separate day. Pain and RPE ratings were significantly lower during the 8-s bouts compared to those obtained at the same power outputs during the maximal cycle test. The results suggest that noxious metabolites of muscle contraction play a role in leg muscle pain during exercise. In study 4, moderately active male subjects (N = 19) completed two ramped maximal cycle ergometry tests. Subjects drank a water and Kool-Aid mixture, that either was or was not (placebo) combined with a 20 mg.kg-1 dose of powdered aspirin 60 min before exercise.
Paired t-tests revealed no differences between conditions for the measures of exercise intensity at pain threshold [aspirin vs placebo mean (+/- SD)]: power output: 150 (+/- 60.3 W) versus 153.5 (+/- 64.8 W); VO2: 21.3 (+/- 8.6 mL.kg-1.min-1) versus 22.1 (+/- 10.0 mL.kg-1.min-1); and RPE: 10.9 (+/- 3.1) versus 11.4 (+/- 2.9). Repeated measures ANOVA revealed no significant condition main effect or condition by trial interaction for pain responses during recovery or during exercise at 60, 70, 80, 90, and 100% of each condition's peak power output. It is concluded that the perception of leg muscle pain intensity during cycle ergometry: (i) is reliably and validly measured using the developed 10-point pain scale, (ii) covaries as a function of objective exercise stimuli such as power output, (iii) is distinct from RPE, (iv) is unrelated to performance of the type employed here, and (v) is not altered by the ingestion of 20 mg.kg-1 acetylsalicylic acid 1 h prior to the exercise bout.", "title": "" }, { "docid": "a6f2cee851d2c22d471f473caf1710a1", "text": "One of the main reasons why Byzantine fault-tolerant (BFT) systems are currently not widely used lies in their high resource consumption: $3f+1$ replicas are required to tolerate only $f$ faults. Recent works have been able to reduce the minimum number of replicas to $2f+1$ by relying on trusted subsystems that prevent a faulty replica from making conflicting statements to other replicas without being detected. Nevertheless, having been designed with the focus on fault handling, during normal-case operation these systems still use more resources than actually necessary to make progress in the absence of faults. This paper presents Resource-efficient Byzantine Fault Tolerance (ReBFT), an approach that minimizes the resource usage of a BFT system during normal-case operation by keeping $f$ replicas in a passive mode. In contrast to active replicas, passive replicas neither participate in the agreement protocol nor execute client requests; instead, they are brought up to speed by verified state updates provided by active replicas. In case of suspected or detected faults, passive replicas are activated in a consistent manner. To underline the flexibility of our approach, we apply ReBFT to two existing BFT systems: PBFT and MinBFT.", "title": "" }, { "docid": "a05b34697055678a607ab4db4d87fa07", "text": "This paper presents a novel set of image descriptors that encodes information from color, shape, spatial and local features of an image to improve upon the popular Pyramid of Histograms of Oriented Gradients (PHOG) descriptor for object and scene image classification.
In particular, a new Gabor-PHOG (GPHOG) image descriptor created by enhancing the local features of an image using multiple Gabor filters is first introduced for feature extraction. Second, a comparative assessment of the classification performance of the GPHOG descriptor is made in grayscale and six different color spaces to further propose two novel color GPHOG descriptors that perform well on different object and scene image categories. Finally, an innovative Fused Color GPHOG (FC–GPHOG) descriptor is presented by integrating the Principal Component Analysis (PCA) features of the GPHOG descriptors in the six color spaces to combine color, shape and local feature information. Feature extraction for the proposed descriptors employs PCA and Enhanced Fisher Model (EFM), and the nearest neighbor rule is used for final classification. Experimental results using the MIT Scene dataset and the Caltech 256 object categories dataset show that the proposed new FC–GPHOG descriptor achieves a classification performance better than or comparable to other popular image descriptors, such as the Scale Invariant Feature Transform (SIFT) based Pyramid Histograms of visual Words descriptor, Color SIFT four Concentric Circles, Spatial Envelope, and Local Binary Patterns.", "title": "" }, { "docid": "c8bbc713aecbc6682d21268ee58ca258", "text": "Traditional approaches to knowledge base completion have been based on symbolic representations. Lowdimensional vector embedding models proposed recently for this task are attractive since they generalize to possibly unlimited sets of relations. A significant drawback of previous embedding models for KB completion is that they merely support reasoning on individual relations (e.g., bornIn(X,Y )⇒ nationality(X,Y )). In this work, we develop models for KB completion that support chains of reasoning on paths of any length using compositional vector space models. We construct compositional vector representations for the paths in the KB graph from the semantic vector representations of the binary relations in that path and perform inference directly in the vector space. Unlike previous methods, our approach can generalize to paths that are unseen in training and, in a zero-shot setting, predict target relations without supervised training data for that relation.", "title": "" } ]
scidocsrr
23eb9aea042e83050378f7c8b5e832c2
CARET model checking for malware detection
[ { "docid": "453af7094a854afd1dfb2e7dc36a7cca", "text": "In this paper, we propose a new approach for the static detection of malicious code in executable programs. Our approach rests on a semantic analysis based on behaviour that even makes possible the detection of unknown malicious code. This analysis is carried out directly on binary code. Static analysis offers techniques for predicting properties of the behaviour of programs without running them. The static analysis of a given binary executable is achieved in three major steps: construction of an intermediate representation, flow-based analysis that catches security-oriented program behaviour, and static verification of critical behaviours against security policies (model checking). 1. Motivation and Background With the advent and the rising popularity of networks, Internet, intranets and distributed systems, security is becoming one of the focal points of research. As a matter of fact, more and more people are concerned with malicious code that could exist in software products. A malicious code is a piece of code that can affect the secrecy, the integrity, the data and control flow, and the functionality of a system. Therefore, their detection is a major concern within the computer science community as well as within the user community. As malicious code can affect the data and control flow of a program, static flow analysis may naturally be helpful as part of the detection process. In this paper, we address the problem of static detection of malicious code in binary executables. The primary objective of this research initiative is to elaborate practical methods and tools with robust theoretical foundations for the static detection of malicious code. The rest of the paper is organized in the following way. Section 2 is devoted to a comparison of static and dynamic approaches. Section 3 presents our approach to the detection of malices in binary executable code. Section 4 discusses the implementation of our approach. Finally, a few remarks and a discussion of future research are ultimately sketched as a conclusion in Section 5. 2. Static vs dynamic analysis There are two main approaches for the detection of malices: static analysis and dynamic analysis. Static analysis consists in examining the code of programs to determine properties of the dynamic execution of these programs without running them. This technique has been used extensively in the past by compiler developers to carry out various analyses and transformations aiming at optimizing the code [10]. Static analysis is also used in reverse engineering of software systems and for program understanding [3, 4]. Its use for the detection of malicious code is fairly recent. Dynamic analysis mainly consists in monitoring the execution of a program to detect malicious behaviour. Static analysis has the following advantages over dynamic analysis: • Static analysis techniques permit to make exhaustive analysis. They are not bound to a specific execution of a program and can give guarantees that apply to all executions of the program. In contrast, dynamic analysis techniques only allow examination of behaviours that correspond to selected test cases.
• A verdict can be given before execution, where it may be difficult to determine the proper action to take in the presence of malices. • There is no run-time overhead. However, it may be impossible to certify statically that certain properties hold (e.g., due to undecidability). In this case, dynamic monitoring may be the only solution. Thus, static analysis and dynamic analysis are complementary. Static analysis can be used first, and properties that cannot be asserted statically can be monitored dynamically. As mentioned in the introduction, in this paper, we are concerned with static analysis techniques. Not much has been published about their use for the detection of malicious code. In [8], the authors propose a method for statically detecting malicious code in C programs. Their method is based on so-called tell-tale signs, which are program properties that allow one to distinguish between malicious and benign programs. The authors combine the tell-tale sign approach with program slicing in order to produce small fragments of large programs that can be easily analyzed. 3. Description of the Approach Static analysis techniques are generally used to operate on source code. However, as we explained in the introduction, we need to apply them to binary code, and thus, we had to adapt and evolve these techniques. Our approach is structured in three major steps: Firstly, the binary code is translated into an internal intermediate form (see Section 3.1) ; secondly, this intermediate form is abstracted through flowbased analysis as various relevant graphs (controlflow graph, data-flow graph, call graph, critical-API 1 graph, etc.) (Section 3.2); the third step is the static verification and consists in checking these graphs against security policies (Section 3.3). 3.1 Intermediate Representation A binary executable is the machine code version of a high-level or assembly program that has been compiled (or assembled) and linked for a particular platform and operating system. The general format of binary executables varies widely among operating systems. For example, the Portable Executable format (PE) is used by the Windows NT/98/95 operating system. The PE format includes comprehensive information about the different sections of the program that form the main part of the file, including the following segments: • .text, which contains the code and the entry point of the application, • .data, which contains various type of data, • .idata and .edata, which contain respectively the list of imported and exported APIs for an application or a Dynamic-Linking Library (DLL). The code segment (.text) constitutes the main part of the file; in fact, this section contains all the code that is to be analyzed. In order to translate an executable program into an equivalent high-level-language program, we use the disassembly tool IDA32 Pro [7], which can disassemble various types of executable files (ELF, EXE, PE, etc.) for several processors and operating systems (Windows 98, Windows NT, etc.). Also, IDA32 automatically recognizes calls to the standard libraries (i.e., API calls) for a long list of compilers. Statically analysing a program requires the construction of the syntax tree of this program, also called intermediate representation. The various techniques of static analysis are based on this abstract representation. The goal of the first step is to disassemble the binary code and then to parse the assembly code thus generated to produce the syntax tree (Figure 1). 
", "title": "" } ]
[ { "docid": "270def19bfb0352d38d30ed8389d6c2a", "text": "Morphology plays an important role in behavioral and locomotion strategies of living and artificial systems. There is biological evidence that adaptive morphological changes can not only extend dynamic performances by reducing tradeoffs during locomotion but also provide new functionalities. In this article, we show that adaptive morphology is an emerging design principle in robotics that benefits from a new generation of soft, variable-stiffness, and functional materials and structures. When moving within a given environment or when transitioning between different substrates, adaptive morphology allows accommodation of opposing dynamic requirements (e.g., maneuverability, stability, efficiency, and speed). Adaptive morphology is also a viable solution to endow robots with additional functionalities, such as transportability, protection, and variable gearing. We identify important research and technological questions, such as variable-stiffness structures, in silico design tools, and adaptive control systems to fully leverage adaptive morphology in robotic systems.", "title": "" }, { "docid": "54d293423026d84bce69e8e073ebd6ac", "text": "AIMS\nPredictors of Response to Cardiac Resynchronization Therapy (CRT) (PROSPECT) was the first large-scale, multicentre clinical trial that evaluated the ability of several echocardiographic measures of mechanical dyssynchrony to predict response to CRT. Since response to CRT may be defined as a spectrum and likely influenced by many factors, this sub-analysis aimed to investigate the relationship between baseline characteristics and measures of response to CRT.\n\n\nMETHODS AND RESULTS\nA total of 286 patients were grouped according to relative reduction in left ventricular end-systolic volume (LVESV) after 6 months of CRT: super-responders (reduction in LVESV > or =30%), responders (reduction in LVESV 15-29%), non-responders (reduction in LVESV 0-14%), and negative responders (increase in LVESV). In addition, three subgroups were formed according to clinical and/or echocardiographic response: +/+ responders (clinical improvement and a reduction in LVESV > or =15%), +/- responders (clinical improvement or a reduction in LVESV > or =15%), and -/- responders (no clinical improvement and no reduction in LVESV > or =15%). Differences in clinical and echocardiographic baseline characteristics between these subgroups were analysed. Super-responders were more frequently females, had non-ischaemic heart failure (HF), and had a wider QRS complex and more extensive mechanical dyssynchrony at baseline. Conversely, negative responders were more frequently in New York Heart Association class IV and had a history of ventricular tachycardia (VT). Combined positive responders after CRT (+/+ responders) had more non-ischaemic aetiology, more extensive mechanical dyssynchrony at baseline, and no history of VT.\n\n\nCONCLUSION\nSub-analysis of data from PROSPECT showed that gender, aetiology of HF, QRS duration, severity of HF, a history of VT, and the presence of baseline mechanical dyssynchrony influence clinical and/or LV reverse remodelling after CRT. 
Although integration of information about these characteristics would improve patient selection and counselling for CRT, further randomized controlled trials are necessary prior to changing the current guidelines regarding patient selection for CRT.", "title": "" }, { "docid": "76c279b79355efa4d357655e56e84f3d", "text": "BACKGROUND\nHypertension has proven to be a strong liability with 13.5% of all mortality worldwide being attributed to elevated blood pressures in 2001. An accurate blood pressure measurement lies at the crux of an appropriate diagnosis. Despite the mercury sphygmomanometer being the gold standard, the ongoing deliberation as to whether mercury sphygmomanometers should be replaced with the automated oscillometric devices stems from the risk mercury poses to the environment.\n\n\nAIM\nThis study was performed to check the validity of automated oscillometric blood pressure measurements as compared to the manual blood pressure measurements in Karachi, Pakistan.\n\n\nMATERIAL AND METHODS\nBlood pressure was recorded in 200 individuals aged 15 and above using both, an automated oscillometric blood pressure device (Dinamap Procare 100) and a manual mercury sphygmomanometer concomitantly. Two nurses were assigned to each patient and the device, arm for taking the reading and nurses were randomly determined. SPSS version 20 was used for analysis. Mean and standard deviation of the systolic and diastolic measurements from each modality were compared to each other and P values of 0.05 or less were considered to be significant. Validation criteria of British Hypertension Society (BHS) and the US Association for the Advancement of Medical Instrumentation (AAMI) were used.\n\n\nRESULTS\nTwo hundred patients were included. The mean of the difference of systolic was 8.54 ± 9.38 while the mean of the difference of diastolic was 4.21 ± 7.88. Patients were further divided into three groups of different systolic blood pressure <= 120, > 120 to = 150 and > 150, their means were 6.27 ± 8.39 (p-value 0.175), 8.91 ± 8.96 (p-value 0.004) and 10.98 ± 10.49 (p-value 0.001) respectively. In our study 89 patients were previously diagnosed with hypertension; their difference of mean systolic was 9.43 ± 9.89 (p-value 0.000) and difference of mean diastolic was 4.26 ± 7.35 (p-value 0.000).\n\n\nCONCLUSIONS\nSystolic readings from a previously validated device are not reliable when used in the ER and they show a higher degree of incongruency and inaccuracy when they are used outside validation settings. Also, readings from the right arm tend to be more precise.", "title": "" }, { "docid": "33be5718d8a60f36e5faaa0cc4f0019f", "text": "Most of our daily activities are now moving online in the big data era, with more than 25 billion devices already connected to the Internet, to possibly over a trillion in a decade. However, big data also bears a connotation of &ldquo;big brother&rdquo; when personal information (such as sales transactions) is being ubiquitously collected, stored, and circulated around the Internet, often without the data owner's knowledge. Consequently, a new paradigm known as online privacy or Internet privacy is becoming a major concern regarding the privacy of personal and sensitive data.", "title": "" }, { "docid": "2ee579f06ca68d13823f8576122c20fe", "text": "Current trends in distributed denial of service (DDoS) attacks show variations in terms of attack motivation, planning, infrastructure, and scale. 
“DDoS-for-Hire” and “DDoS mitigation as a Service” are the two services, which are available to attackers and victims, respectively. In this work, we provide a fundamental difference between a “regular” DDoS attack and an “extreme” DDoS attack. We conduct DDoS attacks on cloud services, where having the same attack features, two different services show completely different consequences, due to the difference in the resource utilization per request. We study various aspects of these attacks and find out that the DDoS mitigation service’s performance is dependent on two factors. One factor is related to the severity of the “resource-race” with the victim web-service. Second factor is “attack cooling down period” which is the time taken to bring the service availability post detection of the attack. Utilizing these two important factors, we propose a supporting framework for the DDoS mitigation services, by assisting in reducing the attack mitigation time and the overall downtime. This novel framework comprises of an affinity-based victim-service resizing algorithm to provide performance isolation, and a TCP tuning technique to quickly free the attack connections, hence minimizing the attack cooling down period. We evaluate the proposed novel techniques with real attack instances and compare various attack metrics. Results show a significant improvement to the performance of DDoS mitigation service, providing quick attack mitigation. The presence of proposed DDoS mitigation support framework demonstrated a major reduction of more than 50% in the service downtime.", "title": "" }, { "docid": "43850ef433d1419ed37b7b12f3ff5921", "text": "We have seen ten years of the application of AI planning to the problem of narrative generation in Interactive Storytelling (IS). In that time planning has emerged as the dominant technology and has featured in a number of prototype systems. Nevertheless key issues remain, such as how best to control the shape of the narrative that is generated (e.g., by using narrative control knowledge, i.e., knowledge about narrative features that enhance user experience) and also how best to provide support for real-time interactive performance in order to scale up to more realistic sized systems. Recent progress in planning technology has opened up new avenues for IS and we have developed a novel approach to narrative generation that builds on this. Our approach is to specify narrative control knowledge for a given story world using state trajectory constraints and then to treat these state constraints as landmarks and to use them to decompose narrative generation in order to address scalability issues and the goal of real-time performance in larger story domains. This approach to narrative generation is fully implemented in an interactive narrative based on the “Merchant of Venice.” The contribution of the work lies both in our novel use of state constraints to specify narrative control knowledge for interactive storytelling and also our development of an approach to narrative generation that exploits such constraints.
In the article we show how the use of state constraints can provide a unified perspective on important problems faced in IS.", "title": "" }, { "docid": "8bc615dfa51a9c5835660c1b0eb58209", "text": "Large scale grid connected photovoltaic (PV) energy conversion systems have reached the megawatt level. This imposes new challenges on existing grid interface converter topologies and opens new opportunities to be explored. In this paper a new medium voltage multilevel-multistring configuration is introduced based on a three-phase cascaded H-bridge (CHB) converter and multiple string dc-dc converters. The proposed configuration enables a large increase of the total capacity of the PV system, while improving power quality and efficiency. The converter structure is very flexible and modular since it decouples the grid converter from the PV string converter, which allows to accomplish independent control goals. The main challenge of the proposed configuration is to handle the inherent power imbalances that occur not only between the different cells of one phase of the converter but also between the three phases. The control strategy to deal with these imbalances is also introduced in this paper. Simulation results of a 7-level CHB for a multistring PV system are presented to validate the proposed topology and control method.", "title": "" }, { "docid": "b1746ab2946c51bcd10360d051da351f", "text": "BACKGROUND AND OBJECTIVE\nThe ICD-9-CM adaptation of the Charlson comorbidity score has been a valuable resource for health services researchers. With the transition into ICD-10 coding worldwide, an ICD-10 version of the Deyo adaptation was developed and validated using population-based hospital data from Victoria, Australia.\n\n\nMETHODS\nThe algorithm was translated from ICD-9-CM into ICD-10-AM (Australian modification) in a multistep process. After a mapping algorithm was used to develop an initial translation, these codes were manually examined by the coding experts and a general physician for face validity. Because the ICD-10 system is country specific, our goal was to keep many of the translated code at the three-digit level for generalizability of the new index.\n\n\nRESULTS\nThere appears to be little difference in the distribution of the Charlson Index score between the two versions. A strong association between increasing index scores and mortality exists: the area under the ROC curve is 0.865 for the last year using the ICD-9-CM version and remains high, at 0.855, for the ICD-10 version.\n\n\nCONCLUSION\nThis work represents the first rigorous adaptation of the Charlson comorbidity index for use with ICD-10 data. In comparison with a well-established ICD-9-CM coding algorithm, it yields closely similar prevalence and prognosis information by comorbidity category.", "title": "" }, { "docid": "035bfa3cb164cb6d10a7b496c3e74854", "text": "Question Answering (QA) systems over Knowledge Graphs (KG) automatically answer natural language questions using facts contained in a knowledge graph. Simple questions, which can be answered by the extraction of a single fact, constitute a large part of questions asked on the web but still pose challenges to QA systems, especially when asked against a large knowledge resource. Existing QA systems usually rely on various components each specialised in solving different sub-tasks of the problem (such as segmentation, entity recognition, disambiguation, and relation classification etc.). 
In this work, we follow a quite different approach: We train a neural network for answering simple questions in an end-to-end manner, leaving all decisions to the model. It learns to rank subject-predicate pairs to enable the retrieval of relevant facts given a question. The network contains a nested word/character-level question encoder which allows to handle out-of-vocabulary and rare word problems while still being able to exploit word-level semantics. Our approach achieves results competitive with state-of-the-art end-to-end approaches that rely on an attention mechanism.", "title": "" }, { "docid": "09623c821f05ffb7840702a5869be284", "text": "Area-restricted search (ARS) is a foraging strategy used by many animals to locate resources. The behavior is characterized by a time-dependent reduction in turning frequency after the last resource encounter. This maximizes the time spent in areas in which resources are abundant and extends the search to a larger area when resources become scarce. We demonstrate that dopaminergic and glutamatergic signaling contribute to the neural circuit controlling ARS in the nematode Caenorhabditis elegans. Ablation of dopaminergic neurons eliminated ARS behavior, as did application of the dopamine receptor antagonist raclopride. Furthermore, ARS was affected by mutations in the glutamate receptor subunits GLR-1 and GLR-2 and the EAT-4 glutamate vesicular transporter. Interestingly, preincubation on dopamine restored the behavior in worms with defective dopaminergic signaling, but not in glr-1, glr-2, or eat-4 mutants. This suggests that dopaminergic and glutamatergic signaling function in the same pathway to regulate turn frequency. Both GLR-1 and GLR-2 are expressed in the locomotory control circuit that modulates the direction of locomotion in response to sensory stimuli and the duration of forward movement during foraging. We propose a mechanism for ARS in C. elegans in which dopamine, released in response to food, modulates glutamatergic signaling in the locomotory control circuit, thus resulting in an increased turn frequency.", "title": "" }, { "docid": "d103d856c51a4744d563dff2eff224a7", "text": "Automotive engines is an important application for model-based diagnosis because of legislative regulations. A diagnosis system for the air-intake system of a turbo-charged engine is constructed. The design is made in a systematic way and follows a framework of hypothesis testing. Different types of sensor faults and leakages are considered. It is shown how many different types of fault models, e.g., additive and multiplicative faults, can be used within one common diagnosis system, and using the same underlying design principle. The diagnosis system is experimentally validated on a real engine using industry-standard dynamic test-cycles.", "title": "" }, { "docid": "ee5fbcc34536f675cadb8e20eb6eb520", "text": "This work addresses employing direct and indirect discretization methods to obtain a rational discrete approximation of continuous time parallel fractional PID controllers. The different approaches are illustrated by implementing them on an example.", "title": "" }, { "docid": "d7aeb8de7bf484cbaf8e23fcf675d002", "text": "One method for detecting fraud is to check for suspicious changes in user behavior. This paper proposes a novel method, built upon ontology and ontology instance similarity. Ontology is now widely used to enable knowledge sharing and reuse, so some personality ontologies can be easily used to present user behavior. 
By measuring the similarity of ontology instances, we can determine whether an account is defrauded. This method lowers the data model cost and makes the system very adaptive to different applications.", "title": "" }, { "docid": "1c4e71d00521219717607cbef90b5bec", "text": "The design of security for cyber-physical systems must take into account several characteristics common to such systems. Among these are feedback between the cyber and physical environment, distributed management and control, uncertainty, real-time requirements, and geographic distribution. This paper discusses these characteristics and suggests a design approach that better integrates security into the core design of the system. A research roadmap is presented that highlights some of the missing pieces needed to enable such an approach. 1. What is a Cyber-Physical-System? The term cyber-physical system has been applied to many problems, ranging from robotics, through SCADA, and distributed control systems. Not all cyber-physical systems involve critical infrastructure, but there are common elements that change the nature of the solutions that must be considered when securing cyber-physical systems. First, the extremely critical nature of activities performed by some cyber-physical systems means that we need security that works, and that by itself means we need something different. All kidding aside, there are fundamental system differences in cyber-physical systems that will force us to look at security in ways more closely tied to the physical application. It is my position that by focusing on these differences we can see where new (or rediscovered) approaches are needed, and that by building systems that support the inclusion of security as part of the application architecture, we can improve the security of both cyber-physical systems, where such an approach is most clearly warranted, as well as improve the security of cyber-only systems, where such an approach is more easily ignored. In this position paper I explain the characteristics of cyber-physical systems that must drive new research in security. I discuss the security problem areas that need attention because of these characteristics and I describe a design methodology for security that provides for better integration of security design with application design. Finally, I suggest some of the components of future systems that can help us include security as a focusing issue in the architectural design of critical applications.", "title": "" }, { "docid": "8adc8d2bf7f26d43ed0656126f50566a", "text": "Framing is a potentially useful paradigm for examining the strategic creation of public relations messages and audience responses. Based on a literature review across disciplines, this article identifies 7 distinct types of framing applicable to public relations. These involve the framing of situations, attributes, choices, actions, issues, responsibility, and news. Potential applications for public relations practice and research are discussed.", "title": "" }, { "docid": "1fa7c954f5e352679c33d8946f4cac4e", "text": "In some cases, such as in the estimation of impulse responses, it has been found that for plausible sample sizes the coverage accuracy of single bootstrap confidence intervals can be poor. The error in the coverage probability of single bootstrap confidence intervals may be reduced by the use of double bootstrap confidence intervals.
The computer resources required for double bootstrap confidence intervals are often prohibitive, especially in the context of Monte Carlo studies. Double bootstrap confidence intervals can be estimated using computational algorithms incorporating simple deterministic stopping rules that avoid unnecessary computations. These algorithms may make the use and Monte Carlo evaluation of double bootstrap confidence intervals feasible in cases where otherwise they would not be feasible. The efficiency gains due to the use of these algorithms are examined by means of a Monte Carlo study for examples of confidence intervals for a mean and for the cumulative impulse response in a second order autoregressive model.", "title": "" }, { "docid": "4c0dc05d6571a5411be60320893c65db", "text": "Online labor markets, such as Amazon's Mechanical Turk, have been used to crowdsource simple, short tasks like image labeling and transcription. However, expert knowledge is often lacking in such markets, making it impossible to complete certain classes of tasks. In this work we introduce an alternative mechanism for crowdsourcing tasks that require specialized knowledge or skill: communitysourcing --- the use of physical kiosks to elicit work from specific populations. We investigate the potential of communitysourcing by designing, implementing and evaluating Umati: the communitysourcing vending machine. Umati allows users to earn credits by performing tasks using a touchscreen attached to the machine. Physical rewards (in this case, snacks) are dispensed through traditional vending mechanics. We evaluated whether communitysourcing can accomplish expert work by using Umati to grade Computer Science exams. We placed Umati in a university Computer Science building, targeting students with grading tasks for snacks. Over one week, 328 unique users (302 of whom were students) completed 7771 tasks (7240 by students). 80% of users had never participated in a crowdsourcing market before. We found that Umati was able to grade exams with 2% higher accuracy (at the same price) or at 33% lower cost (at equivalent accuracy) than traditional single-expert grading. Mechanical Turk workers had no success grading the same exams. These results indicate that communitysourcing can successfully elicit high-quality expert work from specific communities.", "title": "" }, { "docid": "aaf3f18581f141355a5865883a30759a", "text": "Matrix factorization is a fundamental problem that is often encountered in many computer vision and machine learning tasks. In recent years, enhancing the robustness of matrix factorization methods has attracted much attention in the research community. To benefit from the strengths of full Bayesian treatment over point estimation, we propose here a full Bayesian approach to robust matrix factorization. For the generative process, the model parameters have conjugate priors and the likelihood (or noise model) takes the form of a Laplace mixture. For Bayesian inference, we devise an efficient sampling algorithm by exploiting a hierarchical view of the Laplace distribution. Besides the basic model, we also propose an extension which assumes that the outliers exhibit spatial or temporal proximity as encountered in many computer vision applications. 
The proposed methods give competitive experimental results when compared with several state-of-the-art methods on some benchmark image and video processing tasks.", "title": "" }, { "docid": "d4dc33b15df0a27259180fef3c28b546", "text": "Author name ambiguity is one of the problems that decrease the quality and reliability of information retrieved from digital libraries. Existing methods have tried to solve this problem by predefining a feature set based on expert’s knowledge for a specific dataset. In this paper, we propose a new approach which uses deep neural network to learn features automatically for solving author name ambiguity. Additionally, we propose the general system architecture for author name disambiguation on any dataset. We evaluate the proposed method on a dataset containing Vietnamese author names. The results show that this method significantly outperforms other methods that use predefined feature set. The proposed method achieves 99.31% in terms of accuracy. Prediction error rate decreases from 1.83% to 0.69%, i.e., it decreases by 1.14%, or 62.3% relatively compared with other methods that use predefined feature set (Table 3).", "title": "" }, { "docid": "a7d7c7ae9da5936f050443f684f48916", "text": "There is growing evidence for the presence of viable microorganisms in geological salt formations that are millions of years old. It is still not known, however, whether these bacteria are dormant organisms that are themselves millions of years old or whether the salt crystals merely provide a habitat in which contemporary microorganisms can grow, perhaps interspersed with relatively short periods of dormancy (McGenity et al. 2000). Vreeland, Rosenzweig and Powers (2000) have recently reported the isolation and growth of a halotolerant spore-formingBacillus species from a brine inclusion within a 250-Myr-old salt crystal from the Permian Salado Formation in New Mexico. This bacterium, Bacillus strain 2-9-3, was informally christened Bacillus permians, and a 16S ribosomal RNA gene was sequenced and deposited in GenBank under the name B. permians (accession number AF166093). It has been claimed thatB. permians was trapped inside the salt crystal 250 MYA and survived within the crystal until the present, most probably as a spore. Serious doubts have been raised concerning the possibility of spore survival for 250 Myr (Tomas Lindahl, personal communication), mostly because spores contain no active DNA repair enzymes, so the DNA is expected to decay into small fragments due to such factors as the natural radioactive radiation in the soil, and the bacterium is expected to lose its viability within at most several hundred years (Lindahl 1993). In this note, we apply theproof-of-the-pudding-is-in-the-eating principle to test whether the newly reported B. permians 16S ribosomal RNA gene sequence is ancient or not. There are several reasons to doubt the antiquity of B. permians. The first concerns the extraordinary similarity of its 16S rRNA gene sequence to that of Bacillus marismortui. Bacillus marismortui was described by Arahal et al. (1999) as a moderately halophilic species from the Dead Sea and was later renamed Salibacillus marismortui (Arahal et al. 2000). TheB. permians sequence differs from that of S. marismortui by only one transition and one transversion out of the 1,555 aligned and unambiguously determined nucleotides. In comparison, the 16S rRNA gene fromStaphylococcus succinus, which was claimed to be ‘‘25–35 million years old’’ (Lambert et al. 
1998), differs from its homolog in its closest present-day relative (a urinary pathogen called Staphylococcus saprophyticus) by 19 substitutions out of 1,525 aligned nucleotides. Using Kimura’s (1980) two-parameter model, the difference between the B. permians and S. marismortui sequences translates into 1.3", "title": "" } ]
scidocsrr
a985dd470a44af9003a57e24ab4066bc
Leveraging mid-level deep representations for predicting face attributes in the wild
[ { "docid": "af56806a30f708cb0909998266b4d8c1", "text": "There are many excellent toolkits which provide support for developing machine learning software in Python, R, Matlab, and similar environments. Dlib-m l is an open source library, targeted at both engineers and research scientists, which aims to pro vide a similarly rich environment for developing machine learning software in the C++ language. T owards this end, dlib-ml contains an extensible linear algebra toolkit with built in BLAS supp ort. It also houses implementations of algorithms for performing inference in Bayesian networks a nd kernel-based methods for classification, regression, clustering, anomaly detection, and fe atur ranking. To enable easy use of these tools, the entire library has been developed with contract p rogramming, which provides complete and precise documentation as well as powerful debugging too ls.", "title": "" } ]
[ { "docid": "73d09f005f9335827493c3c47d02852b", "text": "Multiprotocol Label Switched Networks need highly intelligent controls to manage high volume traffic due to issues of traffic congestion and best path selection. The work demonstrated in this paper shows results from simulations for building optimal fuzzy based algorithm for traffic splitting and congestion avoidance. The design and implementation of Fuzzy based software defined networking is illustrated by introducing the Fuzzy Traffic Monitor in an ingress node. Finally, it displays improvements in the terms of mean delay (42.0%) and mean loss rate (2.4%) for Video Traffic. Then, the resu1t shows an improvement in the terms of mean delay (5.4%) and mean loss rate (3.4%) for Data Traffic and an improvement in the terms of mean delay(44.9%) and mean loss rate(4.1%) for Voice Traffic as compared to default MPLS implementation. Keywords—Multiprotocol Label Switched Networks; Fuzzy Traffic Monitor; Network Simulator; Ingress; Traffic Splitting; Fuzzy Logic Control System; Label setup System; Traffic Splitting System", "title": "" }, { "docid": "2746d538694db54381639e5e5acdb4ca", "text": "In the present research, the aqueous stability of leuprolide acetate (LA) in phosphate buffered saline (PBS) medium was studied (pH = 2.0-7.4). For this purpose, the effect of temperature, dissolved oxygen and pH on the stability of LA during 35 days was investigated. Results showed that the aqueous stability of LA was higher at low temperatures. Degassing of the PBS medium partially increased the stability of LA at 4 °C, while did not change at 37 °C. The degradation of LA was accelerated at lower pH values. In addition, complexes of LA with different portions of β-cyclodextrin (β-CD) were prepared through freeze-drying procedure and characterized by Fourier transform infrared (FTIR) and differential scanning calorimetry (DSC) analyses. Studying their aqueous stability at various pH values (2.0-7.4) showed LA/β-CD complexes exhibited higher stability when compared with LA at all pH values. The stability of complexes was also improved by increasing the portion of LA/β-CD up to 1/10.", "title": "" }, { "docid": "997228bb93bc851498877047fec4a42f", "text": "A method with clear guidelines is presented to design compact planar phase shifters with ultra-wideband (UWB) characteristics. The proposed method exploits broadside coupling between top and bottom elliptical microstrip patches via an elliptical slot located in the mid layer, which forms the ground plane. A theoretical model is used to analyze performance of the proposed devices. The model shows that it is possible to design high-performance UWB phase shifters for the 25deg-48deg range using the proposed structure. The method is used to design 30deg and 45deg phase shifters that have compact size, i.e., 2.5 cm times 2 cm. The simulated and measured results show that the designed phase shifters achieve better than plusmn3deg differential phase stability, less than 1-dB insertion loss, and better than 10-dB return loss across the UWB, i.e., 3.1-10.6 GHz.", "title": "" }, { "docid": "c25144cf41462c58820fdcd3652e9fec", "text": "0957-4174/$ see front matter 2009 Elsevier Ltd. A doi:10.1016/j.eswa.2009.02.043 * Corresponding author. Tel.: +3", "title": "" }, { "docid": "714242b8967ef68c022e568ef2fe01dd", "text": "Visual localization is a key step in many robotics pipelines, allowing the robot to (approximately) determine its position and orientation in the world. 
An efficient and scalable approach to visual localization is to use image retrieval techniques. These approaches identify the image most similar to a query photo in a database of geo-tagged images and approximate the query’s pose via the pose of the retrieved database image. However, image retrieval across drastically different illumination conditions, e.g. day and night, is still a problem with unsatisfactory results, even in this age of powerful neural models. This is due to a lack of a suitably diverse dataset with true correspondences to perform end-to-end learning. A recent class of neural models allows for realistic translation of images among visual domains with relatively little training data and, most importantly, without ground-truth pairings. In this paper, we explore the task of accurately localizing images captured from two traversals of the same area in both day and night. We propose ToDayGAN – a modified imagetranslation model to alter nighttime driving images to a more useful daytime representation. We then compare the daytime and translated night images to obtain a pose estimate for the night image using the known 6-DOF position of the closest day image. Our approach improves localization performance by over 250% compared the current state-of-the-art, in the context of standard metrics in multiple categories.", "title": "" }, { "docid": "e7232201e629e45b1f8f9a49cb1fdedf", "text": "Semantic Data Mining refers to the data mining tasks that systematically incorporate domain knowledge, especially formal semantics, into the process. In the past, many research efforts have attested the benefits of incorporating domain knowledge in data mining. At the same time, the proliferation of knowledge engineering has enriched the family of domain knowledge, especially formal semantics and Semantic Web ontologies. Ontology is an explicit specification of conceptualization and a formal way to define the semantics of knowledge and data. The formal structure of ontology makes it a nature way to encode domain knowledge for the data mining use. In this survey paper, we introduce general concepts of semantic data mining. We investigate why ontology has the potential to help semantic data mining and how formal semantics in ontologies can be incorporated into the data mining process. We provide detail discussions for the advances and state of art of ontology-based approaches and an introduction of approaches that are based on other form of knowledge representations.", "title": "" }, { "docid": "89bcf5b0af2f8bf6121e28d36ca78e95", "text": "3 Relating modules to external clinical traits 2 3.a Quantifying module–trait associations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 3.b Gene relationship to trait and important modules: Gene Significance and Module Membership . . . . 2 3.c Intramodular analysis: identifying genes with high GS and MM . . . . . . . . . . . . . . . . . . . . . . 3 3.d Summary output of network analysis results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4", "title": "" }, { "docid": "ec4bde3a67cccca41ca3e7af00072f1c", "text": "Single-nucleus RNA sequencing (sNuc-seq) profiles RNA from tissues that are preserved or cannot be dissociated, but it does not provide high throughput. Here, we develop DroNc-seq: massively parallel sNuc-seq with droplet technology. 
We profile 39,111 nuclei from mouse and human archived brain samples to demonstrate sensitive, efficient, and unbiased classification of cell types, paving the way for systematic charting of cell atlases.", "title": "" }, { "docid": "9c1f7c4fc30a10f306354f83f6b8d9cd", "text": "A unified and powerful approach is presented for devising polynomial approximation schemes for many strongly NP-complete problems. Such schemes consist of families of approximation algorithms for each desired performance bound on the relative error ε > &Ogr;, with running time that is polynomial when ε is fixed. Though the polynomiality of these algorithms depends on the degree of approximation ε being fixed, they cannot be improved, owing to a negative result stating that there are no fully polynomial approximation schemes for strongly NP-complete problems unless NP = P.\nThe unified technique that is introduced here, referred to as the shifting strategy, is applicable to numerous geometric covering and packing problems. The method of using the technique and how it varies with problem parameters are illustrated. A similar technique, independently devised by B. S. Baker, was shown to be applicable for covering and packing problems on planar graphs.", "title": "" }, { "docid": "bd72a921c7bfa4a7db8ca9dd8715fa45", "text": "Augmented Reality (AR) is growing rapidly and becoming a mature and robust technology, which combines virtual information with the real environment and real-time performance. It is important to ensure the acceptance and success of augmented reality systems. With the growth of elderly users, evidence shows potential trends for AR systems to support the elderly, including transport, ageing in place, entertainment and training. However, there is a lack of research to provide the theoretical framework or AR design principles to support designers when developing suitable AR applications for specific populations (e.g. older people). In my PhD thesis, I will focus on the possibility of developing and applying AR design principles to support the design of applications that address older people's requirements. In this paper, I first discuss the architecture of augmented reality and identify the relationship between different elements. Secondly, the relevant literature has been reviewed in terms of design challenges of AR and design principles. Thirdly, I formulate the five initial design principles as the fundamental work of my PhD. It is expected that design principles could help AR designers to explore quality design alternatives, which could potentially benefit the ageing population. Fourthly, I identify the AR pillbox as an example to explain how design principles can be applied to AR applications. In terms of the methodology, preparation, refinement and validation are the three main stages to achieve the research goal. Preparation stage aims to generate the preliminary AR design principles and identify the relevant scenarios that might assist the designers to understand the principles and explore the design alternatives. In the stages of refinement, a half-day workshop has been conducted to explore different design issues based on different scenarios and refine the preliminary design principles. After that, a new set of design principles will be formulated. 
The final stage is to validate the effectiveness of new design principles based on the previous workshop’s feedback.", "title": "" }, { "docid": "2c68945d68f8ccf90648bec7fd5b0547", "text": "The number of seniors and other people needing daily assistance continues to increase, but the current human resources available to achieve this in the coming years will certainly be insufficient. To remedy this situation, smart habitats have emerged as an innovative avenue for supporting needs of daily assistance. Smart homes aim to provide cognitive assistance in decision making by giving hints, suggestions, and reminders, with different kinds of effectors, to residents. To implement such technology, the first challenge to overcome is the recognition of ongoing activity. Some researchers have proposed solutions based on binary sensors or cameras, but these types of approaches infringed on residents' privacy. A new affordable activity-recognition system based on passive RFID technology can detect errors related to cognitive impairment. The entire system relies on an innovative model of elliptical trilateration with several filters, as well as on an ingenious representation of activities with spatial zones. The authors have deployed the system in a real smart-home prototype; this article renders the results of a complete set of experiments conducted on this new activity-recognition system with real scenarios.", "title": "" }, { "docid": "24ecf1119592cc5496dc4994d463eabe", "text": "To improve data availability and resilience MapReduce frameworks use file systems that replicate data uniformly. However, analysis of job logs from a large production cluster shows wide disparity in data popularity. Machines and racks storing popular content become bottlenecks; thereby increasing the completion times of jobs accessing this data even when there are machines with spare cycles in the cluster. To address this problem, we present Scarlett, a system that replicates blocks based on their popularity. By accurately predicting file popularity and working within hard bounds on additional storage, Scarlett causes minimal interference to running jobs. Trace driven simulations and experiments in two popular MapReduce frameworks (Hadoop, Dryad) show that Scarlett effectively alleviates hotspots and can speed up jobs by 20.2%.", "title": "" }, { "docid": "888ca06bc504dd82308a4ecc462e869b", "text": "This paper describes the conceptual design of an arm (right or left) powered exoframe (exoskeleton) which can be used in rehabilitation or by an army soldier who are debilitated to move their hands freely, and to lift the weight .This machine is designed for the application of teleoperation, virtual reality, military and rehabilitation. The option is put forward for a mechanical structure kinematical equivalent to the structure of the human arm. The elbow joint rotation is about -90 to 70 degrees. This arm can be used in both hands. This is a wearable robot i.e. mechatronic system with Velcro straps along with that it is a light weight device. It will also work mechanically with a push of a button as well as electrically with the help of solenoidal valve. 
Here the energy conversion is done using Pneumatic Cylinder (double acting) which is given the flow of compressed air through Solenoidal Valve, which control direction of flow and movement of piston.", "title": "" }, { "docid": "957a179c41a641f337b89dbfdc8ea1a9", "text": "Medical staff around the world must take reasonable steps to identify newborns and infants clearly, so as to prevent mix-ups, and to ensure the correct medication reaches the correct child. Footprints are frequently taken despite verification with footprints being challenging due to strong noise. The noise is introduced by the tininess of the structures, movement during capture, and the infant's rapid growth. In this article we address the image processing part of the problem and introduce a novel algorithm for the extraction of creases from infant footprints. The algorithm uses directional filtering on different resolution levels, morphological processing, and block-wise crease line reconstruction. We successfully test our method on noise-affected infant footprints taken from the same infants at different ages.", "title": "" }, { "docid": "7c23d90cd8e7e5223a13882833fa7c66", "text": "The ability to associate images with natural language sentences that describe what is depicted in them is a hallmark of image understanding, and a prerequisite for applications such as sentence-based image search. In analogy to image search, we propose to frame sentence-based image annotation as the task of ranking a given pool of captions. We introduce a new benchmark collection for sentence-based image description and search, consisting of 8,000 images that are each paired with five different captions which provide clear descriptions of the salient entities and events. We introduce a number of systems that perform quite well on this task, even though they are only based on features that can be obtained with minimal supervision. Our results clearly indicate the importance of training on multiple captions per image, and of capturing syntactic (word order-based) and semantic features of these captions. We also perform an in-depth comparison of human and automatic evaluation metrics for this task, and propose strategies for collecting human judgments cheaply and on a very large scale, allowing us to augment our collection with additional relevance judgments of which captions describe which image. Our analysis shows that metrics that consider the ranked list of results for each query image or sentence are significantly more robust than metrics that are based on a single response per query. Moreover, our study suggests that the evaluation of ranking-based image description systems may be fully automated.", "title": "" }, { "docid": "6bc611936d412dde15999b2eb179c9e2", "text": "Smith-Lemli-Opitz syndrome, a severe developmental disorder associated with multiple congenital anomalies, is caused by a defect of cholesterol biosynthesis. Low cholesterol and high concentrations of its direct precursor, 7-dehydrocholesterol, in plasma and tissues are the diagnostic biochemical hallmarks of the syndrome. The plasma sterol concentrations correlate with severity and disease outcome. Mutations in the DHCR7 gene lead to deficient activity of 7-dehydrocholesterol reductase (DHCR7), the final enzyme of the cholesterol biosynthetic pathway. The human DHCR7 gene is localised on chromosome 11q13 and its structure has been characterized. Ninetyone different mutations in the DHCR7 gene have been published to date. 
This paper is a review of the clinical, biochemical and molecular genetic aspects.", "title": "" }, { "docid": "8eb5e5d7c224782506aba37dcb91614f", "text": "With adolescents’ frequent use of social media, electronic bullying has emerged as a powerful platform for peer victimization. The present two studies explore how adolescents perceive electronic vs. traditional bullying in emotional impact and strategic responses. In Study 1, 97 adolescents (mean age = 15) viewed hypothetical peer victimization scenarios, in parallel electronic and traditional forms, with female characters experiencing indirect relational aggression and direct verbal aggression. In Study 2, 47 adolescents (mean age = 14) viewed the direct verbal aggression scenario from Study 1, and a new scenario, involving male characters in the context of direct verbal aggression. Participants were asked to imagine themselves as the victim in all scenarios and then rate their emotional reactions, strategic responses, and goals for the outcome. Adolescents reported significant negative emotions and disruptions in typical daily activities as the victim across divergent bullying scenarios. In both studies few differences emerged when comparing electronic to traditional bullying, suggesting that online and off-line bullying are subtypes of peer victimization. There were expected differences in strategic responses that fit the medium of the bullying. Results also suggested that embarrassment is a common and highly relevant negative experience in both indirect relational and direct verbal aggression among", "title": "" }, { "docid": "6dfb4c016db41a27587ef08011a7cf0e", "text": "The objective of this work is to detect shadows in images. We pose this as the problem of labeling image regions, where each region corresponds to a group of superpixels. To predict the label of each region, we train a kernel Least-Squares Support Vector Machine (LSSVM) for separating shadow and non-shadow regions. The parameters of the kernel and the classifier are jointly learned to minimize the leave-one-out cross validation error. Optimizing the leave-one-out cross validation error is typically difficult, but it can be done efficiently in our framework. Experiments on two challenging shadow datasets, UCF and UIUC, show that our region classifier outperforms more complex methods. We further enhance the performance of the region classifier by embedding it in a Markov Random Field (MRF) framework and adding pairwise contextual cues. This leads to a method that outperforms the state-of-the-art for shadow detection. In addition we propose a new method for shadow removal based on region relighting. For each shadow region we use a trained classifier to identify a neighboring lit region of the same material. Given a pair of lit-shadow regions we perform a region relighting transformation based on histogram matching of luminance values between the shadow region and the lit region. Once a shadow is detected, we demonstrate that our shadow removal approach produces results that outperform the state of the art by evaluating our method using a publicly available benchmark dataset.", "title": "" }, { "docid": "15d932b1344d48f13dfbb5e7625b22ad", "text": "Predictive modeling of human or humanoid movement becomes increasingly complex as the dimensionality of those movements grows. Dynamic Movement Primitives (DMP) have been shown to be a powerful method of representing such movements, but do not generalize well when used in configuration or task space. 
To solve this problem we propose a model called autoencoded dynamic movement primitive (AE-DMP) which uses deep autoencoders to find a representation of movement in a latent feature space, in which DMP can optimally generalize. The architecture embeds DMP into such an autoencoder and allows the whole to be trained as a unit. To further improve the model for multiple movements, sparsity is added for the feature layer neurons; therefore, various movements can be observed clearly in the feature space. After training, the model finds a single hidden neuron from the sparsity that can efficiently generate new movements. Our experiments clearly demonstrate the efficiency of missing data imputation using 50-dimensional human movement data.", "title": "" } ]
scidocsrr
5a1059d4321e5bbc017d81813556d3ad
Forces acting on a biped robot. Center of pressure-zero moment point
[ { "docid": "1db57f3b594afa363c81c8e63cc82c3c", "text": "This paper newly considers the ZMP(Zero Moment Point) of a humanoid robot under arm/leg coordination. By considering the infinitesimal displacement and the moment acting on the convex hull of the supporting points, we show that our method for determining the region of ZMP can be applicable to several cases of the arm/leg coordination tasks. We first express two kinds of ZMPs for such coordination tasks, i.e., the conventional ZMP, and the “Generalized Zero Moment Point (GZMP)” which is a generalization of the ZMP to the arm/leg coordination tasks. By projecting the edges of the convex hull of the supporting points onto the floor, we show that the position and the region of the GZMP for keeping the dynamical balance can be uniquely obtained. The effectiveness of the proposed method is shown by simulation results(see video).", "title": "" } ]
[ { "docid": "5b7ff9036a43b32cc82ca04bdbfd9fb1", "text": "Cloud computing provides computing resources as a service over a network. As rapid application of this emerging technology in real world, it becomes more and more important how to evaluate the performance and security problems that cloud computing confronts. Currently, modeling and simulation technology has become a useful and powerful tool in cloud computing research community to deal with these issues. In this paper, to the best of our knowledge, we review the existing results on modeling and simulation of cloud computing. We start from reviewing the basic concepts of cloud computing and its security issues, and subsequently review the existing cloud computing simulators. Furthermore, we indicate that there exist two types of cloud computing simulators, that is, simulators just based on software and simulators based on both software and hardware. Finally, we analyze and compare features of the existing cloud computing simulators.", "title": "" }, { "docid": "8dce819cc31cf4899cf4bad2dd117dc1", "text": "BACKGROUND\nCaffeine and sodium bicarbonate ingestion have been suggested to improve high-intensity intermittent exercise, but it is unclear if these ergogenic substances affect performance under provoked metabolic acidification. To study the effects of caffeine and sodium bicarbonate on intense intermittent exercise performance and metabolic markers under exercise-induced acidification, intense arm-cranking exercise was performed prior to intense intermittent running after intake of placebo, caffeine and sodium bicarbonate.\n\n\nMETHODS\nMale team-sports athletes (n = 12) ingested sodium bicarbonate (NaHCO3; 0.4 g.kg(-1) b.w.), caffeine (CAF; 6 mg.kg(-1) b.w.) or placebo (PLA) on three different occasions. Thereafter, participants engaged in intense arm exercise prior to the Yo-Yo intermittent recovery test level-2 (Yo-Yo IR2). Heart rate, blood lactate and glucose as well as rating of perceived exertion (RPE) were determined during the protocol.\n\n\nRESULTS\nCAF and NaHCO3 elicited a 14 and 23% improvement (P < 0.05), respectively, in Yo-Yo IR2 performance, post arm exercise compared to PLA. The NaHCO3 trial displayed higher [blood lactate] (P < 0.05) compared to CAF and PLA (10.5 ± 1.9 vs. 8.8 ± 1.7 and 7.7 ± 2.0 mmol.L(-1), respectively) after the Yo-Yo IR2. At exhaustion CAF demonstrated higher (P < 0.05) [blood glucose] compared to PLA and NaHCO3 (5.5 ± 0.7 vs. 4.2 ± 0.9 vs. 4.1 ± 0.9 mmol.L(-1), respectively). RPE was lower (P < 0.05) during the Yo-Yo IR2 test in the NaHCO3 trial in comparison to CAF and PLA, while no difference in heart rate was observed between trials.\n\n\nCONCLUSIONS\nCaffeine and sodium bicarbonate administration improved Yo-Yo IR2 performance and lowered perceived exertion after intense arm cranking exercise, with greater overall effects of sodium bicarbonate intake.", "title": "" }, { "docid": "057b397d3b72a30352697ce0940e490a", "text": "Recent events of multiple earthquakes in Nepal, Italy and New Zealand resulting loss of life and resources bring our attention to the ever growing significance of disaster management, especially in the context of large scale nature disasters such as earthquake and Tsunami. In this paper, we focus on how disaster communication system can benefit from recent advances in wireless communication technologies especially mobile technologies and devices. 
The paper provides an overview of how the new generation of telecommunications and technologies such as 4G/LTE, Device to Device (D2D) and 5G can improve the potential of disaster networks. D2D is a promising technology for 5G networks, providing high data rates, increased spectral and energy efficiencies, reduced end-to-end delay and transmission power. We examine a scenario of multi-hop D2D communications where one UE may help other UEs to exchange information, by utilizing cellular network technique. Results show the average energy-efficiency spectral- efficiency of these transmission types are enhanced when the number of hops used in multi-hop links increases. The effect of resource group allocation is also pointed out for efficient design of system.", "title": "" }, { "docid": "db2160b80dd593c33661a16ed2e404d1", "text": "Steganalysis tools play an important part in saving time and providing new angles of attack for forensic analysts. StegExpose is a solution designed for use in the real world, and is able to analyse images for LSB steganography in bulk using proven attacks in a time efficient manner. When steganalytic methods are combined intelligently, they are able generate even more accurate results. This is the prime focus of StegExpose.", "title": "" }, { "docid": "252f7393393a7ef16eda8388d601ef00", "text": "In computer vision, moving object detection and tracking methods are the most important preliminary steps for higher-level video analysis applications. In this frame, background subtraction (BS) method is a well-known method in video processing and it is based on frame differencing. The basic idea is to subtract the current frame from a background image and to classify each pixel either as foreground or background by comparing the difference with a threshold. Therefore, the moving object is detected and tracked by using frame differencing and by learning an updated background model. In addition, simulated annealing (SA) is an optimization technique for soft computing in the artificial intelligence area. The p-median problem is a basic model of discrete location theory of operational research (OR) area. It is a NP-hard combinatorial optimization problem. The main aim in the p-median problem is to find p number facility locations, minimize the total weighted distance between demand points (nodes) and the closest facilities to demand points. The SA method is used to solve the p-median problem as a probabilistic metaheuristic. In this paper, an SA-based hybrid method called entropy-based SA (EbSA) is developed for performance optimization of BS, which is used to detect and track object(s) in videos. The SA modification to the BS method (SA–BS) is proposed in this study to determine the optimal threshold for the foreground-background (i.e., bi-level) segmentation and to learn background model for object detection. At these segmentation and learning stages, all of the optimization problems considered in this study are taken as p-median problems. Performances of SA–BS and regular BS methods are measured using four videoclips. Therefore, these results are evaluated quantitatively as the overall results of the given method. The obtained performance results and statistical analysis (i.e., Wilcoxon median test) show that our proposed method is more preferable than regular BS method. 
Meanwhile, the contribution of this", "title": "" }, { "docid": "7fa9bacbb6b08065ecfe0530f082a391", "text": "This paper considers the task of articulated human pose estimation of multiple people in real world images. We propose an approach that jointly solves the tasks of detection and pose estimation: it infers the number of persons in a scene, identifies occluded body parts, and disambiguates body parts between people in close proximity of each other. This joint formulation is in contrast to previous strategies, that address the problem by first detecting people and subsequently estimating their body pose. We propose a partitioning and labeling formulation of a set of body-part hypotheses generated with CNN-based part detectors. Our formulation, an instance of an integer linear program, implicitly performs non-maximum suppression on the set of part candidates and groups them to form configurations of body parts respecting geometric and appearance constraints. Experiments on four different datasets demonstrate state-of-the-art results for both single person and multi person pose estimation.", "title": "" }, { "docid": "103ec725b4c07247f1a8884610ea0e42", "text": "In this paper we have introduced the notion of distance between two single valued neutrosophic sets and studied its properties. We have also defined several similarity measures between them and investigated their characteristics. A measure of entropy of a single valued neutrosophic set has also been introduced.", "title": "" }, { "docid": "b3209409f5fa834803673ed39eb0f2a1", "text": "Three-dimensional (3-D) urban models are an integral part of numerous applications, such as urban planning and performance simulation, mapping and visualization, emergency response training and entertainment, among others. We consolidate various algorithms proposed for reconstructing 3-D models of urban objects from point clouds. Urban models addressed in this review include buildings, vegetation, utilities such as roads or power lines and free-form architectures such as curved buildings or statues, all of which are ubiquitous in a typical urban scenario. While urban modeling, building reconstruction, in particular, clearly demand specific traits in the models, such as regularity, symmetry, and repetition; most of the traditional and state-of-the-art 3-D reconstruction algorithms are designed to address very generic objects of arbitrary shapes and topology. The recent efforts in the urban reconstruction arena, however, strive to accommodate the various pressing needs of urban modeling. Strategically, urban modeling research nowadays focuses on the usage of specialized priors, such as global regularity, Manhattan-geometry or symmetry to aid the reconstruction, or efficient adaptation of existing reconstruction techniques to the urban modeling pipeline. Aimed at an in-depth exploration of further possibilities, we review the existing urban reconstruction algorithms, prevalent in computer graphics, computer vision and photogrammetry disciplines, evaluate their performance in the architectural modeling context, and discuss the adaptability of generic mesh reconstruction techniques to the urban modeling pipeline. In the end, we suggest a few directions of research that may be adopted to close in the technology gaps.", "title": "" }, { "docid": "c38a6685895c23620afb6570be4c646b", "text": "Today, artificial neural networks (ANNs) are widely used in a variety of applications, including speech recognition, face detection, disease diagnosis, etc. 
And as the emerging field of ANNs, Long Short-Term Memory (LSTM) is a recurrent neural network (RNN) which contains complex computational logic. To achieve high accuracy, researchers always build large-scale LSTM networks which are time-consuming and power-consuming. In this paper, we present a hardware accelerator for the LSTM neural network layer based on FPGA Zedboard and use pipeline methods to parallelize the forward computing process. We also implement a sparse LSTM hidden layer, which consumes fewer storage resources than the dense network. Our accelerator is power-efficient and has a higher speed than ARM Cortex-A9 processor.", "title": "" }, { "docid": "295ec5187615caec8b904c81015f4999", "text": "As modern 64-bit x86 processors no longer support the segmentation capabilities of their 32-bit predecessors, most research projects assume that strong in-process memory isolation is no longer an affordable option. Instead of strong, deterministic isolation, new defense systems therefore rely on the probabilistic pseudo-isolation provided by randomization to \"hide\" sensitive (or safe) regions. However, recent attacks have shown that such protection is insufficient; attackers can leak these safe regions in a variety of ways.\n In this paper, we revisit isolation for x86-64 and argue that hardware features enabling efficient deterministic isolation do exist. We first present a comprehensive study on commodity hardware features that can be repurposed to isolate safe regions in the same address space (e.g., Intel MPX and MPK). We then introduce MemSentry, a framework to harden modern defense systems with commodity hardware features instead of information hiding. Our results show that some hardware features are more effective than others in hardening such defenses in each scenario and that features originally conceived for other purposes (e.g., Intel MPX for bounds checking) are surprisingly efficient at isolating safe regions compared to their software equivalent (i.e., SFI).", "title": "" }, { "docid": "131a866cba7a8b2e4f66f2496a80cb41", "text": "The Python language is highly dynamic, most notably due to late binding. As a consequence, programs using Python typically run an order of magnitude slower than their C counterpart. It is also a high level language whose semantic can be made more static without much change from a user point of view in the case of mathematical applications. In that case, the language provides several vectorization opportunities that are studied in this paper, and evaluated in the context of Pythran, an ahead-of-time compiler that turns Python module into C++ meta-programs.", "title": "" }, { "docid": "09623c821f05ffb7840702a5869be284", "text": "Area-restricted search (ARS) is a foraging strategy used by many animals to locate resources. The behavior is characterized by a time-dependent reduction in turning frequency after the last resource encounter. This maximizes the time spent in areas in which resources are abundant and extends the search to a larger area when resources become scarce. We demonstrate that dopaminergic and glutamatergic signaling contribute to the neural circuit controlling ARS in the nematode Caenorhabditis elegans. Ablation of dopaminergic neurons eliminated ARS behavior, as did application of the dopamine receptor antagonist raclopride. Furthermore, ARS was affected by mutations in the glutamate receptor subunits GLR-1 and GLR-2 and the EAT-4 glutamate vesicular transporter. 
Interestingly, preincubation on dopamine restored the behavior in worms with defective dopaminergic signaling, but not in glr-1, glr-2, or eat-4 mutants. This suggests that dopaminergic and glutamatergic signaling function in the same pathway to regulate turn frequency. Both GLR-1 and GLR-2 are expressed in the locomotory control circuit that modulates the direction of locomotion in response to sensory stimuli and the duration of forward movement during foraging. We propose a mechanism for ARS in C. elegans in which dopamine, released in response to food, modulates glutamatergic signaling in the locomotory control circuit, thus resulting in an increased turn frequency.", "title": "" }, { "docid": "6b8329ef59c6811705688e48bf6c0c08", "text": "Since the invention of word2vec, the skip-gram model has significantly advanced the research of network embedding, such as the recent emergence of the DeepWalk, LINE, PTE, and node2vec approaches. In this work, we show that all of the aforementioned models with negative sampling can be unified into the matrix factorization framework with closed forms. Our analysis and proofs reveal that: (1) DeepWalk empirically produces a low-rank transformation of a network's normalized Laplacian matrix; (2) LINE, in theory, is a special case of DeepWalk when the size of vertices' context is set to one; (3) As an extension of LINE, PTE can be viewed as the joint factorization of multiple networks» Laplacians; (4) node2vec is factorizing a matrix related to the stationary distribution and transition probability tensor of a 2nd-order random walk. We further provide the theoretical connections between skip-gram based network embedding algorithms and the theory of graph Laplacian. Finally, we present the NetMF method as well as its approximation algorithm for computing network embedding. Our method offers significant improvements over DeepWalk and LINE for conventional network mining tasks. This work lays the theoretical foundation for skip-gram based network embedding methods, leading to a better understanding of latent network representation learning.", "title": "" }, { "docid": "00223ccf5b5aebfc23c76afb7192e3f7", "text": "Computer Security System / technology have passed through several changes. The trends have been from what you know (e.g. password, PIN, etc) to what you have (ATM card, Driving License, etc) and presently to who you are (Biometry) or combinations of two or more of the trios. This technology (biometry) has come to solve the problems identified with knowledge-based and token-based authentication systems. It is possible to forget your password and what you have can as well be stolen. The security of determining who you are is referred to as BIOMETRIC. Biometric, in a nutshell, is the use of your body as password. This paper explores the various methods of biometric identification that have evolved over the years and the features used for each modality.", "title": "" }, { "docid": "f67990c307da0b95628441e11ddfb70b", "text": "I shall present an overview of the Java language and a brief description of the Java virtual machine|this will not be a tutorial. For a good Java tutorial and reference, I refer you to [Fla96]. There will be a brief summary and the odd snide remark about network computers. 
Chapter 1 History Java started o as a control language for embedded systems (c.1990)|the idea was to unite the heterogenous microcontrollers used in embedded appliances (initially mainly consumer electronics) with a common language (the Java Virtual Machine (JVM)) and API so that manufacturers wouldn't have to change their software whenever a new microcontroller came on the market. It was realised that the JVM would work just as well over the internet, and the JVM (and the Java language that was developed in association with it) was pushed as a vehicle for active web content. More recently, the Java bandwagon has acquired such initiatives as the network computer, personal and embedded Java (which take Java back to its roots as an embedded control system), and are being pushed as tools for developing `serious' applications. Unfortunately, the Java bandwagon has also acquired a tremendous amount of hype, snake-oil, and general misinformation. It is being used by Sun as a weapon in its endless and byzantine war with Microsoft, and has pioneered the industry acceptance of vapourware sales (whereby people pay to licence technologies which don't exist yet). The typical lead time for a Java technology is six months (that is, the period between it being announced as available and shipping is usually circa six months). It will be useful to keep this in mind. Most terms to do with Java are trademarks of Sun Microsystems, who controls them in the somewhat vain hope of being able to maintain some degree of standardisation. I shall refer to the JDK, by which I mean Sun's Java Development Kit, the standard Java compiler and runtime environment. 2 Chapter 2 The Java Language As mentioned in the previous section, Java is really a remote execution system, broken into two parts|Java and the Java Virtual Machine. The two are pretty much inseparable. Java has a C/C++-based syntax, inheriting the nomenclature of classes, the private, public, and protected nomenclature, and its concepts of constructors and destructors, from C++. It also borrows heavily from the family of languages that spawned Modula-3|to these it owes garbage collection, threads, exceptions, safety, and much of its inheritance model. Java introduces some new features of its own : Ubiquitous classes |classes and interfaces (which are like classes, but are used to describe speci cations rather than type de nitions) are the only real structure in Java: everything from your window system to an element of your linked list will be a class. Dynamic loading |Java provides for dynamic class (and interface) loading (indeed, it would be di cult to produce a JVM implementation without it). Unicode (2.0) source format |Java source code is written in Unicode 2.0 ([The96])|internationalisation at last ? Labelled breaks {help solve a typical problem with the break construct|you sometimes want to exit more than one loop. We can write, eg: bool k = false; while (a) { while (b) { if (a->head==b->head) { k = true; break; } b=b->tail; 3 }if (k) { break; }; a=a->tail; } Becomes foo: while (a) { while (b) { if (a->head==b->head) { break foo; }b = b->tail; }a=a->tail; } Object-Orientated Synchronisation and exception handling |every object can be locked, waited on and signalled. Every object which is a subclass (in Modula-3 terms, a subtype) of java.lang.Exception may be thrown as an exception. Documentation Comments |There is a special syntax for `documentation comments' (/** ...*/), which may be used to automatically generate documentation from source les. 
Such tools are fairly primitive at present, and if you look at the automatically generated html documentation for the JDK libraries, you will nd that you need to scoot up and down the object heirarchy several times before very much of it begins to make sense. Widely-used exceptions |Java tends to raise an exception when it encounters a run-time error, rather than aborting your program|so, for example, attempting an out-of-bounds array access throws ArrayIndexOutOfBoundsException rather than aborting your program. It will be useful to note here that Java has complete safety|there are no untraced references, and no way to do pointer arithmetic. Anything unsafe must be done outside Java by another language, the results being communicated back via. the foreign language interface mechanism, native methods, which we will consider later. 2.1 Types Java has a fairly typical type system. As in Modula-3, there are two classes of types|base types and reference types.4 2.1.1 Base types The following categories of base types are de ned: Boolean: bool2 ftrue; falseg 1 Integral { byte 2 f 27 : : :27 1g { short 2 f 215 : : :215 1g { int 2 f 231 : : :231 1g { long 2 f 261 : : :261 1g { char 2 f0 : : :FFFF16g Floating point: IEEE 754 single precision (float), and IEEE 754 double precistion (double) oating point numbers. Note that there is no extended type, so you cannot use IEEE extended precision. You will observe a number of changes from C: No enumerations |the intended methodology is to use class (or interface) variables, eg. static int RED=1;. Hopefully, this will become clearer later. 16-bit char |char has been widened to 16 bits to accomodate Unicode characters. C programmers (and others who assume sizeof(char)==1) beware! No signed or unsigned |this avoids the problems that unsigned types always cause: either LAST(unsigned k) = 2*LAST(k)+1 (C), in which case implicit conversions to signed types can fail, or LAST(unsigned k) = LAST(signed k) (Modula-3) in which case you can never subtract two signed types and put their results in an unsigned variable (try Rect.Horsize(Rect.Full) and watch the pretty value out of range errors abort your program. . . ). 2.1.2 Reference types Reference types subsume classes, interfaces, objects and arrays. The Java equivalent of NIL is null, and the equivalent of ROOT is java.lang.Object. Note that we need no equivalent for ADDRESS, as there are no untraced references in Java, and we need no equivalent for REFANY as there are no records, and it turns out that arrays are also objects2. 1This is the only type that doesn't exist in the JVM|see 3. 2though this is obviously not explicit, since it would introduce parametric polymorphism into the type system. It is, however, possible to introduce parametric polymorphism, as we shall see later in our discussion of Pizza. 5 2.2 Operators and conversion With Java's syntax lifted mostly from C and C++, it is no surprise to nd that it shares many of the same operators for base types: < <= > >= == != && || return a boolean. + * / % ++ -<< >> >>> ~ & | ^ ?: ++ -+= -= *= /= &= |= =̂ %= <<= >>= >>>= instanceof is a binary operator (a instanceof T) which returns a boolean| true if a is of type T, and false otherwise. Conversion is done by typecasting, as in C, using ( and ). .. and + can also be used for strings (\"foo\" + \"bar\"). You will note that the comparison operators now return a boolean, and that Java has standardised (mainly through not having unary *) the behaviour of *=. 
There is also a new right shift operator, >>>, meaning `arithmetic shift right', some syntactic sugar for concatenating strings, and instanceof and casting replace ISTYPE and NARROW respectively. The `+' syntax for strings is similar to & in Modula-3; note, however, that Java distinguishes between constant strings (of class java.lang.String) and mutable strings (of class java.lang.StringBuffer). \"a\" + \"b\" produces a new String, \"ab\". Integer and oating-point operations with mixed-precision types (eg. int + long or float + double) implicitly convert all their arguments to the `widest' type present, and their results are of the type of their widest operand. Numerical analysts beware. . . There are actually several types of type conversion in Java: Identity conversions |the identity conversion. Assignment conversion |takes place when assigning a variable to the value of an expression. Primitive Widening conversion |widens a value of a base type to another base type with a greater range, and may also convert integer types to oating point types. Primitive Narrowing conversion |the inverse of primitive widening conversion (narrows to a type with a smaller range), and may also convert oating point types to integer types. Widening reference conversion |intuitively, converts an object of a given type to one of one of its supertypes. 6 Narrowing reference conversion |the inverse of widening reference conversion (the reference conversions are like NARROW() in Modula-3). String conversion |there is a conversion from any type to type String. Forbidden Conversions |some conversions are forbidden. Assignment Conversion |occurs during assignment. Method invocation conversion |occurs during method invocation. Casting conversion |occurs when the casting operator is used, eg. (Foo)bar. All of which are described in excruciating detail in x5 of [GS97]. The question of reference type equivalence is a little confused due to the presence of interfaces, but Java basically uses name-equivalence, in contrast with Modula-3's structural equivalence. 2.3 Imperative Constructs Java provides basically the same imperative constructs as C, but there are a few di erences (and surprises): Scoping |Java supports nested scopes (at last!), so { int i=1; if (i==1) { int k; k=4; } }Now works properly3. Indeed, you may even declare variables half way through a scope (though it is considered bad practice to do so): { int i; foo; int k; bar; } 3Java does not support implicit scoping in for...next loops, however, so your loop variables must still be declared in the enclosing scope, or the initialisation clause of the loop. 7 Is equivalent to: { int i; foo; { int k; bar; } }And it", "title": "" }, { "docid": "8c46f24d8e710c5fb4e25be76fc5b060", "text": "This paper presents the novel design of a wideband circularly polarized (CP) Radio Frequency Identification (RFID) reader microstrip patch antenna for worldwide Ultra High Frequency (UHF) band which covers 840–960 MHz. The proposed antenna, which consists of a microstrip patch with truncated corners and a cross slot, is placed on a foam substrate (εr = 1.06) above a ground plane and is fed through vias through ground plane holes that extend from the quadrature 3 dB branch line hybrid coupler placed below the ground plane. This helps to separate feed network radiation, from the patch antenna and keeping the CP purity. 
The prototype antenna was fabricated with a total size of 225 × 250 × 12.8 mm³ which shows a measured impedance matching band of 840–1150 MHz (31.2%) as well as measured rotating linear based circularly polarized radiation patterns. The simulated and measured 3 dB Axial Ratio (AR) bandwidth is better than 23% from 840–1050 MHz, meeting and exceeding the target worldwide RFID UHF band.", "title": "" }, { "docid": "1e100608fd78b1e20020f892784199ed", "text": "In this paper we introduce a system for unsupervised object discovery and segmentation of RGB-D images. The system models the sensor noise directly from data, allowing accurate segmentation without sensor-specific hand tuning of measurement noise models, making use of the recently introduced Statistical Inlier Estimation (SIE) method [1]. Through a fully probabilistic formulation, the system is able to apply probabilistic inference, enabling reliable segmentation in previously challenging scenarios. In addition, we introduce new methods for filtering out false positives, significantly improving the signal-to-noise ratio. We show that the system significantly outperforms the state of the art on a challenging real-world dataset.", "title": "" }, { "docid": "c7bbde452a68f84ca9d09c7da2cb29ab", "text": "Recently, application-specific requirements have become one of the main research challenges in the area of routing for delay tolerant networks. Among various requirements, in this paper we focus on achieving the desired delivery ratio within a given bounded deadline. For this goal, we use an analytical model and develop a new forwarding scheme for each respective phase. The proposed protocol dynamically adjusts the number of message copies using the analytical model, and the next hop node is determined depending on the delivery probability and the inter-meeting time of the encountering nodes as well as the remaining time. Simulation results demonstrate that our proposed algorithm meets the bounded delay with lower overhead than existing protocols, adapting to varying network conditions.", "title": "" }, { "docid": "6edf0db1e517c8786f004fd79f4ef973", "text": "The alarming increase of resistance against multiple currently available antibiotics is leading to a rapid loss of treatment options against infectious diseases. Since antibiotic resistance is partially due to misuse or abuse of antibiotics, this situation can be reversed by improving their use. One strategy is the optimization of antimicrobial dosing regimens. In fact, inappropriate drug choice and suboptimal dosing are two major factors that should be considered because they lead to the emergence of drug resistance and, consequently, poorer clinical outcomes. Pharmacokinetic/pharmacodynamic (PK/PD) analysis in combination with Monte Carlo simulation allows dosing regimens of antibiotic agents to be optimized in order to conserve their therapeutic value. Therefore, the aim of this review is to explain the basis of PK/PD analysis and associated techniques, and to provide a brief review of the applications of PK/PD analysis from a therapeutic point of view. The establishment and reevaluation of clinical breakpoints is the sticking point in antibiotic therapy, as the clinical use of antibiotics depends on them. Two methodologies are described to establish the PK/PD breakpoints, which are a big part of the clinical breakpoint-setting machinery. 
Furthermore, the main subpopulations of patients with altered characteristics that can condition the PK/PD behavior (such as critically ill, elderly, pediatric or obese patients), and therefore the outcome of antibiotic therapy, are reviewed. Finally, some recommendations are provided from a PK/PD point of view to enhance the efficacy of prophylaxis protocols used in surgery.", "title": "" }, { "docid": "035cb90504d8bf4bff9c9bac7d8c4306", "text": "Automated trolleys have been developed to meet the needs of material handling in industry. The velocity of automated trolleys is regulated by an S-shaped (or trapezoid-shaped) acceleration and deceleration profile. As a consequence of this velocity profile, the control system of automated trolleys is nonlinear and open-loop. In order to linearize the control system, we use a second-order dynamic element to replace the acceleration and deceleration curve in practice, and design an optimal controller under a quadratic cost function. The performance of the proposed approach is also compared to the conventional method. The simulation shows better dynamic performance of the developed control system.", "title": "" } ]
scidocsrr
79ef050bffaf659a0ec1b26ba8fcd5b1
Discussing the Value of Automatic Hate Speech Detection in Online Debates
[ { "docid": "d5d2e1feeb2d0bf2af49e1d044c9e26a", "text": "ISSN: 2167-0811 (Print) 2167-082X (Online) Journal homepage: http://www.tandfonline.com/loi/rdij20 Algorithmic Transparency in the News Media Nicholas Diakopoulos & Michael Koliska To cite this article: Nicholas Diakopoulos & Michael Koliska (2016): Algorithmic Transparency in the News Media, Digital Journalism, DOI: 10.1080/21670811.2016.1208053 To link to this article: http://dx.doi.org/10.1080/21670811.2016.1208053", "title": "" }, { "docid": "79ece5e02742de09b01908668383e8f2", "text": "Hate speech in the form of racist and sexist remarks are a common occurrence on social media. For that reason, many social media services address the problem of identifying hate speech, but the definition of hate speech varies markedly and is largely a manual effort (BBC, 2015; Lomas, 2015). We provide a list of criteria founded in critical race theory, and use them to annotate a publicly available corpus of more than 16k tweets. We analyze the impact of various extra-linguistic features in conjunction with character n-grams for hatespeech detection. We also present a dictionary based the most indicative words in our data.", "title": "" }, { "docid": "c8dbc63f90982e05517bbdb98ebaeeb5", "text": "Even though considerable attention has been given to the polarity of words (positive and negative) and the creation of large polarity lexicons, research in emotion analysis has had to rely on limited and small emotion lexicons. In this paper we show how the combined strength and wisdom of the crowds can be used to generate a large, high-quality, word–emotion and word–polarity association lexicon quickly and inexpensively. We enumerate the challenges in emotion annotation in a crowdsourcing scenario and propose solutions to address them. Most notably, in addition to questions about emotions associated with terms, we show how the inclusion of a word choice question can discourage malicious data entry, help identify instances where the annotator may not be familiar with the target term (allowing us to reject such annotations), and help obtain annotations at sense level (rather than at word level). We conducted experiments on how to formulate the emotionannotation questions, and show that asking if a term is associated with an emotion leads to markedly higher inter-annotator agreement than that obtained by asking if a term evokes an emotion.", "title": "" }, { "docid": "f5a188c87dd38a0a68612352891bcc3f", "text": "Sentiment analysis of online documents such as news articles, blogs and microblogs has received increasing attention in recent years. In this article, we propose an efficient algorithm and three pruning strategies to automatically build a word-level emotional dictionary for social emotion detection. In the dictionary, each word is associated with the distribution on a series of human emotions. In addition, a method based on topic modeling is proposed to construct a topic-level dictionary, where each topic is correlated with social emotions. Experiment on the real-world data sets has validated the effectiveness and reliability of the methods. Compared with other lexicons, the dictionary generated using our approach is language-independent, fine-grained, and volume-unlimited. The generated dictionary has a wide range of applications, including predicting the emotional distribution of news articles, identifying social emotions on certain entities and news events.", "title": "" } ]
[ { "docid": "cc4c58f1bd6e5eb49044353b2ecfb317", "text": "Today, visual recognition systems are still rarely employed in robotics applications. Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when being moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: www.cvlibs.net/datasets/kitti.", "title": "" }, { "docid": "7f5815a918c6d04783d68dbc041cc6a0", "text": "This paper proposes a method for learning joint embeddings of images and text using a two-branch neural network with multiple layers of linear projections followed by nonlinearities. The network is trained using a large-margin objective that combines cross-view ranking constraints with within-view neighborhood structure preservation constraints inspired by metric learning literature. Extensive experiments show that our approach gains significant improvements in accuracy for image-to-text and text-to-image retrieval. Our method achieves new state-of-the-art results on the Flickr30K and MSCOCO image-sentence datasets and shows promise on the new task of phrase localization on the Flickr30K Entities dataset.", "title": "" }, { "docid": "dfdf2581010777e51ff3e29c5b9aee7f", "text": "This paper proposes a parallel architecture with resistive crosspoint array. The design of its two essential operations, read and write, is inspired by the biophysical behavior of a neural system, such as integrate-and-fire and local synapse weight update. The proposed hardware consists of an array with resistive random access memory (RRAM) and CMOS peripheral circuits, which perform matrix-vector multiplication and dictionary update in a fully parallel fashion, at the speed that is independent of the matrix dimension. The read and write circuits are implemented in 65 nm CMOS technology and verified together with an array of RRAM device model built from experimental data. The overall system exploits array-level parallelism and is demonstrated for accelerated dictionary learning tasks. As compared to software implementation running on a 8-core CPU, the proposed hardware achieves more than 3000 × speedup, enabling high-speed feature extraction on a single chip.", "title": "" }, { "docid": "74287743f75368623da74e716ae8e263", "text": "Organizations increasingly use social media and especially social networking sites (SNS) to support their marketing agenda, enhance collaboration, and develop new capabilities. However, the success of SNS initiatives is largely dependent on sustainable user participation. In this study, we argue that the continuance intentions of users may be gendersensitive. 
To theorize and investigate gender differences in the determinants of continuance intentions, this study draws on the expectation-confirmation model, the uses and gratification theory, as well as the self-construal theory and its extensions. Our survey of 488 users shows that while both men and women are motivated by the ability to selfenhance, there are some gender differences. Specifically, while women are mainly driven by relational uses, such as maintaining close ties and getting access to social information on close and distant networks, men base their continuance intentions on their ability to gain information of a general nature. Our research makes several contributions to the discourse in strategic information systems literature concerning the use of social media by individuals and organizations. Theoretically, it expands the understanding of the phenomenon of continuance intentions and specifically the role of the gender differences in its determinants. On a practical level, it delivers insights for SNS providers and marketers into how satisfaction and continuance intentions of male and female SNS users can be differentially promoted. Furthermore, as organizations increasingly rely on corporate social networks to foster collaboration and innovation, our insights deliver initial recommendations on how organizational social media initiatives can be supported with regard to gender-based differences. 2017 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "b4b6417ea0e1bc70c5faa50f8e2edf59", "text": "As secure processing as well as correct recovery of data getting more important, digital forensics gain more value each day. This paper investigates the digital forensics tools available on the market and analyzes each tool based on the database perspective. We present a survey of digital forensics tools that are either focused on data extraction from databases or assist in the process of database recovery. In our work, a detailed list of current database extraction software is provided. We demonstrate examples of database extractions executed on representative selections from among tools provided in the detailed list. We use a standard sample database with each tool for comparison purposes. Based on the execution results obtained, we compare these tools regarding different criteria such as runtime, static or live acquisition, and more.", "title": "" }, { "docid": "cd78dd2ef989917c01a325a460c07223", "text": "This paper proposes a multi-joint-gripper that achieves envelope grasping for unknown shape objects. Proposed mechanism is based on a chain of Differential Gear Systems (DGS) controlled by only one motor. It also has a Variable Stiffness Mechanism (VSM) that controls joint stiffness to relieve interfering effects suffered from grasping environment and achieve a dexterous grasping. The experiments elucidate that the developed gripper achieves envelop grasping; the posture of the gripper automatically fits the shape of the object with no sensory feedback. And they also show that the VSM effectively works to relieve external interfering. This paper shows the mechanism and experimental results of the second test machine that was developed inheriting the idea of DGS used in the first test machine but has a completely altered VSM.", "title": "" }, { "docid": "7ea3d3002506e0ea6f91f4bdab09c2d5", "text": "We propose a novel and robust computational framework for automatic detection of deformed 2D wallpaper patterns in real-world images. 
The theory of 2D crystallographic groups provides a sound and natural correspondence between the underlying lattice of a deformed wallpaper pattern and a degree-4 graphical model. We start the discovery process with unsupervised clustering of interest points and voting for consistent lattice unit proposals. The proposed lattice basis vectors and pattern element contribute to the pairwise compatibility and joint compatibility (observation model) functions in a Markov random field (MRF). Thus, we formulate the 2D lattice detection as a spatial, multitarget tracking problem, solved within an MRF framework using a novel and efficient mean-shift belief propagation (MSBP) method. Iterative detection and growth of the deformed lattice are interleaved with regularized thin-plate spline (TPS) warping, which rectifies the current deformed lattice into a regular one to ensure stability of the MRF model in the next round of lattice recovery. We provide quantitative comparisons of our proposed method with existing algorithms on a diverse set of 261 real-world photos to demonstrate significant advances in accuracy and speed over the state of the art in automatic discovery of regularity in real images.", "title": "" }, { "docid": "56b0876b265437f1f3e6f4fc25592685", "text": "Currently, progressively larger deep neural networks are trained on ever growing data corpora. As this trend is only going to increase in the future, distributed training schemes are becoming increasingly relevant. A major issue in distributed training is the limited communication bandwidth between contributing nodes or prohibitive communication cost in general. These challenges become even more pressing, as the number of computation nodes increases. To counteract this development we propose sparse binary compression (SBC), a compression framework that allows for a drastic reduction of communication cost for distributed training. SBC combines existing techniques of communication delay and gradient sparsification with a novel binarization method and optimal weight update encoding to push compression gains to new limits. By doing so, our method also allows us to smoothly trade-off gradient sparsity and temporal sparsity to adapt to the requirements of the learning task. Our experiments show, that SBC can reduce the upstream communication on a variety of convolutional and recurrent neural network architectures by more than four orders of magnitude without significantly harming the convergence speed in terms of forward-backward passes. For instance, we can train ResNet50 on ImageNet in the same number of iterations to the baseline accuracy, using ×3531 less bits or train it to a 1% lower accuracy using ×37208 less bits. In the latter case, the total upstream communication required is cut from 125 terabytes to 3.35 gigabytes for every participating client.", "title": "" }, { "docid": "d0f71092df2eab53e7f32eff1cb7af2e", "text": "Topic modeling of textual corpora is an important and challenging problem. In most previous work, the “bag-of-words” assumption is usually made which ignores the ordering of words. This assumption simplifies the computation, but it unrealistically loses the ordering information and the semantic of words in the context. In this paper, we present a Gaussian Mixture Neural Topic Model (GMNTM) which incorporates both the ordering of words and the semantic meaning of sentences into topic modeling. 
Specifically, we represent each topic as a cluster of multi-dimensional vectors and embed the corpus into a collection of vectors generated by the Gaussian mixture model. Each word is affected not only by its topic, but also by the embedding vector of its surrounding words and the context. The Gaussian mixture components and the topic of documents, sentences and words can be learnt jointly. Extensive experiments show that our model can learn better topics and more accurate word distributions for each topic. Quantitatively, comparing to state-of-the-art topic modeling approaches, GMNTM obtains significantly better performance in terms of perplexity, retrieval accuracy and classification accuracy.", "title": "" }, { "docid": "076ad699191bd3df87443f427268222a", "text": "Robotic systems for disease detection in greenhouses are expected to improve disease control, increase yield, and reduce pesticide application. We present a robotic detection system for combined detection of two major threats of greenhouse bell peppers: Powdery mildew (PM) and Tomato spotted wilt virus (TSWV). The system is based on a manipulator, which facilitates reaching multiple detection poses. Several detection algorithms are developed based on principal component analysis (PCA) and the coefficient of variation (CV). Tests ascertain the system can successfully detect the plant and reach the detection pose required for PM (along the side of the plant), yet it has difficulties in reaching the TSWV detection pose (above the plant). Increasing manipulator work-volume is expected to solve this issue. For TSWV, PCA-based classification with leaf vein removal, achieved the highest classification accuracy (90%) while the accuracy of the CV methods was also high (85% and 87%). For PM, PCA-based pixel-level classification was high (95.2%) while leaf condition classification accuracy was low (64.3%) since it was determined based on the upper side of the leaf while disease symptoms start on its lower side. Exposure of the lower side of the leaf during detection is expected to improve PM condition detection.", "title": "" }, { "docid": "02eec4b9078af92a774f6e46b36808f7", "text": "Cancer cell migration is a plastic and adaptive process integrating cytoskeletal dynamics, cell-extracellular matrix and cell-cell adhesion, as well as tissue remodeling. In response to molecular and physical microenvironmental cues during metastatic dissemination, cancer cells exploit a versatile repertoire of invasion and dissemination strategies, including collective and single-cell migration programs. This diversity generates molecular and physical heterogeneity of migration mechanisms and metastatic routes, and provides a basis for adaptation in response to microenvironmental and therapeutic challenge. We here summarize how cytoskeletal dynamics, protease systems, cell-matrix and cell-cell adhesion pathways control cancer cell invasion programs, and how reciprocal interaction of tumor cells with the microenvironment contributes to plasticity of invasion and dissemination strategies. 
We discuss the potential and future implications of predicted \"antimigration\" therapies that target cytoskeletal dynamics, adhesion, and protease systems to interfere with metastatic dissemination, and the options for integrating antimigration therapy into the spectrum of targeted molecular therapies.", "title": "" }, { "docid": "832eb4f28b217842e60bfd4820bb6acb", "text": "It has been recognized that system design will benefit from explicit study of the context in which users work. The unaided individual divorced from a social group and from supporting artifacts is no longer the model user. But with this realization about the importance of context come many difficult questions. What exactly is context? If the individual is no longer central, what is the correct unit of analysis? What are the relations between artifacts, individuals, and the social groups to which they belong? This chapter compares three approaches to the study of context: activity theory, situated action models, and distributed cognition. I consider the basic concepts each approach promulgates and evaluate the usefulness of each for the design of technology. 1", "title": "" }, { "docid": "d799390b673cc28842a310af8cd1eb03", "text": "This paper focuses on dietary approaches to control intramuscular fat deposition to increase beneficial omega-3 polyunsaturated fatty acids (PUFA) and conjugated linoleic acid content and reduce saturated fatty acids in beef. Beef lipid trans-fatty acids are considered, along with relationships between lipids in beef and colour shelf-life and sensory attributes. Ruminal lipolysis and biohydrogenation limit the ability to improve beef lipids. Feeding omega-3 rich forage increases linolenic acid and long-chain PUFA in beef lipids, an effect increased by ruminally-protecting lipids, but consequently may alter flavour characteristics and shelf-life. Antioxidants, particularly α-tocopherol, stabilise high concentrations of muscle PUFA. Currently, the concentration of long-chain omega-3 PUFA in beef from cattle fed non-ruminally-protected lipids falls below the limit considered by some authorities to be labelled a source of omega-3 PUFA. The mechanisms regulating fatty acid isomer distribution in bovine tissues remain unclear. Further enhancement of beef lipids requires greater understanding of ruminal biohydrogenation.", "title": "" }, { "docid": "6a33013c19dc59d8871e217461d479e9", "text": "Cancer tissues in histopathology images exhibit abnormal patterns; it is of great clinical importance to label a histopathology image as having cancerous regions or not and perform the corresponding image segmentation. However, the detailed annotation of cancer cells is often an ambiguous and challenging task. In this paper, we propose a new learning method, multiple clustered instance learning (MCIL), to classify, segment and cluster cancer cells in colon histopathology images. The proposed MCIL method simultaneously performs image-level classification (cancer vs. non-cancer image), pixel-level segmentation (cancer vs. non-cancer tissue), and patch-level clustering (cancer subclasses). We embed the clustering concept into the multiple instance learning (MIL) setting and derive a principled solution to perform the above three tasks in an integrated framework. Experimental results demonstrate the efficiency and effectiveness of MCIL in analyzing colon cancers.", "title": "" }, { "docid": "36b5440a80238293fbb2db38db04f87d", "text": "Mobile-app quality is becoming an increasingly important issue. 
These apps are generally delivered through app stores that let users post reviews. These reviews provide a rich data source you can leverage to understand user-reported issues. Researchers qualitatively studied 6,390 low-rated user reviews for 20 free-to-download iOS apps. They uncovered 12 types of user complaints. The most frequent complaints were functional errors, feature requests, and app crashes. Complaints about privacy and ethical issues and hidden app costs most negatively affected ratings. In 11 percent of the reviews, users attributed their complaints to a recent app update. This study provides insight into the user-reported issues of iOS apps, along with their frequency and impact, which can help developers better prioritize their limited quality assurance resources.", "title": "" }, { "docid": "d9fcfc15c1c310aef6eec96e230074d1", "text": "There is intense interest in applying machine learning to problems of causal inference in fields such as healthcare, economics and education. In particular, individual-level causal inference has important applications such as precision medicine. We give a new theoretical analysis and family of algorithms for predicting individual treatment effect (ITE) from observational data, under the assumption known as strong ignorability. The algorithms learn a “balanced” representation such that the induced treated and control distributions look similar. We give a novel, simple and intuitive generalization-error bound showing that the expected ITE estimation error of a representation is bounded by a sum of the standard generalization-error of that representation and the distance between the treated and control distributions induced by the representation. We use Integral Probability Metrics to measure distances between distributions, deriving explicit bounds for the Wasserstein and Maximum Mean Discrepancy (MMD) distances. Experiments on real and simulated data show the new algorithms match or outperform the state-of-the-art.", "title": "" }, { "docid": "c817e872fa02f93ae967168a5aa15d20", "text": "We introduce an SIR particle filter for tracking civilian targets including vehicles and pedestrians in dual-band midwave/longwave infrared imagery as well as a novel dual-band track consistency check for triggering appearance model updates. Because of the paucity of available dual-band data, we constructed a custom sensor to acquire the test sequences. The proposed algorithm is robust against magnification changes, aspect changes, and clutter and successfully tracked all 17 cases tested, including two partial occlusions. Future work is needed to comprehensively evaluate performance of the algorithm against state-of-the-art video trackers, especially considering the relatively small number of previous dual-band tracking results that have appeared.", "title": "" }, { "docid": "9f5f79a19d3a181f5041a7b5911db03a", "text": "BACKGROUND\nNucleoside analogues against herpes simplex virus (HSV) have been shown to suppress shedding of HSV type 2 (HSV-2) on genital mucosal surfaces and may prevent sexual transmission of HSV.\n\n\nMETHODS\nWe followed 1484 immunocompetent, heterosexual, monogamous couples: one with clinically symptomatic genital HSV-2 and one susceptible to HSV-2. The partners with HSV-2 infection were randomly assigned to receive either 500 mg of valacyclovir once daily or placebo for eight months. The susceptible partner was evaluated monthly for clinical signs and symptoms of genital herpes. 
Source partners were followed for recurrences of genital herpes; 89 were enrolled in a substudy of HSV-2 mucosal shedding. Both partners were counseled on safer sex and were offered condoms at each visit. The predefined primary end point was the reduction in transmission of symptomatic genital herpes.\n\n\nRESULTS\nClinically symptomatic HSV-2 infection developed in 4 of 743 susceptible partners who were given valacyclovir, as compared with 16 of 741 who were given placebo (hazard ratio, 0.25; 95 percent confidence interval, 0.08 to 0.75; P=0.008). Overall, acquisition of HSV-2 was observed in 14 of the susceptible partners who received valacyclovir (1.9 percent), as compared with 27 (3.6 percent) who received placebo (hazard ratio, 0.52; 95 percent confidence interval, 0.27 to 0.99; P=0.04). HSV DNA was detected in samples of genital secretions on 2.9 percent of the days among the HSV-2-infected (source) partners who received valacyclovir, as compared with 10.8 percent of the days among those who received placebo (P<0.001). The mean rates of recurrence were 0.11 per month and 0.40 per month, respectively (P<0.001).\n\n\nCONCLUSIONS\nOnce-daily suppressive therapy with valacyclovir significantly reduces the risk of transmission of genital herpes among heterosexual, HSV-2-discordant couples.", "title": "" }, { "docid": "945f94bd0022e14c1726cb36dd5deefc", "text": "This paper introduces a mobile human airbag system designed for fall protection for the elderly. A Micro Inertial Measurement Unit ( muIMU) of 56 mm times 23 mm times 15 mm in size is built. This unit consists of three dimensional MEMS accelerometers, gyroscopes, a Bluetooth module and a Micro Controller Unit (MCU). It records human motion information, and, through the analysis of falls using a high-speed camera, a lateral fall can be determined by gyro threshold. A human motion database that includes falls and other normal motions (walking, running, etc.) is set up. Using a support vector machine (SVM) training process, we can classify falls and other normal motions successfully with a SVM filter. Based on the SVM filter, an embedded digital signal processing (DSP) system is developed for real-time fall detection. In addition, a smart mechanical airbag deployment system is finalized. The response time for the mechanical trigger is 0.133 s, which allows enough time for compressed air to be released before a person falls to the ground. The integrated system is tested and the feasibility of the airbag system for real-time fall protection is demonstrated.", "title": "" }, { "docid": "20830c435c95317fbd189341ff5cdebd", "text": "Relational databases store a significant amount of the worlds data. However, accessing this data currently requires users to understand a query language such as SQL. We propose Seq2SQL, a deep neural network for translating natural language questions to corresponding SQL queries. Our model uses rewards from inthe-loop query execution over the database to learn a policy to generate the query, which contains unordered parts that are less suitable for optimization via cross entropy loss. Moreover, Seq2SQL leverages the structure of SQL to prune the space of generated queries and significantly simplify the generation problem. In addition to the model, we release WikiSQL, a dataset of 80654 hand-annotated examples of questions and SQL queries distributed across 24241 tables from Wikipedia that is an order of magnitude larger than comparable datasets. 
By applying policy-based reinforcement learning with a query execution environment to WikiSQL, Seq2SQL outperforms a state-of-the-art semantic parser, improving execution accuracy from 35.9% to 59.4% and logical form accuracy from 23.4% to 48.3%.", "title": "" } ]
scidocsrr
bbe53e97217ac3ad077acae6c04db5fa
Efficient Markov Logic Inference for Natural Language Semantics
[ { "docid": "26a599c22c173f061b5d9579f90fd888", "text": "markov logic an interface layer for artificial markov logic an interface layer for artificial shinichi tsukada in size 22 syyjdjbook.buncivy yumina ooba in size 24 ajfy7sbook.ztoroy okimi in size 15 edemembookkey.16mb markov logic an interface layer for artificial intelligent systems (ai-2) ubc computer science interface layer for artificial intelligence daniel lowd essential principles for autonomous robotics markovlogic: aninterfacelayerfor arti?cialintelligence official encyclopaedia of sheffield united football club hot car hot car firext answers || 2007 acura tsx hitch manual course syllabus university of texas at dallas jump frog jump cafebr 1994 chevy silverado 1500 engine ekpbs readings in earth science alongs johnson owners manual pdf firext thomas rescues the diesels cafebr dead sea scrolls and the jewish origins of christianity install gimp help manual by iitsuka asao vox diccionario abreviado english spanis mdmtv nobutaka in size 26 bc13xqbookog.xxuz mechanisms in b cell neoplasia 1992 workshop at the spocks world diane duane nabbit treasury of saints fiores reasoning with probabilistic university of texas at austin gp1300r yamaha waverunner service manua by takisawa tomohide repair manual haier hpr10xc6 air conditioner birdz mexico icons mexico icons oobags asus z53 manual by hatsutori yoshino industrial level measurement by haruyuki morimoto", "title": "" } ]
[ { "docid": "ef15cf49c90ef4b115b42ee96fa24f93", "text": "Visual question answering (VQA) is challenging because it requires a simultaneous understanding of both the visual content of images and the textual content of questions. The approaches used to represent the images and questions in a fine-grained manner and questions and to fuse these multimodal features play key roles in performance. Bilinear pooling based models have been shown to outperform traditional linear models for VQA, but their high-dimensional representations and high computational complexity may seriously limit their applicability in practice. For multimodal feature fusion, here we develop a Multi-modal Factorized Bilinear (MFB) pooling approach to efficiently and effectively combine multi-modal features, which results in superior performance for VQA compared with other bilinear pooling approaches. For fine-grained image and question representation, we develop a ‘co-attention’ mechanism using an end-to-end deep network architecture to jointly learn both the image and question attentions. Combining the proposed MFB approach with co-attention learning in a new network architecture provides a unified model for VQA. Our experimental results demonstrate that the single MFB with co-attention model achieves new state-of-theart performance on the real-world VQA dataset. Code available at https://github.com/yuzcccc/mfb.", "title": "" }, { "docid": "400d7ef5f744b41091221c1aebc46cf0", "text": "This paper presents the design and analysis of a novel machine family-the enclosed-rotor Halbach-array permanent-magnet brushless dc motors for spacecraft applications. The initial design, selection of major parameters, and air-gap magnetic flux density are estimated using the analytical model of the machine. The proportion of the Halbach array in the machine is optimized using finite element analysis to obtain a near-trapezoidal flux pattern. The machine is found to provide uniform air-gap flux density along the radius, thus avoiding circulating currents in stator conductors and thereby reducing torque ripple. Furthermore, the design is validated with experimental results on a fabricated machine and is found to suit the design requirements of critical spacecraft applications.", "title": "" }, { "docid": "832e1a93428911406759f696eb9cb101", "text": "Reinforcement learning provides both qualitative and quantitative frameworks for understanding and modeling adaptive decision-making in the face of rewards and punishments. Here we review the latest dispatches from the forefront of this field, and map out some of the territories where lie monsters.", "title": "" }, { "docid": "7588bd6798d8c2fd891acaf3c64c675f", "text": "OBJECTIVE\nThis article presents a case report of a child with poor sensory processing and describes the disorders impact on the child's occupational behavior and the changes in occupational performance during 10 months of occupational therapy using a sensory integrative approach (OT-SI).\n\n\nMETHOD\nRetrospective chart review of assessment data and analysis of parent interview data are reviewed. Progress toward goals and objectives is measured using goal attainment scaling. Themes from parent interview regarding past and present occupational challenges are presented.\n\n\nRESULTS\nNotable improvements in occupational performance are noted on goal attainment scales, and these are consistent with improvements in behavior. 
Parent interview data indicate noteworthy progress in the child's ability to participate in home, school, and family activities.\n\n\nCONCLUSION\nThis case report demonstrates a model for OT-SI. The findings support the theoretical underpinnings of sensory integration theory: that improvement in the ability to process and integrate sensory input will influence adaptive behavior and occupational performance. Although these findings cannot be generalized, they provide preliminary evidence supporting the theory and the effectiveness of this approach.", "title": "" }, { "docid": "a39b11d66b368bd48b056612a3e268f7", "text": "The Unified Modeling Language (UML) is accepted today as an important standard for developing software. UML tools however provide little support for validating and checking models in early development phases. There is also no substantial support for the Object Constraint Language (OCL). We present an approach for the validation of UML models and OCL constraints based on animation and certification. The USE tool (UML-based Specification Environment) supports analysts, designers and developers in executing UML models and checking OCL constraints and thus enables them to employ model-driven techniques for software production.", "title": "" }, { "docid": "235f7fbae50e3952c74cbd67345acb74", "text": "The paper presents research results in the field of small antennas obtain ed at the Department of Wireless Communications, Faculty of Electrical Engineering and Computing, University of Zagreb. A study comparing the application of several miniaturization techniques on a shorted patch antenn is presented. Single and dual band shorted patch antennas with notches and/or slot are introduced. A PIFA d esigned for application in mobile GSM terminals is described. The application of stacked shorted patches as arr ay elements for a mobile communication base station as well as for electromagnetic field sensor is presented. The design of single and dual band folded monopoles is described. Prototypes of the presented antennas have be en manufactured and their characteristics were verified by measurements.", "title": "" }, { "docid": "fa151d877d387a250caa8d1c1da32a10", "text": "Recently, unikernels have emerged as an exploration of minimalist software stacks to improve the security of applications in the cloud. In this paper, we propose extending the notion of minimalism beyond an individual virtual machine to include the underlying monitor and the interface it exposes. We propose unikernel monitors . Each unikernel is bundled with a tiny, specialized monitor that only contains what the unikernel needs both in terms of interface and implementation. Unikernel monitors improve isolation through minimal interfaces, reduce complexity, and boot unikernels quickly. Our initial prototype,ukvm, is less than 5% the code size of a traditional monitor, and boots MirageOS unikernels in as little as 10ms (8× faster than a traditional monitor).", "title": "" }, { "docid": "51bc87524f064f715bb5876f21468d9d", "text": "Cloud computing provides an effective business model for the deployment of IT infrastructure, platform, and software services. Often, facilities are outsourced to cloud providers and this offers the service consumer virtualization technologies without the added cost burden of development. However, virtualization introduces serious threats to service delivery such as Denial of Service (DoS) attacks, Cross-VM Cache Side Channel attacks, Hypervisor Escape and Hyper-jacking. 
One of the most sophisticated forms of attack is the cross-VM cache side channel attack that exploits shared cache memory between VMs. A cache side channel attack results in side channel data leakage, such as cryptographic keys. Various techniques used by the attackers to launch cache side channel attack are presented, as is a critical analysis of countermeasures against cache side channel attacks.", "title": "" }, { "docid": "1ebf2152d5624261951bebd68c306d5e", "text": "A dual active bridge (DAB) is a zero-voltage switching (ZVS) high-power isolated dc-dc converter. The development of a 15-kV SiC insulated-gate bipolar transistor switching device has enabled a noncascaded medium voltage (MV) isolated dc-dc DAB converter. It offers simple control compared to a cascaded topology. However, a compact-size high frequency (HF) DAB transformer has significant parasitic capacitances for such voltage. Under high voltage and high dV/dT switching, the parasitics cause electromagnetic interference and switching loss. They also pose additional challenges for ZVS. The device capacitance and slowing of dV/dT play a major role in deadtime selection. Both the deadtime and transformer parasitics affect the ZVS operation of the DAB. Thus, for the MV-DAB design, the switching characteristics of the devices and MV HF transformer parasitics have to be closely coupled. For the ZVS mode, the current vector needs to be between converter voltage vectors with a certain phase angle defined by deadtime, parasitics, and desired converter duty ratio. This paper addresses the practical design challenges for an MV-DAB application.", "title": "" }, { "docid": "664b003cedbca63ebf775bd9f062b8f1", "text": "Since 1900, soil organic matter (SOM) in farmlands worldwide has declined drastically as a result of carbon turnover and cropping systems. Over the past 17 years, research trials were established to evaluate the efficacy of different commercial humates products on potato production. Data from humic acid (HA) trials showed that different cropping systems responded differently to different products in relation to yield and quality. Important qualifying factors included: source; concentration; processing; chelating or complexing capacity of the humic acid products; functional groups (Carboxyl; Phenol; Hydroxyl; Ketone; Ester; Ether; Amine), rotation and soil quality factors; consistency of the product in enhancing yield and quality of potato crops; mineralization effect; and influence on fertilizer use efficiency. Properties of humic substances, major constituents of soil organic matter, include chelation, mineralization, buffer effect, clay mineral-organic interaction, and cation exchange. Humates increase phosphorus availability by complexing ions into stable compounds, allowing the phosphorus ion to remain exchangeable for plants’ uptake. Collectively, the consistent use of good quality products in our replicated research plots in different years resulted in a yield increase from 11.4% to the maximum of 22.3%. Over the past decade, there has been a major increase in the quality of research and development of organic and humic acid products by some well-established manufacturers. Our experimentations with these commercial products showed an increase in the yield and quality of crops.", "title": "" }, { "docid": "271f3780fe6c1d58a8f5dffbd182e1ac", "text": "We are presenting the design of a high gain printed antenna array consisting of 420 identical patch antennas intended for FMCW radar at Ku band. 
The array exhibits 3 dB-beamwidths of 2° and 10° in H and E plane, respectively, side lobe suppression better than 20 dB, gain about 30 dBi and VSWR less than 2 in the frequency range 17.1 - 17.6 GHz. Excellent antenna efficiency that is between 60 and 70 % is achieved by proper impedance matching throughout the array and by using series feeding architecture with both resonant and traveling-wave feed. Enhanced cross polarization suppression is obtained by anti-phase feeding of the upper and the lower halves of the antenna. Overall antenna dimensions are 31 λ0 × 7.5 λ0.", "title": "" }, { "docid": "316d341dd5ea6ebd1d4618b5a1a1b812", "text": "OBJECTIVE\nBecause of poor overall survival in advanced ovarian malignancies, patients often turn to alternative therapies despite controversy surrounding their use. Currently, the majority of cancer patients combine some form of complementary and alternative medicine with conventional therapies. Of these therapies, antioxidants, added to chemotherapy, are a frequent choice.\n\n\nMETHODS\nFor this preliminary report, two patients with advanced epithelial ovarian cancer were studied. One patient had Stage IIIC papillary serous adenocarcinoma, and the other had Stage IIIC mixed papillary serous and seromucinous adenocarcinoma. Both patients were optimally cytoreduced prior to first-line carboplatinum/paclitaxel chemotherapy. Patient 2 had a delay in initiation of chemotherapy secondary to co-morbid conditions and had evidence for progression of disease prior to institution of therapy. Patient 1 began oral high-dose antioxidant therapy during her first month of therapy. This consisted of oral vitamin C, vitamin E, beta-carotene, coenzyme Q-10 and a multivitamin/mineral complex. In addition to the oral antioxidant therapy, patient 1 added parenteral ascorbic acid at a total dose of 60 grams given twice weekly at the end of her chemotherapy and prior to consolidation paclitaxel chemotherapy. Patient 2 added oral antioxidants just prior to beginning chemotherapy, including vitamin C, beta-carotene, vitamin E, coenzyme Q-10 and a multivitamin/mineral complex. Patient 2 received six cycles of paclitaxel/carboplatinum chemotherapy and refused consolidation chemotherapy despite radiographic evidence of persistent disease. Instead, she elected to add intravenous ascorbic acid at 60 grams twice weekly. Both patients gave written consent for the use of their records in this report.\n\n\nRESULTS\nPatient 1 had normalization of her CA-125 after the first cycle of chemotherapy and has remained normal, almost 3(1/2) years after diagnosis. CT scans of the abdomen and pelvis remain without evidence of recurrence. Patient 2 had normalization of her CA-125 after the first cycle of chemotherapy. After her first round of chemotherapy, the patient was noted to have residual disease in the pelvis. She declined further chemotherapy and added intravenous ascorbic acid. There is no evidence for recurrent disease by physical examination, and her CA-125 has remained normal three years after diagnosis.\n\n\nCONCLUSION\nAntioxidants, when added adjunctively, to first-line chemotherapy, may improve the efficacy of chemotherapy and may prove to be safe. A review of four common antioxidants follows. 
Because of the positive results found in these two patients, a randomized controlled trial is now underway at the University of Kansas Medical Center evaluating safety and efficacy of antioxidants when added to chemotherapy in newly diagnosed ovarian cancer.", "title": "" }, { "docid": "4a8c8c09fe94cddbc9cadefa014b1165", "text": "A solution to trajectory-tracking control problem for a four-wheel-steering vehicle (4WS) is proposed using sliding-mode approach. The advantage of this controller over current control procedure is that it is applicable to a large class of vehicles with single or double steering and to a tracking velocity that is not necessarily constant. The sliding-mode approach make the solutions robust with respect to errors and disturbances, as demonstrated by the simulation results.", "title": "" }, { "docid": "75c5a3f0d57a6a39868b28685d92d7b5", "text": "The complexity of the healthcare system is increasing, and the moral duty to provide quality patient care is threatened by the sky rocketing cost of healthcare. A major concern for both patients and the hospital’s economic bottom line are hospital-acquired infections (HAIs), including central line associated blood stream infections (CLABSIs). These often serious infections result in significantly increased patient morbidity, mortality, length of stay, and use of health care resources. Historically, most infection prevention and control measures have focused on aseptic technique of health care providers and in managing the environment. Emerging evidence for the role of host decontamination in preventing HAIs is shifting the paradigm and paving a new path for novel infection prevention interventions. Chlorhexidine gluconate has a long-standing track record of being a safe and effective product with broad antiseptic activity, and little evidence of emerging resistance. As the attention is directed toward control and prevention of HAIs, chlorhexidine-containing products may prove to be a vital tool in infection control. Increasing rates of multidrug-resistant organisms (MDROs), including methicillinresistant Staphylococcus aureus (MRSA), Acinetobacter baumanniic and vancomycin-resistant Enterococcus (VRE) demand that evidence-based research drive all interventions to prevent transmission of these organisms and the development of HAIs. This review of literature examines current evidence related to daily chlorhexidine gluconate bathing and its impact on CLABSI rates in the adult critically ill patient population.", "title": "" }, { "docid": "1b6e35187b561de95051f67c70025152", "text": "Ž . The technology acceptance model TAM proposes that ease of use and usefulness predict applications usage. The current research investigated TAM for work-related tasks with the World Wide Web as the application. One hundred and sixty-three subjects responded to an e-mail survey about a Web site they access often in their jobs. The results support TAM. They also Ž . Ž . demonstrate that 1 ease of understanding and ease of finding predict ease of use, and that 2 information quality predicts usefulness for revisited sites. In effect, the investigation applies TAM to help Web researchers, developers, and managers understand antecedents to users’ decisions to revisit sites relevant to their jobs. q 2000 Elsevier Science B.V. 
All rights reserved.", "title": "" }, { "docid": "3440de9ea0f76ba39949edcb5e2a9b54", "text": "This document is not intended to create, does not create, and may not be relied upon to create any rights, substantive or procedural, enforceable by law by any party in any matter civil or criminal. Findings and conclusions of the research reported here are those of the authors and do not necessarily reflect the official position or policies of the U.S. Department of Justice. The products, manufacturers, and organizations discussed in this document are presented for informational purposes only and do not constitute product approval or endorsement by the Much of crime mapping is devoted to detecting high-crime-density areas known as hot spots. Hot spot analysis helps police identify high-crime areas, types of crime being committed, and the best way to respond. This report discusses hot spot analysis techniques and software and identifies when to use each one. The visual display of a crime pattern on a map should be consistent with the type of hot spot and possible police action. For example, when hot spots are at specific addresses, a dot map is more appropriate than an area map, which would be too imprecise. In this report, chapters progress in sophis­ tication. Chapter 1 is for novices to crime mapping. Chapter 2 is more advanced, and chapter 3 is for highly experienced analysts. The report can be used as a com­ panion to another crime mapping report ■ Identifying hot spots requires multiple techniques; no single method is suffi­ cient to analyze all types of crime. ■ Current mapping technologies have sig­ nificantly improved the ability of crime analysts and researchers to understand crime patterns and victimization. ■ Crime hot spot maps can most effective­ ly guide police action when production of the maps is guided by crime theories (place, victim, street, or neighborhood).", "title": "" }, { "docid": "4845233571c0572570445f4e3ca4ebc2", "text": "This publication contains reprint articles for which IEEE does not hold copyright. You may purchase this article from the Ask*IEEE Document Delivery Service at http://www.ieee.org/services/askieee/", "title": "" }, { "docid": "9075e2ae2f1345b91738f3d8ac34cfb2", "text": "We explore how well the intersection between our own everyday memories and those captured by our smartphones can be used for what we call autobiographical authentication-a challenge-response authentication system that queries users about day-to-day experiences. Through three studies-two on MTurk and one field study-we found that users are good, but make systematic errors at answering autobiographical questions. Using Bayesian modeling to account for these systematic response errors, we derived a formula for computing a confidence rating that the attempting authenticator is the user from a sequence of question-answer responses. We tested our formula against five simulated adversaries based on plausible real-life counterparts. Our simulations indicate that our model of autobiographical authentication generally performs well in assigning high confidence estimates to the user and low confidence estimates to impersonating adversaries.", "title": "" }, { "docid": "840463688f36a5fd14efa8a1a35bfb8e", "text": "In this paper, we propose a new hybrid ant colony optimization (ACO) algorithm for feature selection (FS), called ACOFS, using a neural network. A key aspect of this algorithm is the selection of a subset of salient features of reduced size. 
ACOFS uses a hybrid search technique that combines the advantages of wrapper and filter approaches. In order to facilitate such a hybrid search, we designed new sets of rules for pheromone update and heuristic information measurement. On the other hand, the ants are guided in correct directions while constructing graph (subset) paths using a bounded scheme in each and every step in the algorithm. The above combinations ultimately not only provide an effective balance between exploration and exploitation of ants in the search, but also intensify the global search capability of ACO for a high-quality solution in FS. We evaluate the performance of ACOFS on eight benchmark classification datasets and one gene expression dataset, which have dimensions varying from 9 to 2000. Extensive experiments were conducted to ascertain how ACOFS works in FS tasks. We also compared the performance of ACOFS with the results obtained from seven existing well-known FS algorithms. The comparison details show that ACOFS has a remarkable ability to generate reduced-size subsets of salient features while yielding significant classification accuracy. 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "3dfcb00385237c6cb481a5a79a02eb12", "text": "Genetic variability of DNA repair mechanisms influences chemotherapy treatment outcome of gastric cancer. We conducted a cohort study to investigate the role of ERCC1-ERCC2 gene polymorphisms in the chemotherapy response and clinic outcome of gastric cancer. Between March 2011 and March 2013, 228 gastric patients who were newly diagnosed with histopathology were enrolled in our study. Genotypes of ERCC1 rs11615, rs3212986, rs2298881 and ERCC2 rs3212986 were conducted by polymerase chain reaction restriction fragment length polymorphism (PCR-RFLP) assay. We found that individuals carrying TT genotype of ERCC1 rs11615 and CC genotype of ERCC1 rs2298881 were associated with better response to chemotherapy and longer survival time of gastric cancer. Moreover, individuals with AA genotype of ERCC2 rs1799793 were correlated with shorter survival of gastric cancer. In conclusion, ERCC1 rs11615, rs2298881 and ERCC2 rs1799793 polymorphism play an important role in the treatment outcome of gastric cancer.", "title": "" } ]
scidocsrr
21a1dea56077f4d18daabf859ea5e91a
Warmth and Competence as Universal Dimensions of Social Perception: The Stereotype Content Model and the BIAS Map
[ { "docid": "c36fec7cebe04627ffcd9a689df8c5a2", "text": "In seems there are two dimensions that underlie most judgments of traits, people, groups, and cultures. Although the definitions vary, the first makes reference to attributes such as competence, agency, and individualism, and the second to warmth, communality, and collectivism. But the relationship between the two dimensions seems unclear. In trait and person judgment, they are often positively related; in group and cultural stereotypes, they are often negatively related. The authors report 4 studies that examine the dynamic relationship between these two dimensions, experimentally manipulating the location of a target of judgment on one and examining the consequences for the other. In general, the authors' data suggest a negative dynamic relationship between the two, moderated by factors the impact of which they explore.", "title": "" }, { "docid": "ae71548900779de3ee364a6027b75a02", "text": "The authors suggest that the traditional conception of prejudice--as a general attitude or evaluation--can problematically obscure the rich texturing of emotions that people feel toward different groups. Derived from a sociofunctional approach, the authors predicted that groups believed to pose qualitatively distinct threats to in-group resources or processes would evoke qualitatively distinct and functionally relevant emotional reactions. Participants' reactions to a range of social groups provided a data set unique in the scope of emotional reactions and threat beliefs explored. As predicted, different groups elicited different profiles of emotion and threat reactions, and this diversity was often masked by general measures of prejudice and threat. Moreover, threat and emotion profiles were associated with one another in the manner predicted: Specific classes of threat were linked to specific, functionally relevant emotions, and groups similar in the threat profiles they elicited were also similar in the emotion profiles they elicited.", "title": "" }, { "docid": "713010fe0ee95840e6001410f8a164cc", "text": "Three studies tested the idea that when social identity is salient, group-based appraisals elicit specific emotions and action tendencies toward out-groups. Participants' group memberships were made salient and the collective support apparently enjoyed by the in-group was measured or manipulated. The authors then measured anger and fear (Studies 1 and 2) and anger and contempt (Study 3), as well as the desire to move against or away from the out-group. Intergroup anger was distinct from intergroup fear, and the inclination to act against the out-group was distinct from the tendency to move away from it. Participants who perceived the in-group as strong were more likely to experience anger toward the out-group and to desire to take action against it. The effects of perceived in-group strength on offensive action tendencies were mediated by anger.", "title": "" } ]
[ { "docid": "7daf5ad71bda51eacc68f0a1482c3e7e", "text": "Nearly every modern mobile device includes two cameras. With advances in technology the resolution of these sensors has constantly increased. While this development provides great convenience for users, for example with video-telephony or as dedicated camera replacement, the security implications of including high resolution cameras on such devices has yet to be considered in greater detail. With this paper we demonstrate that an attacker may abuse the cameras in modern smartphones to extract valuable information from a victim. First, we consider exploiting a front-facing camera to capture a user’s keystrokes. By observing facial reflections, it is possible to capture user input with the camera. Subsequently, individual keystrokes can be extracted from the images acquired with the camera. Furthermore, we demonstrate that these cameras can be used by an attacker to extract and forge the fingerprints of a victim. This enables an attacker to perform a wide range of malicious actions, including authentication bypass on modern biometric systems and falsely implicating a person by planting fingerprints in a crime scene. Finally, we introduce several mitigation strategies for the identified threats.", "title": "" }, { "docid": "11f2adab1fb7a93e0c9009a702389af1", "text": "OBJECTIVE\nThe authors present clinical outcome data and satisfaction of patients who underwent minimally invasive vertebral body corpectomy and cage placement via a mini-open, extreme lateral, transpsoas approach and posterior short-segment instrumentation for lumbar burst fractures.\n\n\nMETHODS\nPatients with unstable lumbar burst fractures who underwent corpectomy and anterior column reconstruction via a mini-open, extreme lateral, transpsoas approach with short-segment posterior fixation were reviewed retrospectively. Demographic information, operative parameters, perioperative radiographic measurements, and complications were analyzed. Patient-reported outcome instruments (Oswestry Disability Index [ODI], 12-Item Short Form Health Survey [SF-12]) and an anterior scar-specific patient satisfaction questionnaire were recorded at the latest follow-up.\n\n\nRESULTS\nTwelve patients (7 men, 5 women, average age 42 years, range 22-68 years) met the inclusion criteria. Lumbar corpectomies with anterior column support were performed (L-1, n = 8; L-2, n = 2; L-3, n = 2) and supplemented with short-segment posterior instrumentation (4 open, 8 percutaneous). Four patients had preoperative neurological deficits, all of which improved after surgery. No new neurological complications were noted. The anterior incision on average was 6.4 cm (range 5-8 cm) in length, caused mild pain and disability, and was aesthetically acceptable to the large majority of patients. Three patients required chest tube placement for pleural violation, and 1 patient required reoperation for cage subsidence/hardware failure. Average clinical follow-up was 38 months (range 16-68 months), and average radiographic follow-up was 37 months (range 6-68 months). Preoperative lumbar lordosis and focal lordosis were significantly improved/maintained after surgery. 
Patients were satisfied with their outcomes, had minimal/moderate disability (average ODI score 20, range 0-52), and had good physical (SF-12 physical component score 41.7% ± 10.4%) and mental health outcomes (SF-12 mental component score 50.2% ± 11.6%) after surgery.\n\n\nCONCLUSIONS\nAnterior corpectomy and cage placement via a mini-open, extreme lateral, transpsoas approach supplemented by short-segment posterior instrumentation is a safe, effective alternative to conventional approaches in the treatment of single-level unstable burst fractures and is associated with excellent functional outcomes and patient satisfaction.", "title": "" }, { "docid": "b5fea029d64084089de8e17ae9debffc", "text": "While there has been increasing interest in the task of describing video with natural language, current computer vision algorithms are still severely limited in terms of the variability and complexity of the videos and their associated language that they can recognize. This is in part due to the simplicity of current benchmarks, which mostly focus on specific fine-grained domains with limited videos and simple descriptions. While researchers have provided several benchmark datasets for image captioning, we are not aware of any large-scale video description dataset with comprehensive categories yet diverse video content. In this paper we present MSR-VTT (standing for \"MSRVideo to Text\") which is a new large-scale video benchmark for video understanding, especially the emerging task of translating video to text. This is achieved by collecting 257 popular queries from a commercial video search engine, with 118 videos for each query. In its current version, MSR-VTT provides 10K web video clips with 41.2 hours and 200K clip-sentence pairs in total, covering the most comprehensive categories and diverse visual content, and representing the largest dataset in terms of sentence and vocabulary. Each clip is annotated with about 20 natural sentences by 1,327 AMT workers. We present a detailed analysis of MSR-VTT in comparison to a complete set of existing datasets, together with a summarization of different state-of-the-art video-to-text approaches. We also provide an extensive evaluation of these approaches on this dataset, showing that the hybrid Recurrent Neural Networkbased approach, which combines single-frame and motion representations with soft-attention pooling strategy, yields the best generalization capability on MSR-VTT.", "title": "" }, { "docid": "821c3c62ad0f36fc95692e4bc9db8953", "text": "Skin metastases occur in 0.6%-10.4% of all patients with cancer and represent 2% of all skin tumors. Skin metastases from visceral malignancies are important for dermatologists and dermatopathologists because of their variable clinical appearance and presentation, frequent delay and failure in their diagnosis, relative proportion of different internal malignancies metastasizing to the skin, and impact on morbidity, prognosis, and treatment. Another factor to take into account is that cutaneous metastasis may be the first sign of clinically silent visceral cancer. The relative frequencies of metastatic skin disease tend to correlate with the frequency of the different types of primary cancer in each sex. Thus, women with skin metastases have the following distribution in decreasing order of frequency of primary malignancies: breast, ovary, oral cavity, lung, and large intestine. 
In men, the distribution is as follows: lung, large intestine, oral cavity, kidney, breast, esophagus, pancreas, stomach, and liver. A wide morphologic spectrum of clinical appearances has been described in cutaneous metastases. This variable clinical morphology included nodules, papules, plaques, tumors, and ulcers. From a histopathologic point of view, there are 4 main morphologic patterns of cutaneous metastases involving the dermis, namely, nodular, infiltrative, diffuse, and intravascular. Generally, cutaneous metastases herald a poor prognosis. The average survival time of patients with skin metastases is a few months. In this article, we review the clinicopathologic and immunohistochemical characteristics of cutaneous metastases from internal malignancies, classify the most common cutaneous metastases, and identify studies that may assist in diagnosing the origin of a cutaneous metastasis.", "title": "" }, { "docid": "bd077cbf7785fc84e98724558832aaf6", "text": "Two process tracing techniques, explicit information search and verbal protocols, were used to examine the information processing strategies subjects use in reaching a decision. Subjects indicated preferences among apartments. The number of alternatives available and number of dimensions of information available was varied across sets of apartments. When faced with a two alternative situation, the subjects employed search strategies consistent with a compensatory decision process. In contrast, when faced with a more complex (multialternative) decision task, the subjects employed decision strategies designed to eliminate some of the available alternatives as quickly as possible and on the basis of a limited amount of information search and evaluation. The results demonstrate that the information processing leading to choice will vary as a function of task complexity. An integration of research in decision behavior with the methodology and theory of more established areas of cognitive psychology, such as human problem solving, is advocated.", "title": "" }, { "docid": "edeb56280e9645133b8ffbf40bcd9287", "text": "The design, architecture and VLSI implementation of an image compression algorithm for high-frame rate, multi-view wireless endoscopy is presented. By operating directly on Bayer color filter array image the algorithm achieves both high overall energy efficiency and low implementation cost. It uses two-dimensional discrete cosine transform to decorrelate image values in each $$4\\times 4$$ 4 × 4 block. Resulting coefficients are encoded by a new low-complexity yet efficient entropy encoder. An adaptive deblocking filter on the decoder side removes blocking effects and tiling artifacts on very flat image, which enhance the final image quality. The proposed compressor, including a 4 KB FIFO, a parallel to serial converter and a forward error correction encoder, is implemented in 180 nm CMOS process. It consumes 1.32 mW at 50 frames per second (fps) and only 0.68 mW at 25 fps at 3 MHz clock. 
Low silicon area 1.1 mm × 1.1 mm, high energy efficiency (27 μJ/frame) and throughput offer excellent scalability to handle image processing tasks in new, emerging, multi-view, robotic capsules.", "title": "" }, { "docid": "1a063741d53147eb6060a123bff96c27", "text": "OBJECTIVE\nThe assessment of cognitive functions of adults with attention deficit hyperactivity disorder (ADHD) comprises self-ratings of cognitive functioning (subjective assessment) as well as psychometric testing (objective neuropsychological assessment). The aim of the present study was to explore the utility of these assessment strategies in predicting neuropsychological impairments of adults with ADHD as determined by both approaches.\n\n\nMETHOD\nFifty-five adults with ADHD and 66 healthy participants were assessed with regard to cognitive functioning in several domains by employing subjective and objective measurement tools. Significance and effect sizes for differences between groups as well as the proportion of patients with impairments were analyzed. Furthermore, logistic regression analyses were carried out in order to explore the validity of subjective and objective cognitive measures in predicting cognitive impairments.\n\n\nRESULTS\nBoth subjective and objective assessment tools revealed significant cognitive dysfunctions in adults with ADHD. The majority of patients displayed considerable impairments in all cognitive domains assessed. A comparison of effect sizes, however, showed larger dysfunctions in the subjective assessment than in the objective assessment. Furthermore, logistic regression models indicated that subjective cognitive complaints could not be predicted by objective measures of cognition and vice versa.\n\n\nCONCLUSIONS\nSubjective and objective assessment tools were found to be sensitive in revealing cognitive dysfunctions of adults with ADHD. Because of the weak association between subjective and objective measurements, it was concluded that subjective and objective measurements are both important for clinical practice but may provide distinct types of information and capture different aspects of functioning.", "title": "" }, { "docid": "03e267aeeef5c59aab348775d264afce", "text": "Visual relations, such as person ride bike and bike next to car, offer a comprehensive scene understanding of an image, and have already shown their great utility in connecting computer vision and natural language. However, due to the challenging combinatorial complexity of modeling subject-predicate-object relation triplets, very little work has been done to localize and predict visual relations. Inspired by the recent advances in relational representation learning of knowledge bases and convolutional object detection networks, we propose a Visual Translation Embedding network (VTransE) for visual relation detection. VTransE places objects in a low-dimensional relation space where a relation can be modeled as a simple vector translation, i.e., subject + predicate ≈ object. We propose a novel feature extraction layer that enables object-relation knowledge transfer in a fully-convolutional fashion that supports training and inference in a single forward/backward pass. To the best of our knowledge, VTransE is the first end-to-end relation detection network. We demonstrate the effectiveness of VTransE over other state-of-the-art methods on two large-scale datasets: Visual Relationship and Visual Genome.
Note that even though VTransE is a purely visual model, it is still competitive to the Lu&#x2019;s multi-modal model with language priors [27].", "title": "" }, { "docid": "052a83669b39822eda51f2e7222074b4", "text": "A class-E synchronous rectifier has been designed and implemented using 0.13-μm CMOS technology. A design methodology based on the theory of time-reversal duality has been used where a class-E amplifier circuit is transformed into a class-E rectifier circuit. The methodology is distinctly different from other CMOS RF rectifier designs which use voltage multiplier techniques. Power losses in the rectifier are analyzed including saturation resistance in the switch, inductor losses, and current/voltage overlap losses. The rectifier circuit includes a 50-Ω single-ended RF input port with on-chip matching. The circuit is self-biased and completely powered from the RF input signal. Experimental results for the rectifier show a peak RF-to-dc conversion efficiency of 30% measured at a frequency of 2.4 GHz.", "title": "" }, { "docid": "f2fdd2f5a945d48c323ae6eb3311d1d0", "text": "Distributed computing systems such as clouds continue to evolve to support various types of scientific applications, especially scientific workflows, with dependable, consistent, pervasive, and inexpensive access to geographically-distributed computational capabilities. Scheduling multiple workflows on distributed computing systems like Infrastructure-as-a-Service (IaaS) clouds is well recognized as a fundamental NP-complete problem that is critical to meeting various types of Quality-of-Service (QoS) requirements. In this paper, we propose a multiobjective optimization workflow scheduling approach based on dynamic game-theoretic model aiming at reducing workflow make-spans, reducing total cost, and maximizing system fairness in terms of workload distribution among heterogeneous cloud virtual machines (VMs). We conduct extensive case studies as well based on various well-known scientific workflow templates and real-world third-party commercial IaaS clouds. Experimental results clearly suggest that our proposed approach outperform traditional ones by achieving lower workflow make-spans, lower cost, and better system fairness.", "title": "" }, { "docid": "8510bcbee74c99c39a5220d54ebf4d97", "text": "We propose a novel algorithm to detect visual saliency from video signals by combining both spatial and temporal information and statistical uncertainty measures. The main novelty of the proposed method is twofold. First, separate spatial and temporal saliency maps are generated, where the computation of temporal saliency incorporates a recent psychological study of human visual speed perception. Second, the spatial and temporal saliency maps are merged into one using a spatiotemporally adaptive entropy-based uncertainty weighting approach. The spatial uncertainty weighing incorporates the characteristics of proximity and continuity of spatial saliency, while the temporal uncertainty weighting takes into account the variations of background motion and local contrast. Experimental results show that the proposed spatiotemporal uncertainty weighting algorithm significantly outperforms state-of-the-art video saliency detection models.", "title": "" }, { "docid": "59bfb330b9ca7460280fecca78383857", "text": "Big data poses many facets and challenges when analyzing data, often described with the five big V’s of Volume, Variety, Velocity, Veracity, and Value. 
However, the most important V – Value can only be achieved when knowledge can be derived from the data. The volume of nowadays datasets make a manual investigation of all data records impossible and automated analysis techniques from data mining or machine learning often cannot be applied in a fully automated fashion to solve many real world analysis problems, and hence, need to be manually trained or adapted. Visual analytics aims to solve this problem with a “human-in-the-loop” approach that provides the analyst with a visual interface that tightly integrates automated analysis techniques with human interaction. However, a holistic understanding of these analytic processes is currently an under-explored research area. A major contribution of this dissertation is a conceptual model-driven approach to visual analytics that focuses on the human-machine interplay during knowledge generation. At its core, it presents the knowledge generation model which is subsequently specialized for human analytic behavior, visual interactive machine learning, and dimensionality reduction. These conceptual processes extend and combine existing conceptual works that aim to establish a theoretical foundation for visual analytics. In addition, this dissertation contributes novel methods to investigate and support human knowledge generation processes, such as semi-automation and recommendation, analytic behavior and trust building, or visual interaction with machine learning. These methods are investigated in close collaboration with real experts from different application domains (such as soccer analysis, linguistic intonation research, and criminal intelligence analysis) and hence, different data characteristics (geospatial movement, time series, and high-dimensional). The results demonstrate that this conceptual approach leads to novel, more tightly integrated, methods that support the analyst in knowledge generation. In a final broader discussion, this dissertation reflects the conceptual and methodological contributions and enumerates research areas at the intersection of data mining, machine learning, visualization, and human-computer interaction research, with the ultimate goal to make big data exploration more effective, efficient, and transparent.", "title": "" }, { "docid": "2dd9bb2536fdc5e040544d09fe3dd4fa", "text": "Low 1/f noise, low-dropout (LDO) regulators are becoming critical for the supply regulation of deep-submicron analog baseband and RF system-on-chip designs. A low-noise, high accuracy LDO regulator (LN-LDO) utilizing a chopper stabilized error amplifier is presented. In order to achieve fast response during load transients, a current-mode feedback amplifier (CFA) is designed as a second stage driving the regulation FET. In order to reduce clock feed-through and 1/f noise accumulation at the chopping frequency, a first-order digital SigmaDelta noise-shaper is used for chopping clock spectral spreading. With up to 1 MHz noise-shaped modulation clock, the LN-LDO achieves a noise spectral density of 32 nV/radic(Hz) and a PSR of 38 dB at 100 kHz. The proposed LDO is shown to reduce the phase noise of an integrated 32 MHz temperature compensated crystal oscillator (TCXO) at 10 kHz offset by 15 dB. Due to reduced 1/f noise requirements, the error amplifier silicon area is reduced by 75%, and the overall regulator area is reduced by 50% with respect to an equivalent noise static regulator. 
The current-mode feedback second stage buffer reduces regulator settling time by 60% in comparison to an equivalent power consumption voltage mode buffer, achieving 0.6 mus settling time for a 25-mA load step. The LN-LDO is designed and fabricated on a 0.25 mum CMOS process with five layers of metal, occupying 0.88 mm2.", "title": "" }, { "docid": "97b212bb8fde4859e368941a4e84ba90", "text": "What appears to be a simple pattern of results—distributed-study opportunities usually produce bettermemory thanmassed-study opportunities—turns out to be quite complicated.Many ‘‘impostor’’ effects such as rehearsal borrowing, strategy changes during study, recency effects, and item skipping complicate the interpretation of spacing experiments. We suggest some best practices for future experiments that diverge from the typical spacing experiments in the literature. Next, we outline themajor theories that have been advanced to account for spacing studies while highlighting the critical experimental evidence that a theory of spacingmust explain. We then propose a tentative verbal theory based on the SAM/REMmodel that utilizes contextual variability and study-phase retrieval to explain the major findings, as well as predict some novel results. Next, we outline the major phenomena supporting testing as superior to restudy on long-term retention tests, and review theories of the testing phenomenon, along with some possible boundary conditions. Finally, we suggest some ways that spacing and testing can be integrated into the classroom, and ask to what extent educators already capitalize on these phenomena. Along the way, we present several new experiments that shed light on various facets of the spacing and testing effects.", "title": "" }, { "docid": "26c003f70bbaade54b84dcb48d2a08c9", "text": "Tricaine methanesulfonate (TMS) is an anesthetic that is approved for provisional use in some jurisdictions such as the United States, Canada, and the United Kingdom (UK). Many hatcheries and research studies use TMS to immobilize fish for marking or transport and to suppress sensory systems during invasive procedures. Improper TMS use can decrease fish viability, distort physiological data, or result in mortalities. Because animals may be anesthetized by junior staff or students who may have little experience in fish anesthesia, training in the proper use of TMS may decrease variability in recovery, experimental results and increase fish survival. This document acts as a primer on the use of TMS for anesthetizing juvenile salmonids, with an emphasis on its use in surgical applications. Within, we briefly describe many aspects of TMS including the legal uses for TMS, and what is currently known about the proper storage and preparation of the anesthetic. We outline methods and precautions for administration and changes in fish behavior during progressively deeper anesthesia and discuss the physiological effects of TMS and its potential for compromising fish health. Despite the challenges of working with TMS, it is currently one of the few legal options available in the USA and in other countries until other anesthetics are approved and is an important tool for the intracoelomic implantation of electronic tags in fish.", "title": "" }, { "docid": "175733c4f95af7f68847acd393cb2a1d", "text": "This study presents an asymmetric broadside coupled balun with low-loss broadband characteristics for mixer designs. The correlation between balun impedance and a 3D multilayer CMOS structure are discussed and analyzed. 
Two asymmetric multilayer meander coupled lines are adopted to implement the baluns. Three balanced mixers that comprise three miniature asymmetric broadside coupled Marchand baluns are implemented to demonstrate the applicability to MOS technology. Both a single and dual balun occupy an area of only 0.06 mm2. The balun achieves a measured bandwidth of over 120%, an insertion loss of better than 4.1 dB (3 dB for an ideal balun) at the center frequency, an amplitude imbalance of less than 1 dB, and a phase imbalance of less than 5deg from 10 to 60 GHz. The first demonstrated circuit is a Ku-band mixer, which is implemented with a miniaturized balun to reduce the chip area by 80%. This 17-GHz mixer yields a conversion loss of better than 6.8 dB with a chip size of 0.24 mm2. The second circuit is a 15-60-GHz broadband single-balanced mixer, which achieves a conversion loss of better than 15 dB and occupies a chip area of 0.24 mm2. A three-conductor miniaturized dual balun is then developed for use in the third mixer. This star mixer incorporates two miniature dual baluns to achieve a conversion loss of better than 15 dB from 27 to 54 GHz, and occupies a chip area of 0.34 mm2.", "title": "" }, { "docid": "46632965f75d0b07c8f35db944277ab1", "text": "The aim of this cross-sectional study was to assess the complications associated with tooth supported fixed dental prosthesis amongst patients reporting at University College of Dentistry Lahore, Pakistan. An interview based questionnaire was used on 112 patients followed by clinical oral examination by two calibrated dentists. Approximately 95% participants were using porcelain fused to metal prosthesis with 60% of prosthesis being used in posterior segments of mouth. Complications like dental caries, coronal abutment fracture, radicular abutment fracture, occlusal interferences, root canal failures and decementations were more significantly associated with crowns than bridges (p=0.000). On the other hand esthetic issues, periapical lesions, periodontal problems, porcelain fractures and metal damage were more commonly associated with bridges (p=0.000). All cases of dental caries reported were associated with acrylic crown and bridges, whereas all coronal abutment fractures were associated with metal prosthesis (p=0.000). A significantly higher number of participants who got their fixed dental prosthesis from other sources i.e. Paramedics, technicians, dental assistants or unqualified dentists had periapical lesions, decementations, esthetic issues and periodontal diseases. This association was found to be statistically significant (p=0.000). Complications associated with fixed dental prosthesis like root canal failures, decementations, periapical lesions and periodontal disease were more significantly associated with prosthesis fabricated by other sources over the period of 5 to 10 years.", "title": "" }, { "docid": "af271bf4b478d6b46d53d9df716d75ee", "text": "The mobile technology is an ever evolving concept. The world has seen various generations of mobile technology be it 1G, 2G, 3G or 4G. The fifth generation of mobile technology i.e. 5G is seen as a futuristic notion that would help in solving the issues that are pertaining in the 4G. In this paper we have discussed various security issues of 4G with respect to Wi-max and long term evolution. These issues are discussed at MAC and physical layer level. The security issues are seen in terms of possible attacks, system vulnerabilities and privacy concerns. 
We have also highlighted how the notions of 5G can be tailored to provide a more secure mobile computing environment. We have considered the futuristic architectural framework for 5G networks in our discussion. The basic concepts and features of the fifth generation technology are explained here. We have also analyzed five pillars of strength for the 5G network security which would work in collaboration with each other to provide a secure mobile computing environment to the user.", "title": "" }, { "docid": "c7d2419eaec21acce9b9dbb3040ed647", "text": "Current text classification systems typically use term stems for representing document content. Ontologies allow the usage of features on a higher semantic level than single words for text classification purposes. In this paper we propose such an enhancement of the classical document representation through concepts extracted from background knowledge. Boosting, a successful machine learning technique is used for classification. Comparative experimental evaluations in three different settings support our approach through consistent improvement of the results. An analysis of the results shows that this improvement is due to two separate effects.", "title": "" }, { "docid": "873689a68ce8b52d6df381081088d48e", "text": "Natural Language Engineering encourages papers reporting research with a clear potential for practical application. Theoretical papers that consider techniques in sufficient detail to provide for practical implementation are also welcomed, as are shorter reports of on-going research, conference reports, comparative discussions of NLE products, and policy-oriented papers examining e.g. funding programmes or market opportunities. All contributions are peer reviewed and the review process is specifically designed to be fast, contributing to the rapid publication of accepted papers.", "title": "" } ]
scidocsrr
d88ecaa64bc7fd5c262e305d1953f7f4
Short Paper: Service-Oriented Sharding for Blockchains
[ { "docid": "6c09932a4747c7e2d15b06720b1c48d9", "text": "A distributed ledger made up of mutually distrusting nodes would allow for a single global database that records the state of deals and obligations between institutions and people. This would eliminate much of the manual, time consuming effort currently required to keep disparate ledgers synchronised with each other. It would also allow for greater levels of code sharing than presently used in the financial industry, reducing the cost of financial services for everyone. We present Corda, a platform which is designed to achieve these goals. This paper provides a high level introduction intended for the general reader. A forthcoming technical white paper elaborates on the design and fundamental architectural decisions.", "title": "" } ]
[ { "docid": "49e1d016e1aae07d5e3ae1ad0e96e662", "text": "Recently, various protocols have been proposed for securely outsourcing database storage to a third party server, ranging from systems with \"full-fledged\" security based on strong cryptographic primitives such as fully homomorphic encryption or oblivious RAM, to more practical implementations based on searchable symmetric encryption or even on deterministic and order-preserving encryption. On the flip side, various attacks have emerged that show that for some of these protocols confidentiality of the data can be compromised, usually given certain auxiliary information. We take a step back and identify a need for a formal understanding of the inherent efficiency/privacy trade-off in outsourced database systems, independent of the details of the system. We propose abstract models that capture secure outsourced storage systems in sufficient generality, and identify two basic sources of leakage, namely access pattern and ommunication volume. We use our models to distinguish certain classes of outsourced database systems that have been proposed, and deduce that all of them exhibit at least one of these leakage sources.\n We then develop generic reconstruction attacks on any system supporting range queries where either access pattern or communication volume is leaked. These attacks are in a rather weak passive adversarial model, where the untrusted server knows only the underlying query distribution. In particular, to perform our attack the server need not have any prior knowledge about the data, and need not know any of the issued queries nor their results. Yet, the server can reconstruct the secret attribute of every record in the database after about $N^4$ queries, where N is the domain size. We provide a matching lower bound showing that our attacks are essentially optimal. Our reconstruction attacks using communication volume apply even to systems based on homomorphic encryption or oblivious RAM in the natural way.\n Finally, we provide experimental results demonstrating the efficacy of our attacks on real datasets with a variety of different features. On all these datasets, after the required number of queries our attacks successfully recovered the secret attributes of every record in at most a few seconds.", "title": "" }, { "docid": "c6338205328828778a2036829f0bbb6c", "text": "In this study, the theory of technology analysis and decomposition of the 3-D (three dimensional) visualization of GIS (Geographic Information System) are analyzed, it divides the 3-D visualization of GIS into virtual reality technology, and it presents situation and development trend of 3-D visualization of GIS. It studies the urban model of 3-D data acquisition and processing, the classification of urban 3-D space information data and summarization of the characteristics of urban 3-D spatial data are made, and the three dimensional terrain data, building plane and building elevation data access, building surface texture are also analyzed. The high resolution satellite remote sensing data processing technology and aviation remote sensing data processing technology is studied, and the data acquisition and processing technology of airborne 3-D imager also are introduced This paper has solved the visualization of 3-D GIS data model and visual problem in the construction of the 3-D terrain and expression of choice of buildings, and it is to find suitable modeling route, and in order to provides a reference basis in realization of 3-D visualization of GIS. 
Visualization of 3-D model of the theory and method are studied in the urban construction, according to the 3D visualization in GIS and it proposed the two kinds of 3-D visualization model of GIS technology.", "title": "" }, { "docid": "7d84e574d2a6349a9fc2669fdbe08bba", "text": "Domain-specific languages (DSLs) provide high-level and domain-specific abstractions that allow expressive and concise algorithm descriptions. Since the description in a DSL hides also the properties of the target hardware, DSLs are a promising path to target different parallel and heterogeneous hardware from the same algorithm description. In theory, the DSL description can capture all characteristics of the algorithm that are required to generate highly efficient parallel implementations. However, most frameworks do not make use of this knowledge and the performance cannot reach that of optimized library implementations. In this article, we present the HIPAcc framework, a DSL and source-to-source compiler for image processing. We show that domain knowledge can be captured in the language and that this knowledge enables us to generate tailored implementations for a given target architecture. Back ends for CUDA, OpenCL, and Renderscript allow us to target discrete graphics processing units (GPUs) as well as mobile, embedded GPUs. Exploiting the captured domain knowledge, we can generate specialized algorithm variants that reach the maximal achievable performance due to the peak memory bandwidth. These implementations outperform state-of-the-art domain-specific languages and libraries significantly.", "title": "" }, { "docid": "dde4e45fd477808d40b3b06599d361ff", "text": "In this paper, we present the basic features of the flight control of the SkySails towing kite system. After introducing the coordinate definitions and the basic system dynamics, we introduce a novel model used for controller design and justify its main dynamics with results from system identification based on numerous sea trials. We then present the controller design, which we successfully use for operational flights for several years. Finally, we explain the generation of dynamical flight patterns.", "title": "" }, { "docid": "48a45f03f31d8fc0daede6603f3b693a", "text": "This paper presents GelClust, a new software that is designed for processing gel electrophoresis images and generating the corresponding phylogenetic trees. Unlike the most of commercial and non-commercial related softwares, we found that GelClust is very user-friendly and guides the user from image toward dendrogram through seven simple steps. Furthermore, the software, which is implemented in C# programming language under Windows operating system, is more accurate than similar software regarding image processing and is the only software able to detect and correct gel 'smile' effects completely automatically. These claims are supported with experiments.", "title": "" }, { "docid": "cf702356b3a8895f5a636cc05597b52a", "text": "This paper investigates non-fragile exponential <inline-formula> <tex-math notation=\"LaTeX\">$ {H_\\infty }$ </tex-math></inline-formula> control problems for a class of uncertain nonlinear networked control systems (NCSs) with randomly occurring information, such as the controller gain fluctuation and the uncertain nonlinearity, and short time-varying delay via output feedback controller. 
Using the nominal point technique, the NCS is converted into a novel time-varying discrete time model with norm-bounded uncertain parameters for reducing the conservativeness. Based on linear matrix inequality framework and output feedback control strategy, design methods for general and optimal non-fragile exponential <inline-formula> <tex-math notation=\"LaTeX\">$ {H_\\infty }$ </tex-math></inline-formula> controllers are presented. Meanwhile, these control laws can still be applied to linear NCSs and general fragile control NCSs while introducing random variables. Finally, three examples verify the correctness of the presented scheme.", "title": "" }, { "docid": "db6904a5aa2196dedf37b279e04b3ea8", "text": "The use of animation and multimedia for learning is now further extended by the provision of entire Virtual Reality Learning Environments (VRLE). This highlights a shift in Web-based learning from a conventional multimedia to a more immersive, interactive, intuitive and exciting VR learning environment. VRLEs simulate the real world through the application of 3D models that initiates interaction, immersion and trigger the imagination of the learner. The question of good pedagogy and use of technology innovations comes into focus once again. Educators attempt to find theoretical guidelines or instructional principles that could assist them in developing and applying a novel VR learning environment intelligently. This paper introduces the educational use of Web-based 3D technologies and highlights in particular VR features. It then identifies constructivist learning as the pedagogical engine driving the construction of VRLE and discusses five constructivist learning approaches. Furthermore, the authors provide two case studies to investigate VRLEs for learning purposes. The authors conclude with formulating some guidelines for the effective use of VRLEs, including discussion of the limitations and implications for the future study of VRLEs. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "9915a09a87126626633088cf4d6b9633", "text": "This paper introduces ICET, a new algorithm for cost-sensitive classification. ICET uses a genetic algorithm to evolve a population of biases for a decision tree induction algorithm. The fitness function of the genetic algorithm is the average cost of classification when using the decision tree, including both the costs of tests (features, measurements) and the costs of classification errors. ICET is compared here with three other algorithms for cost-sensitive classification — EG2, CS-ID3, and IDX — and also with C4.5, which classifies without regard to cost. The five algorithms are evaluated empirically on five realworld medical datasets. Three sets of experiments are performed. The first set examines the baseline performance of the five algorithms on the five datasets and establishes that ICET performs significantly better than its competitors. The second set tests the robustness of ICET under a variety of conditions and shows that ICET maintains its advantage. The third set looks at ICET’s search in bias space and discovers a way to improve the search.", "title": "" }, { "docid": "08585ddb6bfad07ce04cf85bf28f30ba", "text": "Users of search engines interact with the system using different size and type of queries. Current search engines perform well with keyword queries but are not for verbose queries which are too long, detailed, or are expressed in more words than are needed. 
The detection of verbose queries may help search engines to get pertinent results. To accomplish this goal it is important to make some appropriate preprocessing techniques in order to improve classifiers effectiveness. In this paper, we propose to use BabelNet as knowledge base in the preprocessing step and then make a comparative study between different algorithms to classify queries into two classes, verbose or succinct. Our Experimental results are conducted using the TREC Robust Track as data set and different classifiers such as, decision trees probabilistic methods, rule-based methods, instance-based methods, SVM and neural networks.", "title": "" }, { "docid": "0d40f7ddda91227fab3cc62a4ca2847c", "text": "Coherent texts are not just simple sequences of clauses and sentences, but rather complex artifacts that have highly elaborate rhetorical structure. This paper explores the extent to which well-formed rhetorical structures can be automatically derived by means of surface-form-based algorithms. These algorithms identify discourse usages of cue phrases and break sentences into clauses, hypothesize rhetorical relations that hold among textual units, and produce valid rhetorical structure trees for unrestricted natural language texts. The algorithms are empirically grounded in a corpus analysis of cue phrases and rely on a first-order formalization of rhetorical structure trees. The algorithms are evaluated both intrinsically and extrinsically. The intrinsic evaluation assesses the resemblance between automatically and manually constructed rhetorical structure trees. The extrinsic evaluation shows that automatically derived rhetorical structures can be successfully exploited in the context of text summarization.", "title": "" }, { "docid": "5455e7d53e6de4cbe97cbcdf6eea9806", "text": "OBJECTIVE\nTo evaluate the clinical and radiological results in the surgical treatment of moderate and severe hallux valgus by performing percutaneous double osteotomy.\n\n\nMATERIAL AND METHOD\nA retrospective study was conducted on 45 feet of 42 patients diagnosed with moderate-severe hallux valgus, operated on in a single centre and by the same surgeon from May 2009 to March 2013. Two patients were lost to follow-up. Clinical and radiological results were recorded.\n\n\nRESULTS\nAn improvement from 48.14 ± 4.79 points to 91.28 ± 8.73 points was registered using the American Orthopedic Foot and Ankle Society (AOFAS) scale. A radiological decrease from 16.88 ± 2.01 to 8.18 ± 3.23 was observed in the intermetatarsal angle, and from 40.02 ± 6.50 to 10.51 ± 6.55 in hallux valgus angle. There was one case of hallux varus, one case of non-union, a regional pain syndrome type I, an infection that resolved with antibiotics, and a case of loosening of the osteosynthesis that required an open surgical refixation.\n\n\nDISCUSSION\nPercutaneous distal osteotomy of the first metatarsal when performed as an isolated procedure, show limitations when dealing with cases of moderate and severe hallux valgus. 
The described technique adds the advantages of minimally invasive surgery by expanding applications to severe deformities.\n\n\nCONCLUSIONS\nPercutaneous double osteotomy is a reproducible technique for correcting severe deformities, with good clinical and radiological results with a complication rate similar to other techniques with the advantages of shorter surgical times and less soft tissue damage.", "title": "" }, { "docid": "53049f1514bc03368b8c2a0b18518100", "text": "The Protein Data Bank (PDB; http://www.rcsb.org/pdb/ ) is the single worldwide archive of structural data of biological macromolecules. This paper describes the goals of the PDB, the systems in place for data deposition and access, how to obtain further information, and near-term plans for the future development of the resource.", "title": "" }, { "docid": "8de1acc08d32f8840de8375078f2369a", "text": "Widespread acceptance of virtual reality has been partially handicapped by the inability of current systems to accommodate multiple viewpoints, thereby limiting their appeal for collaborative applications. We are exploring the ability to utilize passive, untracked participants in a powerwall environment. These participants see the same image as the active, immersive participant. This does present the passive user with a varying viewpoint that does not correspond to their current position. We demonstrate the impact this will have on the perceived image and show that human psychology is actually well adapted to compensating for what, on the surface, would seem to be a very drastic distortion. We present some initial guidelines for system design that minimize the negative impact of passive participation, allowing two or more collaborative participants. We then outline future experimentation to measure user compensation for these distorted viewpoints.", "title": "" }, { "docid": "03f2ba940cdde68e848d91bacbbb5f68", "text": "The glomerular basement membrane (GBM) is the central, non-cellular layer of the glomerular filtration barrier that is situated between the two cellular components—fenestrated endothelial cells and interdigitated podocyte foot processes. The GBM is composed primarily of four types of extracellular matrix macromolecule—laminin-521, type IV collagen α3α4α5, the heparan sulphate proteoglycan agrin, and nidogen—which produce an interwoven meshwork thought to impart both size-selective and charge-selective properties. Although the composition and biochemical nature of the GBM have been known for a long time, the functional importance of the GBM versus that of podocytes and endothelial cells for establishing the glomerular filtration barrier to albumin is still debated. Together with findings from genetic studies in mice, the discoveries of four human mutations affecting GBM components in two inherited kidney disorders, Alport syndrome and Pierson syndrome, support essential roles for the GBM in glomerular permselectivity. Here, we explain in detail the proposed mechanisms whereby the GBM can serve as the major albumin barrier and discuss possible approaches to circumvent GBM defects associated with loss of permselectivity.", "title": "" }, { "docid": "f489e2c0d6d733c9e2dbbdb1d7355091", "text": "In many signal processing applications, the signals provided by the sensors are mixtures of many sources. The problem of separation of sources is to extract the original signals from these mixtures. A new algorithm, based on ideas of backpropagation learning, is proposed for source separation. 
No a priori information on the sources themselves is required, and the algorithm can deal even with non-linear mixtures. After a short overview of previous works in that field, we will describe the proposed algorithm. Then, some experimental results will be discussed.", "title": "" }, { "docid": "98d0a45eb8da2fa8541055014db6e238", "text": "OBJECTIVE\nThe Multicultural Quality of Life Index is a concise instrument for comprehensive, culture-informed, and self-rated assessment of health-related quality of life. It is composed of 10 items (from physical well-being to global perception of quality of life). Each item is rated on a 10-point scale. The objective was to evaluate the reliability (test-retest), internal structure, discriminant validity, and feasibility of the Multicultural Quality of Life Index in Lima, Peru.\n\n\nMETHOD\nThe reliability was studied in general medical patients (n = 30) hospitalized in a general medical ward. The Multicultural Quality of Life Index was administered in two occasions and the correlation coefficients (\"r\") between both interviews were calculated. Its discriminant validity was studied statistically comparing the average score in a group of patients with AIDS (with presumed lower quality of life, n = 50) and the average score in a group of dentistry students and professionals (with presumed higher quality of life, n = 50). Data on its applicability and internal structure were compiled from the 130 subjects.\n\n\nRESULTS\nA high reliability correlation coefficient (r = 0.94) was found for the total score. The discriminant validity study found a significant difference between mean total score in the samples of presumed higher (7.66) and lower (5.32) quality of life. The average time to complete the Multicultural Quality of Life Index was less than 4 minutes and was reported by the majority of subjects as easily applicable. A high Cronbach's α (0.88) was also documented.\n\n\nCONCLUSIONS\nThe results reported that the Multicultural Quality of Life Index is reliable, has a high internal consistency, is capable of discriminating groups of presumed different quality of life levels, is quite efficient, and easy to use.", "title": "" }, { "docid": "0305918adb88b4ca41b9257a556397a7", "text": "We present the development and evaluation of a semantic analysis task that lies at the intersection of two very trendy lines of research in contemporary computational linguistics: (i) sentiment analysis, and (ii) natural language processing of social media text. The task was part of SemEval, the International Workshop on Semantic Evaluation, a semantic evaluation forum previously known as SensEval.
The task ran in 2013 and 2014, attracting the highest number of participating teams at SemEval in both years, and there is an ongoing edition in 2015. The task included the creation of a large contextual and message-level polarity corpus consisting of tweets, SMS messages, LiveJournal messages, and a special test set of sarcastic tweets. The evaluation attracted 44 teams in 2013 and 46 in 2014, who used a variety of approaches. The best teams were able to outperform several baselines by sizable margins with improvement across the two years the task has been run. We hope that the long-lasting role of this task and the accompanying datasets will be to serve as a test bed for comparing different approaches, thus facilitating research.", "title": "" }, { "docid": "4c7c4e56dc0831c282e41bfd31c7f3c7", "text": "Brown et al. (1993) introduced five unsupervised, word-based, generative and statistical models, popularized as IBM models, for translating a sentence into another. These models introduce alignments which maps each word in the source language to a word in the target language. In these models there is a crucial independence assumption that all lexical entries are seen independently of one another. We hypothesize that this independence assumption might be too strong, especially for languages with a large vocabulary, for example because of rich morphology. We investigate this independence assumption by implementing IBM models 1 and 2, the least complex IBM models, and also implementing a feature-rich version of these models. Through features, similarities between lexical entries in syntax and possibly even meaning can be captured. This feature-richness, however, requires a change in parameterization of the IBM model. We follow the approach of Berg-Kirkpatrick et al. (2010) and parameterize our IBM model with a log-linear parametric form. Finally, we compare the IBM models with their log-linear variants on word alignment. We evaluate our models on the quality of word alignments with two languages with a richer vocabulary than English. Our results do not fully support our hypothesis yet, but they are promising. We believe the hypothesis can be confirmed, however, there are still many technical challenges left before the log-linear variants can become competitive with the IBM models in terms of quality and speed.", "title": "" }, { "docid": "d95c080140dd50d8131bc7d43a4358e2", "text": "The link between affect, defined as the capacity for sentimental arousal on the part of a message, and virality, defined as the probability that it be sent along, is of significant theoretical and practical importance, e.g. for viral marketing. The basic measure of virality in Twitter is the probability of retweet and we are interested in which dimensions of the content of a tweet leads to retweeting. We hypothesize that negative news content is more likely to be retweeted, while for non-news tweets positive sentiments support virality. To test the hypothesis we analyze three corpora: A complete sample of tweets about the COP15 climate summit, a random sample of tweets, and a general text corpus including news. The latter allows us to train a classifier that can distinguish tweets that carry news and non-news information. We present evidence that negative sentiment enhances virality in the news segment, but not in the non-news segment. 
Our findings may be summarized ’If you want to be cited: Sweet talk your friends or serve bad news to the public’.", "title": "" }, { "docid": "21df2b20c9ecd6831788e00970b3ca79", "text": "Enterprises today face several challenges when hosting line-of-business applications in the cloud. Central to many of these challenges is the limited support for control over cloud network functions, such as, the ability to ensure security, performance guarantees or isolation, and to flexibly interpose middleboxes in application deployments. In this paper, we present the design and implementation of a novel cloud networking system called CloudNaaS. Customers can leverage CloudNaaS to deploy applications augmented with a rich and extensible set of network functions such as virtual network isolation, custom addressing, service differentiation, and flexible interposition of various middleboxes. CloudNaaS primitives are directly implemented within the cloud infrastructure itself using high-speed programmable network elements, making CloudNaaS highly efficient. We evaluate an OpenFlow-based prototype of CloudNaaS and find that it can be used to instantiate a variety of network functions in the cloud, and that its performance is robust even in the face of large numbers of provisioned services and link/device failures.", "title": "" } ]
scidocsrr
f8da14a9bbb705e37e93285a0b1f93ea
RankMBPR: Rank-Aware Mutual Bayesian Personalized Ranking for Item Recommendation
[ { "docid": "83a60460228ecc780848e40ab5286a31", "text": "A ranking approach, ListRank-MF, is proposed for collaborative filtering that combines a list-wise learning-to-rank algorithm with matrix factorization (MF). A ranked list of items is obtained by minimizing a loss function that represents the uncertainty between training lists and output lists produced by a MF ranking model. ListRank-MF enjoys the advantage of low complexity and is analytically shown to be linear with the number of observed ratings for a given user-item matrix. We also experimentally demonstrate the effectiveness of ListRank-MF by comparing its performance with that of item-based collaborative recommendation and a related state-of-the-art collaborative ranking approach (CoFiRank).", "title": "" }, { "docid": "d9615510bb6cf2cb2d8089be402c193c", "text": "Tagging plays an important role in many recent websites. Recommender systems can help to suggest a user the tags he might want to use for tagging a specific item. Factorization models based on the Tucker Decomposition (TD) model have been shown to provide high quality tag recommendations outperforming other approaches like PageRank, FolkRank, collaborative filtering, etc. The problem with TD models is the cubic core tensor resulting in a cubic runtime in the factorization dimension for prediction and learning.\n In this paper, we present the factorization model PITF (Pairwise Interaction Tensor Factorization) which is a special case of the TD model with linear runtime both for learning and prediction. PITF explicitly models the pairwise interactions between users, items and tags. The model is learned with an adaption of the Bayesian personalized ranking (BPR) criterion which originally has been introduced for item recommendation. Empirically, we show on real world datasets that this model outperforms TD largely in runtime and even can achieve better prediction quality. Besides our lab experiments, PITF has also won the ECML/PKDD Discovery Challenge 2009 for graph-based tag recommendation.", "title": "" }, { "docid": "d78acb79ccd229af7529dae1408dea6a", "text": "Making recommendations by learning to rank is becoming an increasingly studied area. Approaches that use stochastic gradient descent scale well to large collaborative filtering datasets, and it has been shown how to approximately optimize the mean rank, or more recently the top of the ranked list. In this work we present a family of loss functions, the k-order statistic loss, that includes these previous approaches as special cases, and also derives new ones that we show to be useful. In particular, we present (i) a new variant that more accurately optimizes precision at k, and (ii) a novel procedure of optimizing the mean maximum rank, which we hypothesize is useful to more accurately cover all of the user's tastes. The general approach works by sampling N positive items, ordering them by the score assigned by the model, and then weighting the example as a function of this ordered set. Our approach is studied in two real-world systems, Google Music and YouTube video recommendations, where we obtain improvements for computable metrics, and in the YouTube case, increased user click through and watch duration when deployed live on www.youtube.com.", "title": "" } ]
[ { "docid": "351e2afb110d9304b5d534be45bf2fba", "text": "BACKGROUND\nThe Lyon Diet Heart Study is a randomized secondary prevention trial aimed at testing whether a Mediterranean-type diet may reduce the rate of recurrence after a first myocardial infarction. An intermediate analysis showed a striking protective effect after 27 months of follow-up. This report presents results of an extended follow-up (with a mean of 46 months per patient) and deals with the relationships of dietary patterns and traditional risk factors with recurrence.\n\n\nMETHODS AND RESULTS\nThree composite outcomes (COs) combining either cardiac death and nonfatal myocardial infarction (CO 1), or the preceding plus major secondary end points (unstable angina, stroke, heart failure, pulmonary or peripheral embolism) (CO 2), or the preceding plus minor events requiring hospital admission (CO 3) were studied. In the Mediterranean diet group, CO 1 was reduced (14 events versus 44 in the prudent Western-type diet group, P=0.0001), as were CO 2 (27 events versus 90, P=0.0001) and CO 3 (95 events versus 180, P=0. 0002). Adjusted risk ratios ranged from 0.28 to 0.53. Among the traditional risk factors, total cholesterol (1 mmol/L being associated with an increased risk of 18% to 28%), systolic blood pressure (1 mm Hg being associated with an increased risk of 1% to 2%), leukocyte count (adjusted risk ratios ranging from 1.64 to 2.86 with count >9x10(9)/L), female sex (adjusted risk ratios, 0.27 to 0. 46), and aspirin use (adjusted risk ratios, 0.59 to 0.82) were each significantly and independently associated with recurrence.\n\n\nCONCLUSIONS\nThe protective effect of the Mediterranean dietary pattern was maintained up to 4 years after the first infarction, confirming previous intermediate analyses. Major traditional risk factors, such as high blood cholesterol and blood pressure, were shown to be independent and joint predictors of recurrence, indicating that the Mediterranean dietary pattern did not alter, at least qualitatively, the usual relationships between major risk factors and recurrence. Thus, a comprehensive strategy to decrease cardiovascular morbidity and mortality should include primarily a cardioprotective diet. It should be associated with other (pharmacological?) means aimed at reducing modifiable risk factors. Further trials combining the 2 approaches are warranted.", "title": "" }, { "docid": "4f747c2fb562be4608d1f97ead32e00b", "text": "With rapid development of the Internet, the web contents become huge. Most of the websites are publicly available and anyone can access the contents everywhere such as workplace, home and even schools. Nevertheless, not all the web contents are appropriate for all users, especially children. An example of these contents is pornography images which should be restricted to certain age group. Besides, these images are not safe for work (NSFW) in which employees should not be seen accessing such contents. Recently, convolutional neural networks have been successfully applied to many computer vision problems. Inspired by these successes, we propose a mixture of convolutional neural networks for adult content recognition. Unlike other works, our method is formulated on a weighted sum of multiple deep neural network models. The weights of each CNN models are expressed as a linear regression problem learnt using Ordinary Least Squares (OLS). 
Experimental results demonstrate that the proposed model outperforms both single CNN model and the average sum of CNN models in adult content recognition.", "title": "" }, { "docid": "da3e4903974879868b87b94d7cc0bf21", "text": "INTRODUCTION\nThe existence of maternal health service does not guarantee its use by women; neither does the use of maternal health service guarantee optimal outcomes for women. The World Health Organization recommends monitoring and evaluation of maternal satisfaction to improve the quality and efficiency of health care during childbirth. Thus, this study aimed at assessing maternal satisfaction on delivery service and factors associated with it.\n\n\nMETHODS\nCommunity based cross-sectional study was conducted in Debre Markos town from March to April 2014. Systematic random sampling technique were used to select 398 mothers who gave birth within one year. The satisfaction of mothers was measured using 19 questions which were adopted from Donabedian quality assessment framework. Binary logistic regression was fitted to identify independent predictors.\n\n\nRESULT\nAmong mothers, the overall satisfaction on delivery service was found to be 318 (81.7%). Having plan to deliver at health institution (AOR = 3.30, 95% CI: 1.38-7.9) and laboring time of less than six hours (AOR = 4.03, 95% CI: 1.66-9.79) were positively associated with maternal satisfaction on delivery service. Those mothers who gave birth using spontaneous vaginal delivery (AOR = 0.11, 95% CI: 0.023-0.51) were inversely related to maternal satisfaction on delivery service.\n\n\nCONCLUSION\nThis study revealed that the overall satisfaction of mothers on delivery service was found to be suboptimal. Reasons for delivery visit, duration of labor, and mode of delivery are independent predictors of maternal satisfaction. Thus, there is a need of an intervention on the independent predictors.", "title": "" }, { "docid": "a75a1d34546faa135f74aa5e6142de05", "text": "Boosting is a popular way to derive powerful learners from simpler hypothesis classes. Following previous work (Mason et al., 1999; Friedman, 2000) on general boosting frameworks, we analyze gradient-based descent algorithms for boosting with respect to any convex objective and introduce a new measure of weak learner performance into this setting which generalizes existing work. We present the weak to strong learning guarantees for the existing gradient boosting work for strongly-smooth, strongly-convex objectives under this new measure of performance, and also demonstrate that this work fails for non-smooth objectives. To address this issue, we present new algorithms which extend this boosting approach to arbitrary convex loss functions and give corresponding weak to strong convergence results. In addition, we demonstrate experimental results that support our analysis and demonstrate the need for the new algorithms we present.", "title": "" }, { "docid": "fba0ff24acbe07e1204b5fe4c492ab72", "text": "To ensure high quality software, it is crucial that non‐functional requirements (NFRs) are well specified and thoroughly tested in parallel with functional requirements (FRs). Nevertheless, in requirement specification the focus is mainly on FRs, even though NFRs have a critical role in the success of software projects. This study presents a systematic literature review of the NFR specification in order to identify the current state of the art and needs for future research. 
The systematic review summarizes the 51 relevant papers found and discusses them within seven major sub categories with “combination of other approaches” being the one with most prior results.", "title": "" }, { "docid": "33cd162dc2c0132dbd4153775a569c5d", "text": "The question whether preemptive systems are better than non-preemptive systems has been debated for a long time, but only partial answers have been provided in the real-time literature and still some issues remain open. In fact, each approach has advantages and disadvantages, and no one dominates the other when both predictability and efficiency have to be taken into account in the system design. In particular, limiting preemptions allows increasing program locality, making timing analysis more predictable with respect to the fully preemptive case. In this paper, we integrate the features of both preemptive and non-preemptive scheduling by considering that each task can switch to non-preemptive mode, at any time, for a bounded interval. Three methods (with different complexity and performance) are presented to calculate the longest non-preemptive interval that can be executed by each task, under fixed priorities, without degrading the schedulability of the task set, with respect to the fully preemptive case. The methods are also compared by simulations to evaluate their effectiveness in reducing the number of preemptions.", "title": "" }, { "docid": "ffffbbd82482e39a1a32bd1c5848a861", "text": "For a sustainable integration of wind power into the electricity grid, precise and robust predictions are required. With increasing installed capacity and changing energy markets, there is a growing demand for short-term predictions. Machine learning methods can be used as a purely data-driven, spatio-temporal prediction model that yields better results than traditional physical models based on weather simulations. However, there are two big challenges when applying machine learning techniques to the domain of wind power predictions. First, when applying state-of-the-art algorithms to big training data sets, the required computation times may increase to an unacceptable level. Second, the prediction performance and reliability have to be improved to cope with the requirements of the energy markets. This thesis proposes a robust and practical prediction framework based on heterogeneous machine learning ensembles. Ensemble models combine the predictions of numerous and preferably diverse models to reduce the prediction error. First, homogeneous ensemble regressors that employ a single base algorithm are analyzed. Further, the construction of heterogeneous ensembles is proposed. These models employ multiple base algorithms and benefit from a gain of diversity among the combined predictors. A comprehensive experimental evaluation shows that the combination of different techniques to an ensemble outperforms state-ofthe-art prediction models while requiring a shorter runtime. Finally, a framework for model selection based on evolutionary multi-objective optimization is presented. The method offers an efficient and comfortable balancing of a preferably low prediction error and a moderate computational cost.", "title": "" }, { "docid": "aa55e655c7fa8c86d189d03c01d5db87", "text": "Best practice reference models like COBIT, ITIL, and CMMI offer methodical support for the various tasks of IT management and IT governance. 
Observations reveal that the ways of using these models as well as the motivations and further aspects of their application differ significantly. Rather the models are used in individual ways due to individual interpretations. From an academic point of view we can state, that how these models are actually used as well as the motivations using them is not well understood. We develop a framework in order to structure different dimensions and modes of reference model application in practice. The development is based on expert interviews and a literature review. Hence we use design oriented and qualitative research methods to develop an artifact, a ‘framework of reference model application’. This framework development is the first step in a larger research program which combines different methods of research. The first goal is to deepen insight and improve understanding. In future research, the framework will be used to survey and analyze reference model application. The authors assume that “typical” application patterns exist beyond individual dimensions of application. The framework developed provides an opportunity of a systematically collection of data thereon. Furthermore, the so far limited knowledge of reference model application complicates their implementation as well as their use. Thus, detailed knowledge of different application patterns is required for effective support of enterprises using reference models. We assume that the deeper understanding of different patterns will support method development for implementation and use.", "title": "" }, { "docid": "30bc7923529eec5ac7d62f91de804f8e", "text": "In this paper, we consider the scene parsing problem and propose a novel MultiPath Feedback recurrent neural network (MPF-RNN) for parsing scene images. MPF-RNN can enhance the capability of RNNs in modeling long-range context information at multiple levels and better distinguish pixels that are easy to confuse. Different from feedforward CNNs and RNNs with only single feedback, MPFRNN propagates the contextual features learned at top layer through weighted recurrent connections to multiple bottom layers to help them learn better features with such “hindsight”. For better training MPF-RNN, we propose a new strategy that considers accumulative loss at multiple recurrent steps to improve performance of the MPF-RNN on parsing small objects. With these two novel components, MPF-RNN has achieved significant improvement over strong baselines (VGG16 and Res101) on five challenging scene parsing benchmarks, including traditional SiftFlow, Barcelona, CamVid, Stanford Background as well as the recently released large-scale ADE20K.", "title": "" }, { "docid": "37825cd0f6ae399204a392e3b32a667b", "text": "Abduction is inference to the best explanation. Abduction has long been studied intensively in a wide range of contexts, from artificial intelligence research to cognitive science. While recent advances in large-scale knowledge acquisition warrant applying abduction with large knowledge bases to real-life problems, as of yet no existing approach to abduction has achieved both the efficiency and formal expressiveness necessary to be a practical solution for large-scale reasoning on real-life problems. 
The contributions of our work are the following: (i) we reformulate abduction as an Integer Linear Programming (ILP) optimization problem, providing full support for first-order predicate logic (FOPL); (ii) we employ Cutting Plane Inference, which is an iterative optimization strategy developed in Operations Research for making abductive reasoning in full-fledged FOPL tractable, showing its efficiency on a real-life dataset; (iii) the abductive inference engine presented in this paper is made publicly available.", "title": "" }, { "docid": "b06a22f8d9eb96db06f22544d39a917a", "text": "Attaching meaning to arbitrary symbols (i.e. words) is a complex and lengthy process. In the case of numbers, it was previously suggested that this process is grounded on two early pre-verbal systems for numerical quantification: the approximate number system (ANS or 'analogue magnitude'), and the object tracking system (OTS or 'parallel individuation'), which children are equipped with before symbolic learning. Each system is based on dedicated neural circuits, characterized by specific computational limits, and each undergoes a separate developmental trajectory. Here, I review the available cognitive and neuroscientific data and argue that the available evidence is more consistent with a crucial role for the ANS, rather than for the OTS, in the acquisition of abstract numerical concepts that are uniquely human.", "title": "" }, { "docid": "a889235a17e8688773ef2dd242bc4a15", "text": "Software for safety-critical systems has to deal with the hazards identified by safety analysis in order to make the system safe, risk-free and fail-safe. Software safety is a composite of many factors. Problem statement: Existing software quality models like McCall’s and Boehm’s and ISO 9126 were inadequate in addressing the software safety issues of real time safety-critical embedded systems. At present there does not exist any standard framework that comprehensively addresses the Factors, Criteria and Metrics (FCM) approach of the quality models in respect of software safety. Approach: We proposed a new model for software safety based on the McCall’s software quality model that specifically identifies the criteria corresponding to software safety in safety critical applications. The criteria in the proposed software safety model pertains to system hazard analysis, completeness of requirements, identification of software-related safety-critical requirements, safety-constraints based design, run-time issues management and software safety-critical testing. Results: This model was applied to a prototype safety-critical software-based Railroad Crossing Control System (RCCS). The results showed that all critical operations were safe and risk-free, capable of handling contingency situations. Conclusion: Development of a safety-critical system based on our proposed software safety model significantly enhanced the safe operation of the overall system.", "title": "" }, { "docid": "02cd879a83070af9842999c7215e7f92", "text": "Automatic genre classification of music is an important topic in Music Information Retrieval with many interesting applications. A solution to genre classification would allow for machine tagging of songs, which could serve as metadata for building song recommenders. In this paper, we investigate the following question: Given a song, can we automatically detect its genre? We look at three characteristics of a song to determine its genre: timbre, chord transitions, and lyrics.
For each method, we develop multiple data models and apply supervised machine learning algorithms including k-means, k-NN, multi-class SVM and Naive Bayes. We are able to accurately classify 65− 75% of the songs from each genre in a 5-genre classification problem between Rock, Jazz, Pop, Hip-Hop, and Metal music.", "title": "" }, { "docid": "2ccbe363a448e796ad7a93d819d12444", "text": "With the ever-growing performance gap between memory systems and disks, and rapidly improving CPU performance, virtual memory (VM) management becomes increasingly important for overall system performance. However, one of its critical components, the page replacement policy, is still dominated by CLOCK, a replacement policy developed almost 40 years ago. While pure LRU has an unaffordable cost in VM, CLOCK simulates the LRU replacement algorithm with a low cost acceptable in VM management. Over the last three decades, the inability of LRU as well as CLOCK to handle weak locality accesses has become increasingly serious, and an effective fix becomes increasingly desirable. Inspired by our I/O buffer cache replacement algorithm, LIRS [13], we propose an improved CLOCK replacement policy, called CLOCK-Pro. By additionally keeping track of a limited number of replaced pages, CLOCK-Pro works in a similar fashion as CLOCK with a VM-affordable cost. Furthermore, it brings all the much-needed performance advantages from LIRS into CLOCK. Measurements from an implementation of CLOCK-Pro in Linux Kernel 2.4.21 show that the execution times of some commonly used programs can be reduced by up to 47%.", "title": "" }, { "docid": "d8d91ea6fe6ce56a357a9b716bdfe849", "text": "Over the last years, automatic music classification has become a standard benchmark problem in the machine learning community. This is partly due to its inherent difficulty, and also to the impact that a fully automated classification system can have in a commercial application. In this paper we test the efficiency of a relatively new learning tool, Extreme Learning Machines (ELM), for several classification tasks on publicly available song datasets. ELM is gaining increasing attention, due to its versatility and speed in adapting its internal parameters. Since both of these attributes are fundamental in music classification, ELM provides a good alternative to standard learning models. Our results support this claim, showing a sustained gain of ELM over a feedforward neural network architecture. In particular, ELM provides a great decrease in computational training time, and has always higher or comparable results in terms of efficiency.", "title": "" }, { "docid": "3101cfeb496db290c82b6c6650cb4a02", "text": "Autophagy, a catabolic pathway that delivers cellular components to lysosomes for degradation, can be activated by stressful conditions such as nutrient starvation and endoplasmic reticulum (ER) stress. We report that thapsigargin, an ER stressor widely used to induce autophagy, in fact blocks autophagy. Thapsigargin does not affect autophagosome formation but leads to accumulation of mature autophagosomes by blocking autophagosome fusion with the endocytic system. Strikingly, thapsigargin has no effect on endocytosis-mediated degradation of epidermal growth factor receptor. Molecularly, while both Rab7 and Vps16 are essential regulatory components for endocytic fusion with lysosomes, we found that Rab7 but not Vps16 is required for complete autophagy flux, and that thapsigargin blocks recruitment of Rab7 to autophagosomes. 
Therefore, autophagosomal-lysosomal fusion must be governed by a distinct molecular mechanism compared to general endocytic fusion.", "title": "" }, { "docid": "82031adaa42f7043a6bf5e44bfa72597", "text": "In this paper, we study the problem of non-Bayesian learning over social networks by taking an axiomatic approach. As our main behavioral assumption, we postulate that agents follow social learning rules that satisfy imperfect recall, according to which they treat the current beliefs of their neighbors as sufficient statistics for all the information available to them. We establish that as long as imperfect recall represents the only point of departure from Bayesian rationality, agents’ social learning rules take a log-linear form. Our approach also enables us to provide a taxonomy of behavioral assumptions that underpin various non-Bayesian models of learning, including the canonical model of DeGroot. We then show that for a fairly large class of learning rules, the form of bounded rationality represented by imperfect recall is not an impediment to asymptotic learning, as long as agents assign weights of equal orders of magnitude to every independent piece of information. Finally, we show how the dispersion of information among different individuals in the social network determines the rate of learning.", "title": "" }, { "docid": "a3aad879ca5f7e7683c1377e079c4726", "text": "Semantic word embeddings represent the meaning of a word via a vector, and are created by diverse methods including Vector Space Methods (VSMs) such as Latent Semantic Analysis (LSA), generative text models such as topic models, matrix factorization, neural nets, and energy-based models. Many of these use nonlinear operations on co-occurrence statistics, such as computing Pairwise Mutual Information (PMI). Some use hand-tuned hyperparameters and term reweighting. Often a generative model can help provide theoretical insight into such modeling choices, but there appears to be no such model to “explain” the above nonlinear models. For example, we know of no generative model for which the correct solution is the usual (dimension-restricted) PMI model. This paper gives a new generative model, a dynamic version of the loglinear topic model of Mnih and Hinton (2007), as well as a pair of training objectives called RAND-WALK to compute word embeddings. The methodological novelty is to use the prior to compute closed form expressions for word statistics. These provide an explanation for the PMI model and other recent models, as well as hyperparameter choices. Experimental support is provided for the generative model assumptions, the most important of which is that latent word vectors are spatially isotropic. The model also helps explain why linear algebraic structure arises in low-dimensional semantic embeddings. Such structure has been used to solve analogy tasks by Mikolov et al. (2013a) and many subsequent papers. This theoretical explanation is to give an improved analogy solving method that improves success rates on analogy solving by a few percent.", "title": "" }, { "docid": "ec5bdd52fa05364923cb12b3ff25a49f", "text": "A system to prevent subscription fraud in fixed telecommunications with high impact on long-distance carriers is proposed. The system consists of a classification module and a prediction module. The classification module classifies subscribers according to their previous historical behavior into four different categories: subscription fraudulent, otherwise fraudulent, insolvent and normal. 
The prediction module allows us to identify potential fraudulent customers at the time of subscription. The classification module was implemented using fuzzy rules. It was applied to a database containing information of over 10,000 real subscribers of a major telecom company in Chile. In this database, a subscription fraud prevalence of 2.2% was found. The prediction module was implemented as a multilayer perceptron neural network. It was able to identify 56.2% of the true fraudsters, screening only 3.5% of all the subscribers in the test set. This study shows the feasibility of significantly preventing subscription fraud in telecommunications by analyzing the application information and the customer antecedents at the time of application. q 2005 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "cb8dbf14b79edd2a3ee045ad08230a30", "text": "Observational data suggest a link between menaquinone (MK, vitamin K2) intake and cardiovascular (CV) health. However, MK intervention trials with vascular endpoints are lacking. We investigated long-term effects of MK-7 (180 µg MenaQ7/day) supplementation on arterial stiffness in a double-blind, placebo-controlled trial. Healthy postmenopausal women (n=244) received either placebo (n=124) or MK-7 (n=120) for three years. Indices of local carotid stiffness (intima-media thickness IMT, Diameter end-diastole and Distension) were measured by echotracking. Regional aortic stiffness (carotid-femoral and carotid-radial Pulse Wave Velocity, cfPWV and crPWV, respectively) was measured using mechanotransducers. Circulating desphospho-uncarboxylated matrix Gla-protein (dp-ucMGP) as well as acute phase markers Interleukin-6 (IL-6), high-sensitive C-reactive protein (hsCRP), tumour necrosis factor-α (TNF-α) and markers for endothelial dysfunction Vascular Cell Adhesion Molecule (VCAM), E-selectin, and Advanced Glycation Endproducts (AGEs) were measured. At baseline dp-ucMGP was associated with IMT, Diameter, cfPWV and with the mean z-scores of acute phase markers (APMscore) and of markers for endothelial dysfunction (EDFscore). After three year MK-7 supplementation cfPWV and the Stiffness Index βsignificantly decreased in the total group, whereas distension, compliance, distensibility, Young's Modulus, and the local carotid PWV (cPWV) improved in women having a baseline Stiffness Index β above the median of 10.8. MK-7 decreased dp-ucMGP by 50 % compared to placebo, but did not influence the markers for acute phase and endothelial dysfunction. In conclusion, long-term use of MK-7 supplements improves arterial stiffness in healthy postmenopausal women, especially in women having a high arterial stiffness.", "title": "" } ]
scidocsrr
21b49bdfb29c3c05db340d50e98e7fb6
SWRL Rule Editor - A Web Application as Rich as Desktop Business Rule Editors
[ { "docid": "0cf7ebc02a8396a615064892d9ee6f22", "text": "With the wider use of ontologies in the Semantic Web and as part of production systems, multiple scenarios for ontology maintenance and evolution are emerging. For example, successive ontology versions can be posted on the (Semantic) Web, with users discovering the new versions serendipitously; ontology-development in a collaborative environment can be synchronous or asynchronous; managers of projects may exercise quality control, examining changes from previous baseline versions and accepting or rejecting them before a new baseline is published, and so on. In this paper, we present different scenarios for ontology maintenance and evolution that we have encountered in our own projects and in those of our collaborators. We define several features that categorize these scenarios. For each scenario, we discuss the high-level tasks that an editing environment must support. We then present a unified comprehensive set of tools to support different scenarios in a single framework, allowing users to switch between different modes easily. 1 Evolution of Ontology Evolution Acceptance of ontologies as an integral part of knowledge-intensive applications has been growing steadily. The word ontology became a recognized substrate in fields outside the computer science, from bioinformatics to intelligence analysis. With such acceptance, came the use of ontologies in industrial systems and active publishing of ontologies on the (Semantic) Web. More and more often, developing an ontology is not a project undertaken by a single person or a small group of people in a research laboratory, but rather it is a large project with numerous participants, who are often geographically distributed, where the resulting ontologies are used in production environments with paying customers counting on robustness and reliability of the system. The Protégé ontology-development environment1 has become a widely used tool for developing ontologies, with more than 50,000 registered users. The Protégé group works closely with some of the tool’s users and we have a continuous stream of requests from them on the features that they would like to have supported in terms of managing and developing ontologies collaboratively. The configurations for collaborative development differ significantly however. For instance, Perot Systems2 uses a client–server mode of Protégé with multiple users simultaneously accessing the same copy of the ontology on the server. The NCI Center for Bioinformatics, which develops the NCI The1 http://protege.stanford.edu 2 http://www.perotsystems.com saurus3 has a different configuration: a baseline version of the Thesaurus is published regularly and between the baselines, multiple editors work asynchronously on their own versions. At the end of the cycle, the changes are reconciled. In the OBO project,4 ontology developers post their ontologies on a sourceforge site, using the sourceforge version-control system to publish successive versions. In addition to specific requirements to support each of these collaboration models, users universally request the ability to annotate their changes, to hold discussions about the changes, to see the change history with respective annotations, and so on. When developing tool support for all the different modes and tasks in the process of ontology evolution, we started with separate and unrelated sets of Protégé plugins that supported each of the collaborative editing modes. 
This approach, however, was difficult to maintain; besides, we saw that tools developed for one mode (such as change annotation) will be useful in other modes. Therefore, we have developed a single unified framework that is flexible enough to work in either synchronous or asynchronous mode, in those environments where Protégé and our plugins are used to track changes and in those environments where there is no record of the change steps. At the center of the system is a Change and Annotation Ontology (CHAO) with instances recording specific changes and meta-information about them (author, timestamp, annotations, acceptance status, etc.). When Protégé and its change-management plugins are used for ontology editing, these tools create CHAO instances as a side product of the editing process. Otherwise, the CHAO instances are created from a structural diff produced by comparing two versions. The CHAO instances then drive the user interface that displays changes between versions to a user, allows him to accept and reject changes, to view concept history, to generate a new baseline, to publish a history of changes that other applications can use, and so on. This paper makes the following contributions: – analysis and categorization of different scenarios for ontology maintenance and evolution and their functional requirements (Section 2) – development of a comprehensive solution that addresses most of the functional requirements from the different scenarios in a single unified framework (Section 3) – implementation of the solution as a set of open-source Protégé plugins (Section 4) 2 Ontology-Evolution Scenarios and Tasks We will now discuss different scenarios for ontology maintenance and evolution, their attributes, and functional requirements.", "title": "" } ]
[ { "docid": "4d0b163e7c4c308696fa5fd4d93af894", "text": "Modeling agent behavior is central to understanding the emergence of complex phenomena in multiagent systems. Prior work in agent modeling has largely been task-specific and driven by handengineering domain-specific prior knowledge. We propose a general learning framework for modeling agent behavior in any multiagent system using only a handful of interaction data. Our framework casts agent modeling as a representation learning problem. Consequently, we construct a novel objective inspired by imitation learning and agent identification and design an algorithm for unsupervised learning of representations of agent policies. We demonstrate empirically the utility of the proposed framework in (i) a challenging highdimensional competitive environment for continuous control and (ii) a cooperative environment for communication, on supervised predictive tasks, unsupervised clustering, and policy optimization using deep reinforcement learning.", "title": "" }, { "docid": "c122a50d90e9f4834f36a19ba827fa9f", "text": "Cancers are able to grow by subverting immune suppressive pathways, to prevent the malignant cells as being recognized as dangerous or foreign. This mechanism prevents the cancer from being eliminated by the immune system and allows disease to progress from a very early stage to a lethal state. Immunotherapies are newly developing interventions that modify the patient's immune system to fight cancer, by either directly stimulating rejection-type processes or blocking suppressive pathways. Extracellular adenosine generated by the ectonucleotidases CD39 and CD73 is a newly recognized \"immune checkpoint mediator\" that interferes with anti-tumor immune responses. In this review, we focus on CD39 and CD73 ectoenzymes and encompass aspects of the biochemistry of these molecules as well as detailing the distribution and function on immune cells. Effects of CD39 and CD73 inhibition in preclinical and clinical studies are discussed. Finally, we provide insights into potential clinical application of adenosinergic and other purinergic-targeting therapies and forecast how these might develop in combination with other anti-cancer modalities.", "title": "" }, { "docid": "94784bc9f04dbe5b83c2a9f02e005825", "text": "The optical code division multiple access (OCDMA), the most advanced multiple access technology in optical communication has become significant and gaining popularity because of its asynchronous access capability, faster speed, efficiency, security and unlimited bandwidth. Many codes are developed in spectral amplitude coding optical code division multiple access (SAC-OCDMA) with zero or minimum cross-correlation properties to reduce the multiple access interference (MAI) and Phase Induced Intensity Noise (PIIN). This paper compares two novel SAC-OCDMA codes in terms of their performances such as bit error rate (BER), number of active users that is accommodated with minimum cross-correlation property, high data rate that is achievable and the minimum power that the OCDMA system supports to achieve a minimum BER value. One of the proposed novel codes referred in this work as modified random diagonal code (MRDC) possesses cross-correlation between zero to one and the second novel code referred in this work as modified new zero cross-correlation code (MNZCC) possesses cross-correlation zero to further minimize the multiple access interference, which are found to be more scalable compared to the other existing SAC-OCDMA codes. 
In this work, the proposed MRDC and MNZCC codes are implemented in an optical system using the optisystem version-12 software for the SAC-OCDMA scheme. Simulation results depict that the OCDMA system based on the proposed novel MNZCC code exhibits better performance compared to the MRDC code and former existing SAC-OCDMA codes. The proposed MNZCC code accommodates maximum number of simultaneous users with higher data rate transmission, lower BER and longer traveling distance without any signal quality degradation as compared to the former existing SAC-OCDMA codes.", "title": "" }, { "docid": "4253afeaeb2f238339611e5737ed3e06", "text": "Over the past decade there has been a growing public fascination with the complex connectedness of modern society. This connectedness is found in many incarnations: in the rapid growth of the Internet, in the ease with which global communication takes place, and in the ability of news and information as well as epidemics and financial crises to spread with surprising speed and intensity. These are phenomena that involve networks, incentives, and the aggregate behavior of groups of people; they are based on the links that connect us and the ways in which our decisions can have subtle consequences for others. This introductory undergraduate textbook takes an interdisciplinary look at economics, sociology, computing and information science, and applied mathematics to understand networks and behavior. It describes the emerging field of study that is growing at the interface of these areas, addressing fundamental questions about how the social, economic, and technological worlds are connected.", "title": "" }, { "docid": "470810494ae81cc2361380c42116c8d7", "text": "Sustainability is significantly important for fashion business due to consumers’ increasing awareness of environment. When a fashion company aims to promote sustainability, the main linkage is to develop a sustainable supply chain. This paper contributes to current knowledge of sustainable supply chain in the textile and clothing industry. We first depict the structure of sustainable fashion supply chain including eco-material preparation, sustainable manufacturing, green distribution, green retailing, and ethical consumers based on the extant literature. We study the case of the Swedish fast fashion company, H&M, which has constructed its sustainable supply chain in developing eco-materials, providing safety training, monitoring sustainable manufacturing, reducing carbon emission in distribution, and promoting eco-fashion. Moreover, based on the secondary data and analysis, we learn the lessons of H&M’s sustainable fashion supply chain from the country perspective: (1) the H&M’s sourcing managers may be more likely to select suppliers in the countries with lower degrees of human wellbeing; (2) the H&M’s supply chain manager may set a higher level of inventory in a country with a higher human wellbeing; and (3) the H&M CEO may consider the degrees of human wellbeing and economic wellbeing, instead of environmental wellbeing when launching the online shopping channel in a specific country.", "title": "" }, { "docid": "b3cca9ebe524e4d0252289ecca8528b7", "text": "Convolutional neural nets (CNNs) have become a practical means to perform vision tasks, particularly in the area of image classification. FPGAs are well known to be able to perform convolutions efficiently, however, most recent efforts to run CNNs on FPGAs have shown limited advantages over other devices such as GPUs. 
Previous approaches on FPGAs have often been memory bound due to the limited external memory bandwidth on the FPGA device. We show a novel architecture written in OpenCL, which we refer to as a Deep Learning Accelerator (DLA), that maximizes data reuse and minimizes external memory bandwidth. Furthermore, we show how we can use the Winograd transform to significantly boost the performance of the FPGA. As a result, when running our DLA on Intel’s Arria 10 device we can achieve a performance of 1020img/s, or 23img/s/W when running the AlexNet CNN benchmark. This comes to 1382 GFLOPs and is 10x faster with 8.4x more GFLOPS and 5.8x better efficiency than the state-of-the-art on FPGAs. Additionally, 23 img/s/W is competitive against the best publicly known implementation of AlexNet on nVidia’s TitanX GPU.", "title": "" }, { "docid": "f3e5941be4543d5900d56c1a7d93d0ea", "text": "These working notes summarize the different approaches we have explored in order to classify a corpus of tweets related to the 2015 Spanish General Election (COSET 2017 task from IberEval 2017). Two approaches were tested during the COSET 2017 evaluations: Neural Networks with Sentence Embeddings (based on TensorFlow) and N-gram Language Models (based on SRILM). Our results with these approaches were modest: both ranked above the “Most frequent baseline”, but below the “Bag-of-words + SVM” baseline. A third approach was tried after the COSET 2017 evaluation phase was over: Advanced Linear Models (based on fastText). Results measured over the COSET 2017 Dev and Test show that this approach is well above the “TF-IDF+RF” baseline.", "title": "" }, { "docid": "1c9dd9b98b141e87ca7b74e995630456", "text": "Transportation systems in mega-cities are often affected by various kinds of events such as natural disasters, accidents, and public gatherings. Highly dense and complicated networks in the transportation systems propagate confusion in the network because they offer various possible transfer routes to passengers. Visualization is one of the most important techniques for examining such cascades of unusual situations in the huge networks. This paper proposes visual integration of traffic analysis and social media analysis using two forms of big data: smart card data on the Tokyo Metro and social media data on Twitter. Our system provides multiple coordinated views to visually, intuitively, and simultaneously explore changes in passengers' behavior and abnormal situations extracted from smart card data and situational explanations from real voices of passengers such as complaints about services extracted from social media data. We demonstrate the possibilities and usefulness of our novel visualization environment using a series of real data case studies and domain experts' feedbacks about various kinds of events.", "title": "" }, { "docid": "d8748f3c6192e0e2fe3cdb9b745ef703", "text": "In this paper, we consider a method for computing the similarity of executable files, based on opcode graphs. We apply this technique to the challenging problem of metamorphic malware detection and compare the results to previous work based on hidden Markov models. In addition, we analyze the effect of various morphing techniques on the success of our proposed opcode graph-based detection scheme.", "title": "" }, { "docid": "13fbd264cf1f515c0ad6ebb30644e32e", "text": "This article presents a new model that accounts for working memory spans in adults, the time-based resource-sharing model. 
The model assumes that both components (i.e., processing and maintenance) of the main working memory tasks require attention and that memory traces decay as soon as attention is switched away. Because memory retrievals are constrained by a central bottleneck and thus totally capture attention, it was predicted that the maintenance of the items to be recalled depends on both the number of memory retrievals required by the intervening treatment and the time allowed to perform them. This number of retrievals:time ratio determines the cognitive load of the processing component. The authors show in 7 experiments that working memory spans vary as a function of this cognitive load.", "title": "" }, { "docid": "f8b201105e3b92ed4ef2a884cb626c0d", "text": "Several years of academic and industrial research efforts have converged to a common understanding on fundamental security building blocks for the upcoming vehicular communication (VC) systems. There is a growing consensus toward deploying a special-purpose identity and credential management infrastructure, i.e., a vehicular public-key infrastructure (VPKI), enabling pseudonymous authentication, with standardization efforts toward that direction. In spite of the progress made by standardization bodies (IEEE 1609.2 and ETSI) and harmonization efforts [Car2Car Communication Consortium (C2C-CC)], significant questions remain unanswered toward deploying a VPKI. Deep understanding of the VPKI, a central building block of secure and privacy-preserving VC systems, is still lacking. This paper contributes to the closing of this gap. We present SECMACE, a VPKI system, which is compatible with the IEEE 1609.2 and ETSI standards specifications. We provide a detailed description of our state-of-the-art VPKI that improves upon existing proposals in terms of security and privacy protection, and efficiency. SECMACE facilitates multi-domain operations in the VC systems and enhances user privacy, notably preventing linking pseudonyms based on timing information and offering increased protection even against honest-but-curious VPKI entities. We propose multiple policies for the vehicle–VPKI interactions and two large-scale mobility trace data sets, based on which we evaluate the full-blown implementation of SECMACE. With very little attention on the VPKI performance thus far, our results reveal that modest computing resources can support a large area of vehicles with very few delays and the most promising policy in terms of privacy protection can be supported with moderate overhead.", "title": "" }, { "docid": "ee4c10d53be10ed1a68e85e6a8a14f31", "text": "1 Center for Manufacturing Research, Tennessee Technological University (TTU), Cookeville, TN 38505, USA 2 Department of Electrical and Computer Engineering, Tennessee Technological University (TTU), Cookeville, TN 38505, USA 3 Panasonic Princeton Laboratory (PPRL), Panasonic R&D Company of America, 2 Research Way, Princeton, NJ 08540, USA 4 Network Development Center, Matsushita Electric Industrial Co., Ltd., 4-12-4 Higashi-shinagawa, Shinagawa-ku, Tokyo 140-8587, Japan", "title": "" }, { "docid": "2157b222a73c176ca9e54258b3a531fe", "text": "A switched-capacitor bias that provides a constant Gm-C characteristic over process and temperature variation is presented. The bias can be adapted for use with subthreshold circuits, or circuits in strong inversion. It uses eight transistors, five switches, and three capacitors, and performs with supply voltages less than 0.9 V. 
Theoretical output current is derived, and stability analysis is performed. Simulated results showing an op-amp with very consistent pulse response are presented", "title": "" }, { "docid": "acc26655abb2a181034db8571409d0a5", "text": "In this paper, a new optimization approach is designed for convolutional neural network (CNN) which introduces explicit logical relations between filters in the convolutional layer. In a conventional CNN, the filters’ weights in convolutional layers are separately trained by their own residual errors, and the relations of these filters are not explored for learning. Different from the traditional learning mechanism, the proposed correlative filters (CFs) are initiated and trained jointly in accordance with predefined correlations, which are efficient to work cooperatively and finally make a more generalized optical system. The improvement in CNN performance with the proposed CF is verified on five benchmark image classification datasets, including CIFAR-10, CIFAR-100, MNIST, STL-10, and street view house number. The comparative experimental results demonstrate that the proposed approach outperforms a number of state-of-the-art CNN approaches.", "title": "" }, { "docid": "5259c7d1c7b05050596f6667aa262e11", "text": "We propose a novel approach to automatic detection and tracking of people taking different poses in cluttered and dynamic environments using a single RGB-D camera. The original RGB-D pixels are transformed to a novel point ensemble image (PEI), and we demonstrate that human detection and tracking in 3D space can be performed very effectively with this new representation. The detector in the first phase quickly locates human physiquewise plausible candidates, which are then further carefully filtered in a supervised learning and classification second phase. Joint statistics of color and height are computed for data association to generate final 3D motion trajectories of tracked individuals. Qualitative and quantitative experimental results obtained on the publicly available office dataset, mobile camera dataset and the real-world clothing store dataset we created show very promising results. © 2014 Elsevier B.V. All rights reserved. d T b r a e w c t e i c a i c p p g w e h", "title": "" }, { "docid": "13bfb20823bb45feeac5fbcc9a552eaa", "text": "Facial landmark localisation in images captured in-the-wild is an important and challenging problem. The current state-of-the-art revolves around certain kinds of Deep Convolutional Neural Networks (DCNNs) such as stacked U-Nets and Hourglass networks. In this work, we innovatively propose stacked dense U-Nets for this task. We design a novel scale aggregation network topology structure and a channel aggregation building block to improve the model’s capacity without sacrificing the computational complexity and model size. With the assistance of deformable convolutions inside the stacked dense U-Nets and coherent loss for outside data transformation, our model obtains the ability to be spatially invariant to arbitrary input face images. Extensive experiments on many in-the-wild datasets, validate the robustness of the proposed method under extreme poses, exaggerated expressions and heavy occlusions. Finally, we show that accurate 3D face alignment can assist pose-invariant face recognition where we achieve a new stateof-the-art accuracy on CFP-FP (98.514%).", "title": "" }, { "docid": "b4c25df52a0a5f6ab23743d3ca9a3af2", "text": "Measuring similarity between texts is an important task for several applications. 
Available approaches to measure document similarity are inadequate for document pairs that have non-comparable lengths, such as a long document and its summary. This is because of the lexical, contextual and the abstraction gaps between a long document of rich details and its concise summary of abstract information. In this paper, we present a document matching approach to bridge this gap, by comparing the texts in a common space of hidden topics. We evaluate the matching algorithm on two matching tasks and find that it consistently and widely outperforms strong baselines. We also highlight the benefits of incorporating domain knowledge to text matching.", "title": "" }, { "docid": "209842e00957d1d1786008d943895dc9", "text": "The impact that urban green spaces have on sustainability and quality of life is phenomenal. This is also true for the local South African environment. However, in reality green spaces in urban environments are decreasing due to growing populations, increasing urbanization and development pressure. This further impacts on the provision of child-friendly spaces, a concept that is already limited in local context. Child-friendly spaces are described as environments in which people (children) feel intimately connected to, influencing the physical, social, emotional, and ecological health of individuals and communities. The benefits of providing such spaces for the youth are well documented in literature. This research therefore aimed to investigate the concept of childfriendly spaces and its applicability to the South African planning context, in order to guide the planning of such spaces for future communities and use. Child-friendly spaces in the urban environment of the city of Durban, was used as local case study, along with two international case studies namely Mullerpier public playground in Rotterdam, the Netherlands, and Kadidjiny Park in Melville, Australia. The aim was to determine how these spaces were planned and developed and to identify tools that were used to accomplish the goal of providing successful child-friendly green spaces within urban areas. The need and significance of planning for such spaces was portrayed within the international case studies. It is confirmed that minimal provision is made for green space planning within the South African context, when there is reflected on the international examples. As a result international examples and disciples of providing child-friendly green spaces should direct planning guidelines within local context. The research concluded that childfriendly green spaces have a positive impact on the urban environment and assist in a child’s development and interaction with the natural environment. Regrettably, the planning of these childfriendly spaces is not given priority within current spatial plans, despite the proven benefits of such. Keywords—Built environment, child-friendly spaces, green spaces. public places, urban area. E. J. Cilliers is a Professor at the North West University, Unit for Environmental Sciences and Management, Urban and Regional Planning, Potchestroom, 2531, South Africa (e-mail: juanee.cilliers@nwu.ac.za). Z. Goosen is a PhD student with the North West University, Unit for Environmental Sciences and Management, Urban and Regional Planning, Potchestroom, 2531, South Africa (e-mail: goosenzhangoosen@gmail.com). This research (or parts thereof) was made possible by the financial contribution of the NRF (National Research Foundation) South Africa. 
The opinions, findings and conclusions or recommendations expressed in this material are those of the authors and therefore the NRF does not accept any liability in regard thereto.", "title": "" }, { "docid": "319285416d58c9b2da618bb6f0c8021c", "text": "Facial expression analysis is one of the popular fields of research in human computer interaction (HCI). It has several applications in next generation user interfaces, human emotion analysis, behavior and cognitive modeling. In this paper, a facial expression classification algorithm is proposed which uses Haar classifier for face detection purpose, Local Binary Patterns(LBP) histogram of different block sizes of a face image as feature vectors and classifies various facial expressions using Principal Component Analysis (PCA). The algorithm is implemented in real time for expression classification since the computational complexity of the algorithm is small. A customizable approach is proposed for facial expression analysis, since the various expressions and intensity of expressions vary from person to person. The system uses grayscale frontal face images of a person to classify six basic emotions namely happiness, sadness, disgust, fear, surprise and anger.", "title": "" }, { "docid": "f40125e7cc8279a5514deaf1146684de", "text": "Summary Several models explain how a complex integrated system like the rodent mandible can arise from multiple developmental modules. The models propose various integrating mechanisms, including epigenetic effects of muscles on bones. We test five for their ability to predict correlations found in the individual (symmetric) and fluctuating asymmetric (FA) components of shape variation. We also use exploratory methods to discern patterns unanticipated by any model. Two models fit observed correlation matrices from both components: (1) parts originating in same mesenchymal condensation are integrated, (2) parts developmentally dependent on the same muscle form an integrated complex as do those dependent on teeth. Another fits the correlations observed in FA: each muscle insertion site is an integrated unit. However, no model fits well, and none predicts the complex structure found in the exploratory analyses, best described as a reticulated network. Furthermore, no model predicts the correlation between proximal parts of the condyloid and coronoid, which can exceed the correlations between proximal and distal parts of the same process. Additionally, no model predicts the correlation between molar alveolus and ramus and/or angular process, one of the highest correlations found in the FA component. That correlation contradicts the basic premise of all five developmental models, yet it should be anticipated from the epigenetic effects of mastication, possibly the primary morphogenetic process integrating the jaw coupling forces generated by muscle contraction with those experienced at teeth.", "title": "" } ]
scidocsrr
4e85e23c295c2b4231d8cc5413816cff
Image Processing Techniques for Detection of Leaf Disease
[ { "docid": "058515182c568c8df202542f28c15203", "text": "Plant diseases have turned into a dilemma as it can cause significant reduction in both quality and quantity of agricultural products. Automatic detection of plant diseases is an essential research topic as it may prove benefits in monitoring large fields of crops, and thus automatically detect the symptoms of diseases as soon as they appear on plant leaves. The proposed system is a software solution for automatic detection and classification of plant leaf diseases. The developed processing scheme consists of four main steps, first a color transformation structure for the input RGB image is created, then the green pixels are masked and removed using specific threshold value followed by segmentation process, the texture statistics are computed for the useful segments, finally the extracted features are passed through the classifier. The proposed algorithm’s efficiency can successfully detect and classify the examined diseases with an accuracy of 94%. Experimental results on a database of about 500 plant leaves confirm the robustness of the proposed approach.", "title": "" }, { "docid": "9aa3a9b8fb22ba929146298386ca9e57", "text": "Since current grading of plant diseases is mainly based on eyeballing, a new method is developed based on computer image processing. All influencing factors existed in the process of image segmentation was analyzed and leaf region was segmented by using Otsu method. In the HSI color system, H component was chosen to segment disease spot to reduce the disturbance of illumination changes and the vein. Then, disease spot regions were segmented by using Sobel operator to examine disease spot edges. Finally, plant diseases are graded by calculating the quotient of disease spot and leaf areas. Researches indicate that this method to grade plant leaf spot diseases is fast and accurate.", "title": "" }, { "docid": "1b60ded506c85edd798fe0759cce57fa", "text": "The studies of plant trait/disease refer to the studies of visually observable patterns of a particular plant. Nowadays crops face many traits/diseases. Damage of the insect is one of the major trait/disease. Insecticides are not always proved efficient because insecticides may be toxic to some kind of birds. It also damages natural animal food chains. A common practice for plant scientists is to estimate the damage of plant (leaf, stem) because of disease by an eye on a scale based on percentage of affected area. It results in subjectivity and low throughput. This paper provides a advances in various methods used to study plant diseases/traits using image processing. The methods studied are for increasing throughput & reducing subjectiveness arising from human experts in detecting the plant diseases.", "title": "" } ]
[ { "docid": "a9ea1f1f94a26181addac948837c3030", "text": "Crime tends to clust er geographi cally. This has led to the wide usage of hotspot analysis to identify and visualize crime. Accurately identified crime hotspots can greatly benefit the public by creating accurate threat visualizations, more efficiently allocating police resources, and predicting crime. Yet existing mapping methods usually identify hotspots without considering the underlying correlates of crime. In this study, we introduce a spatial data mining framework to study crime hotspots through their related variables. We use Geospatial Discriminative Patterns (GDPatterns) to capture the significant difference between two classes (hotspots and normal areas) in a geo-spatial dataset. Utilizing GDPatterns, we develop a novel model—Hotspot Optimization Tool (HOT)—to improve the identification of crime hotspots. Finally, based on a similarity measure, we group GDPattern clusters and visualize the distribution and characteristics of crime related variables. We evaluate our approach using a real world dataset collected from a northeast city in the United States. 2013 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "8f91beade67a248cc0c063db42caabec", "text": "c:nt~ now, true videwon-dernaad can ody be atievsd hg a dedicated data flow for web service request. This brute force approach is probibitivdy &\\Tensive. Using mtiticast w si@cantly reduce the system rest. This solution, however, mu~t dday services in order to serve many requ~s as a hztch. h this paper, we consider a third alternative ded Pat&ing. h our technique, an e*mg mtiticast m expand dynarnidy to serve new &ents. ~otig new &ents to join an existiig rutiticast improves the ficiency of the rntiti-.. ~hermor~ since W requ~s can be served immediatdy, the &ents experience no service dday md true vide+on-dem~d ~ be achieve~ A si~cant contribution of tkis work, is making mdtiwork for true vide~ on-demand ssrvicw. h fact, we are able to tiate the service latency and improve the efficiency of mtiticast at the same time To assms the ben~t of this sdetne, w perform simdations to compare its performance +th that of standard rntiti-. Our simtiation rats indicate convincingly that Patching offers .wbstanti~y better perforrnace.", "title": "" }, { "docid": "add2f0b6aeb19e01ec4673b6f391cc61", "text": "Accurate localization of landmarks in the vicinity of a robot is a first step towards solving the SLAM problem. In this work, we propose algorithms to accurately estimate the 3D location of the landmarks from the robot only from a single image taken from its on board camera. Our approach differs from previous efforts in this domain in that it first reconstructs accurately the 3D environment from a single image, then it defines a coordinate system over the environment, and later it performs the desired localization with respect to this coordinate system using the environment's features. The ground plane from the given image is accurately estimated and this precedes segmentation of the image into ground and vertical regions. A Markov Random Field (MRF) based 3D reconstruction is performed to build an approximate depth map of the given image. This map is robust against texture variations due to shadows, terrain differences, etc. A texture segmentation algorithm is also applied to determine the ground plane accurately. 
Once the ground plane is estimated, we use the respective camera's intrinsic and extrinsic calibration information to calculate accurate 3D information about the features in the scene.", "title": "" }, { "docid": "8a42bc2dec684cf087d19bbbd2e815f8", "text": "Carefully managing the presentation of self via technology is a core practice on all modern social media platforms. Recently, selfies have emerged as a new, pervasive genre of identity performance. In many ways unique, selfies bring us fullcircle to Goffman—blending the online and offline selves together. In this paper, we take an empirical, Goffman-inspired look at the phenomenon of selfies. We report a large-scale, mixed-method analysis of the categories in which selfies appear on Instagram—an online community comprising over 400M people. Applying computer vision and network analysis techniques to 2.5M selfies, we present a typology of emergent selfie categories which represent emphasized identity statements. To the best of our knowledge, this is the first large-scale, empirical research on selfies. We conclude, contrary to common portrayals in the press, that selfies are really quite ordinary: they project identity signals such as wealth, health and physical attractiveness common to many online media, and to offline life.", "title": "" }, { "docid": "ce9084c2ac96db6bca6ddebe925c3d42", "text": "Tactical driving decision making is crucial for autonomous driving systems and has attracted considerable interest in recent years. In this paper, we propose several practical components that can speed up deep reinforcement learning algorithms towards tactical decision making tasks: 1) nonuniform action skipping as a more stable alternative to action-repetition frame skipping, 2) a counterbased penalty for lanes on which ego vehicle has less right-of-road, and 3) heuristic inference-time action masking for apparently undesirable actions. We evaluate the proposed components in a realistic driving simulator and compare them with several baselines. Results show that the proposed scheme provides superior performance in terms of safety, efficiency, and comfort.", "title": "" }, { "docid": "4f0d34e830387947f807213599d47652", "text": "An essential feature of large scale free graphs, such as the Web, protein-to-protein interaction, brain connectivity, and social media graphs, is that they tend to form recursive communities. The latter are densely connected vertex clusters exhibiting quick local information dissemination and processing. Under the fuzzy graph model vertices are fixed while each edge exists with a given probability according to a membership function. This paper presents Fuzzy Walktrap and Fuzzy Newman-Girvan, fuzzy versions of two established community discovery algorithms. The proposed algorithms have been applied to a synthetic graph generated by the Kronecker model with different termination criteria and the results are discussed. Keywords-Fuzzy graphs; Membership function; Community detection; Termination criteria; Walktrap algorithm; NewmanGirvan algorithm; Edge density; Kronecker model; Large graph analytics; Higher order data", "title": "" }, { "docid": "9a2a126eecb116f04b501028f92b7736", "text": "Sleep bruxism (SB) is a common sleep-related motor disorder characterized by tooth grinding and clenching. SB diagnosis is made on history of tooth grinding and confirmed by polysomnographic recording of electromyographic (EMG) episodes in the masseter and temporalis muscles. 
The typical EMG activity pattern in patients with SB is known as rhythmic masticatory muscle activity (RMMA). The authors observed that most RMMA episodes occur in association with sleep arousal and are preceded by physiologic activation of the central nervous and sympathetic cardiac systems. This article provides a comprehensive review of the cause, pathophysiology, assessment, and management of SB.", "title": "" }, { "docid": "c5bbb45cc61de12d0eac19d1e59752fb", "text": "'No-shows' or missed appointments result in under-utilized clinic capacity. We develop a logistic regression model using electronic medical records to estimate patients' no-show probabilities and illustrate the use of the estimates in creating clinic schedules that maximize clinic capacity utilization while maintaining small patient waiting times and clinic overtime costs. This study used information on scheduled outpatient appointments collected over a three-year period at a Veterans Affairs medical center. The call-in process for 400 clinic days was simulated and for each day two schedules were created: the traditional method that assigned one patient per appointment slot, and the proposed method that scheduled patients according to their no-show probability to balance patient waiting, overtime and revenue. Combining patient no-show models with advanced scheduling methods would allow more patients to be seen a day while improving clinic efficiency. Clinics should consider the benefits of implementing scheduling software that includes these methods relative to the cost of no-shows.", "title": "" }, { "docid": "3ad124875f073ff961aaf61af2832815", "text": "EVERY HUMAN CULTURE HAS SOME FORM OF MUSIC WITH A BEAT\na perceived periodic pulse that structures the perception of musical rhythm and which serves as a framework for synchronized movement to music. What are the neural mechanisms of musical beat perception, and how did they evolve? One view, which dates back to Darwin and implicitly informs some current models of beat perception, is that the relevant neural mechanisms are relatively general and are widespread among animal species. On the basis of recent neural and cross-species data on musical beat processing, this paper argues for a different view. Here we argue that beat perception is a complex brain function involving temporally-precise communication between auditory regions and motor planning regions of the cortex (even in the absence of overt movement). More specifically, we propose that simulation of periodic movement in motor planning regions provides a neural signal that helps the auditory system predict the timing of upcoming beats. This \"action simulation for auditory prediction\" (ASAP) hypothesis leads to testable predictions. We further suggest that ASAP relies on dorsal auditory pathway connections between auditory regions and motor planning regions via the parietal cortex, and suggest that these connections may be stronger in humans than in non-human primates due to the evolution of vocal learning in our lineage. 
This suggestion motivates cross-species research to determine which species are capable of human-like beat perception, i.e., beat perception that involves accurate temporal prediction of beat times across a fairly broad range of tempi.", "title": "" }, { "docid": "37feedcb9e527601cb28fe59b2526ab3", "text": "In this paper we present a covariance based tracking algorithm for intelligent video analysis to assist marine biologists in understanding the complex marine ecosystem in the Ken-Ding sub-tropical coral reef in Taiwan by processing underwater real-time videos recorded in open ocean. One of the most important aspects of marine biology research is the investigation of fish trajectories to identify events of interest such as fish preying, mating, schooling, etc. This task, of course, requires a reliable tracking algorithm able to deal with 1) the difficulties of following fish that have multiple degrees of freedom and 2) the possible varying conditions of the underwater environment. To accommodate these needs, we have developed a tracking algorithm that exploits covariance representation to describe the object’s appearance and statistical information and also to join different types of features such as location, color intensities, derivatives, etc. The accuracy of the algorithm was evaluated by using hand-labeled ground truth data on 30000 frames belonging to ten different videos, achieving an average performance of about 94%, estimated using multiple ratios that provide indication on how good is a tracking algorithm both globally (e.g. counting objects in a fixed range of time) and locally (e.g. in distinguish occlusions among objects).", "title": "" }, { "docid": "a6e6cf1473adb05f33b55cb57d6ed6d3", "text": "In machine learning, data augmentation is the process of creating synthetic examples in order to augment a dataset used to learn a model. One motivation for data augmentation is to reduce the variance of a classifier, thereby reducing error. In this paper, we propose new data augmentation techniques specifically designed for time series classification, where the space in which they are embedded is induced by Dynamic Time Warping (DTW). The main idea of our approach is to average a set of time series and use the average time series as a new synthetic example. The proposed methods rely on an extension of DTW Barycentric Averaging (DBA), the averaging technique that is specifically developed for DTW. In this paper, we extend DBA to be able to calculate a weighted average of time series under DTW. In this case, instead of each time series contributing equally to the final average, some can contribute more than others. This extension allows us to generate an infinite number of new examples from any set of given time series. To this end, we propose three methods that choose the weights associated to the time series of the dataset. We carry out experiments on the 85 datasets of the UCR archive and demonstrate that our method is particularly useful when the number of available examples is limited (e.g. 2 to 6 examples per class) using a 1-NN DTW classifier. 
Furthermore, we show that augmenting full datasets is beneficial in most cases, as we observed an increase of accuracy on 56 datasets, no effect on 7 and a slight decrease on only 22.", "title": "" }, { "docid": "8cdbbbfa00dfd08119e1802e9498df20", "text": "Background:Cetuximab is the only targeted agent approved for the treatment of head and neck squamous cell carcinomas (HNSCC), but low response rates and disease progression are frequently reported. As the phosphoinositide 3-kinase (PI3K) and the mammalian target of rapamycin (mTOR) pathways have an important role in the pathogenesis of HNSCC, we investigated their involvement in cetuximab resistance.Methods:Different human squamous cancer cell lines sensitive or resistant to cetuximab were tested for the dual PI3K/mTOR inhibitor PF-05212384 (PKI-587), alone and in combination, both in vitro and in vivo.Results:Treatment with PKI-587 enhances sensitivity to cetuximab in vitro, even in the condition of epidermal growth factor receptor (EGFR) resistance. The combination of the two drugs inhibits cells survival, impairs the activation of signalling pathways and induces apoptosis. Interestingly, although significant inhibition of proliferation is observed in all cell lines treated with PKI-587 in combination with cetuximab, activation of apoptosis is evident in sensitive but not in resistant cell lines, in which autophagy is pre-eminent. In nude mice xenografted with resistant Kyse30 cells, the combined treatment significantly reduces tumour growth and prolongs mice survival.Conclusions:Phosphoinositide 3-kinase/mammalian target of rapamycin inhibition has an important role in the rescue of cetuximab resistance. Different mechanisms of cell death are induced by combined treatment depending on basal anti-EGFR responsiveness.", "title": "" }, { "docid": "eb23e4dedc5444faff49fa46b9866a15", "text": "People with severe neurological impairments face many challenges in sensorimotor functions and communication with the environment; therefore they have increased demand for advanced, adaptive and personalized rehabilitation. During the last several decades, numerous studies have developed brain-computer interfaces (BCIs) with the goals ranging from providing means of communication to functional rehabilitation. Here we review the research on non-invasive, electroencephalography (EEG)-based BCI systems for communication and rehabilitation. We focus on the approaches intended to help severely paralyzed and locked-in patients regain communication using three different BCI modalities: slow cortical potentials, sensorimotor rhythms and P300 potentials, as operational mechanisms. We also review BCI systems for restoration of motor function in patients with spinal cord injury and chronic stroke. We discuss the advantages and limitations of these approaches and the challenges that need to be addressed in the future.", "title": "" }, { "docid": "97c81cfa85ff61b999ae8e565297a16e", "text": "This paper describes the complete implementation of a blind image denoising algorithm, that takes any digital image as input. In a first step the algorithm estimates a Signal and Frequency Dependent (SFD) noise model. In a second step, the image is denoised by a multiscale adaptation of the Non-local Bayes denoising method. We focus here on a careful analysis of the denoising step and present a detailed discussion of the influence of its parameters. Extensive commented tests of the blind denoising algorithm are presented, on real JPEG images and on scans of old photographs. 
Source Code The source code (ANSI C), its documentation, and the online demo are accessible at the IPOL web page of this article1.", "title": "" }, { "docid": "44e28ba2149dce27fd0ccc9ed2065feb", "text": "Flip chip assembly technology is an attractive solution for high I/O density and fine-pitch microelectronics packaging. Recently, high efficient GaN-based light-emitting diodes (LEDs) have undergone a rapid development and flip chip bonding has been widely applied to fabricate high-brightness GaN micro-LED arrays [1]. The flip chip GaN LED has some advantages over the traditional top-emission LED, including improved current spreading, higher light extraction efficiency, better thermal dissipation capability and the potential of further optical component integration [2, 3]. With the advantages of flip chip assembly, micro-LED (μLED) arrays with high I/O density can be performed with improved luminous efficiency than conventional p-side-up micro-LED arrays and are suitable for many potential applications, such as micro-displays, bio-photonics and visible light communications (VLC), etc. In particular, μLED array based selif-emissive micro-display has the promising to achieve high brightness and contrast, reliability, long-life and compactness, which conventional micro-displays like LCD, OLED, etc, cannot compete with. In this study, GaN micro-LED array device with flip chip assembly package process was presented. The bonding quality of flip chip high density micro-LED array is tested by daisy chain test. The p-n junction tests of the devices are measured for electrical characteristics. The illumination condition of each micro-diode pixel was examined under a forward bias. Failure mode analysis was performed using cross sectioning and scanning electron microscopy (SEM). Finally, the fully packaged micro-LED array device is demonstrated as a prototype of dice projector system.", "title": "" }, { "docid": "247534c6b5416e4330a84e10daf2bc0c", "text": "The aim of the present study was to determine metabolic responses, movement patterns and distance covered at running speeds corresponding to fixed blood lactate concentrations (FBLs) in young soccer players during a match play. A further aim of the study was to evaluate the relationships between FBLs, maximal oxygen consumption (VO2max) and distance covered during a game. A multistage field test was administered to 32 players to determine FBLs and VO2max. Blood lactate (LA), heart rate (HR) and rate of perceived exertion (RPE) responses were obtained from 36 players during tournament matches filmed using six fixed cameras. Images were transferred to a computer, for calibration and synchronization. In all players, values for LA and HR were higher and RPE lower during the 1(st) half compared to the 2(nd) half of the matches (p < 0.01). Players in forward positions had higher LA levels than defenders, but HR and RPE values were similar between playing positions. Total distance and distance covered in jogging, low-moderate-high intensity running and low intensity sprint were higher during the 1(st) half (p < 0.01). In the 1(st) half, players also ran longer distances at FBLs [p<0.01; average running speed at 2mmol·L(-1) (FBL2): 3.32 ± 0.31m·s(-1) and average running speed at 4mmol·L(-1) (FBL4): 3.91 ± 0.25m·s(-1)]. There was a significant difference between playing positions in distance covered at different running speeds (p < 0.05). However, when distance covered was expressed as FBLs, the players ran similar distances. 
In addition, relationships between FBLs and total distance covered were significant (r = 0.482 to 0.570; p < 0.01). In conclusion, these findings demonstrated that young soccer players experienced higher internal load during the 1(st) half of a game compared to the 2(nd) half. Furthermore, although movement patterns of players differed between playing positions, all players experienced a similar physiological stress throughout the game. Finally, total distance covered was associated to fixed blood lactate concentrations during play. Key points: Based on LA, HR and RPE responses, young top soccer players experienced a higher physiological stress during the 1(st) half of the matches compared to the 2(nd) half. Movement patterns differed in accordance with the players' positions but that all players experienced a similar physiological stress during match play. Approximately one quarter of total distance was covered at speeds that exceeded the 4 mmol·L(-1) fixed LA threshold. Total distance covered was influenced by running speeds at fixed lactate concentrations in young soccer players during match play.", "title": "" }, { "docid": "7177503e5a6dffcaab46009673af5eed", "text": "This paper describes a heart attack self-test application for a mobile phone that allows potential victims, without the intervention of a medical specialist, to quickly assess whether they are having a heart attack. Heart attacks can occur anytime and anywhere. Using pervasive technology such as a mobile phone and a small wearable ECG sensor it is possible to collect the user's symptoms and to detect the onset of a heart attack by analysing the ECG recordings. If the application assesses that the user is at risk, it will urge the user to call the emergency services immediately. If the user has a cardiac arrest the application will automatically determine the current location of the user and alert the ambulance services and others to the person's location.", "title": "" }, { "docid": "5dc78e62ca88a6a5f253417093e2aa4d", "text": "This paper surveys the scientific and trade literature on cybersecurity for unmanned aerial vehicles (UAV), concentrating on actual and simulated attacks, and the implications for small UAVs. The review is motivated by the increasing use of small UAVs for inspecting critical infrastructures such as the electric utility transmission and distribution grid, which could be a target for terrorism. The paper presents a modified taxonomy to organize cyber attacks on UAVs and exploiting threats by Attack Vector and Target. It shows that, by Attack Vector, there has been one physical attack and ten remote attacks. By Target, there have been six attacks on GPS (two jamming, four spoofing), two attacks on the control communications stream (a deauthentication attack and a zero-day vulnerabilities attack), and two attacks on data communications stream (two intercepting the data feed, zero executing a video replay attack). The paper also divides and discusses the findings by large or small UAVs, over or under 25 kg, but concentrates on small UAVs. The survey concludes that UAV-related research to counter cybersecurity threats focuses on GPS Jamming and Spoofing, but ignores attacks on the controls and data communications stream. 
The gap in research on attacks on the data communications stream is concerning, as an operator can see a UAV flying off course due to a control stream attack but has no way of detecting a video replay attack (substitution of a video feed).", "title": "" }, { "docid": "ecbdb56c52a59f26cf8e33fc533d608f", "text": "The ethical nature of transformational leadership has been hotly debated. This debate is demonstrated in the range of descriptors that have been used to label transformational leaders including narcissistic, manipulative, and self-centred, but also ethical, just and effective. Therefore, the purpose of the present research was to address this issue directly by assessing the statistical relationship between perceived leader integrity and transformational leadership using the Perceived Leader Integrity Scale (PLIS) and the Multi-Factor Leadership Questionnaire (MLQ). In a national sample of 1354 managers a moderate to strong positive relationship was found between perceived integrity and the demonstration of transformational leadership behaviours. A similar relationship was found between perceived integrity and developmental exchange leadership. A systematic leniency bias was identified when respondents rated subordinates vis-à-vis peer ratings. In support of previous findings, perceived integrity was also found to correlate positively with leader and organisational effectiveness measures.", "title": "" }, { "docid": "0b86a006b1f8e3a5e940daef25fe7d58", "text": "While drug toxicity (especially hepatotoxicity) is the most frequent reason cited for withdrawal of an approved drug, no simple solution exists to adequately predict such adverse events. Simple cytotoxicity assays in HepG2 cells are relatively insensitive to human hepatotoxic drugs in a retrospective analysis of marketed pharmaceuticals. In comparison, a panel of pre-lethal mechanistic cellular assays hold the promise to deliver a more sensitive approach to detect endpoint-specific drug toxicities. The panel of assays covered by this review includes steatosis, cholestasis, phospholipidosis, reactive intermediates, mitochondria membrane function, oxidative stress, and drug interactions. In addition, the use of metabolically competent cells or the introduction of major human hepatocytes in these in vitro studies allow a more complete picture of potential drug side effect. Since inter-individual therapeutic index (TI) may differ from patient to patient, the rational use of one or more of these cellular assay and targeted in vivo exposure data may allow pharmaceutical scientists to select drug candidates with a higher TI potential in the drug discovery phase.", "title": "" } ]
scidocsrr
5c834f5f0c836067419cae60d9fbdede
Stance Classification in Rumours as a Sequential Task Exploiting the Tree Structure of Social Media Conversations
[ { "docid": "4ac3c3fb712a1121e0990078010fe4b0", "text": "1.1 Introduction Relational data has two characteristics: first, statistical dependencies exist between the entities we wish to model, and second, each entity often has a rich set of features that can aid classification. For example, when classifying Web documents, the page's text provides much information about the class label, but hyperlinks define a relationship between pages that can improve classification [Taskar et al., 2002]. Graphical models are a natural formalism for exploiting the dependence structure among entities. Traditionally, graphical models have been used to represent the joint probability distribution p(y, x), where the variables y represent the attributes of the entities that we wish to predict, and the input variables x represent our observed knowledge about the entities. But modeling the joint distribution can lead to difficulties when using the rich local features that can occur in relational data, because it requires modeling the distribution p(x), which can include complex dependencies. Modeling these dependencies among inputs can lead to intractable models, but ignoring them can lead to reduced performance. A solution to this problem is to directly model the conditional distribution p(y|x), which is sufficient for classification. This is the approach taken by conditional random fields [Lafferty et al., 2001]. A conditional random field is simply a conditional distribution p(y|x) with an associated graphical structure. Because the model is", "title": "" }, { "docid": "e4dd72a52d4961f8d4d8ee9b5b40d821", "text": "Social media users spend several hours a day to read, post and search for news on microblogging platforms. Social media is becoming a key means for discovering news. However, verifying the trustworthiness of this information is becoming even more challenging. In this study, we attempt to address the problem of rumor detection and belief investigation on Twitter. Our definition of rumor is an unverifiable statement, which spreads misinformation or disinformation. We adopt a supervised rumors classification task using the standard dataset. By employing the Tweet Latent Vector (TLV) feature, which creates a 100-d vector representative of each tweet, we increased the rumor retrieval task precision up to 0.972. We also introduce the belief score and study the belief change among the rumor posters between 2010 and 2016.", "title": "" }, { "docid": "7641f8f3ed2afd0c16665b44c1216e79", "text": "In this article we explore the behavior of Twitter users under an emergency situation. In particular, we analyze the activity related to the 2010 earthquake in Chile and characterize Twitter in the hours and days following this disaster. Furthermore, we perform a preliminary study of certain social phenomenons, such as the dissemination of false rumors and confirmed news. We analyze how this information propagated through the Twitter network, with the purpose of assessing the reliability of Twitter as an information source under extreme circumstances. Our analysis shows that the propagation of tweets that correspond to rumors differs from tweets that spread news because rumors tend to be questioned more than news by the Twitter community. 
This result shows that it is possible to detect rumors by using aggregate analysis on tweets.", "title": "" }, { "docid": "f2478e4b1156e112f84adbc24a649d04", "text": "Community Question Answering (cQA) provides new interesting research directions to the traditional Question Answering (QA) field, e.g., the exploitation of the interaction between users and the structure of related posts. In this context, we organized SemEval2015 Task 3 on Answer Selection in cQA, which included two subtasks: (a) classifying answers as good, bad, or potentially relevant with respect to the question, and (b) answering a YES/NO question with yes, no, or unsure, based on the list of all answers. We set subtask A for Arabic and English on two relatively different cQA domains, i.e., the Qatar Living website for English, and a Quran-related website for Arabic. We used crowdsourcing on Amazon Mechanical Turk to label a large English training dataset, which we released to the research community. Thirteen teams participated in the challenge with a total of 61 submissions: 24 primary and 37 contrastive. The best systems achieved an official score (macro-averaged F1) of 57.19 and 63.7 for the English subtasks A and B, and 78.55 for the Arabic subtask A.", "title": "" } ]
[ { "docid": "bdadf0088654060b3f1c749ead0eea6e", "text": "This article gives an introduction and overview of the field of pervasive gaming, an emerging genre in which traditional, real-world games are augmented with computing functionality, or, depending on the perspective, purely virtual computer entertainment is brought back to the real world.The field of pervasive games is diverse in the approaches and technologies used to create new and exciting gaming experiences that profit by the blend of real and virtual game elements. We explicitly look at the pervasive gaming sub-genres of smart toys, affective games, tabletop games, location-aware games, and augmented reality games, and discuss them in terms of their benefits and critical issues, as well as the relevant technology base.", "title": "" }, { "docid": "9bdee31e49213cd33d157b61ea788230", "text": "Situational understanding (SU) requires a combination of insight — the ability to accurately perceive an existing situation — and foresight — the ability to anticipate how an existing situation may develop in the future. SU involves information fusion as well as model representation and inference. Commonly, heterogenous data sources must be exploited in the fusion process: often including both hard and soft data products. In a coalition context, data and processing resources will also be distributed and subjected to restrictions on information sharing. It will often be necessary for a human to be in the loop in SU processes, to provide key input and guidance, and to interpret outputs in a way that necessitates a degree of transparency in the processing: systems cannot be “black boxes”. In this paper, we characterize the Coalition Situational Understanding (CSU) problem in terms of fusion, temporal, distributed, and human requirements. There is currently significant interest in deep learning (DL) approaches for processing both hard and soft data. We analyze the state-of-the-art in DL in relation to these requirements for CSU, and identify areas where there is currently considerable promise, and key gaps.", "title": "" }, { "docid": "9592fc0ec54a5216562478414dc68eb4", "text": "We consider the problem of finding the best arm in a stochastic multi-armed bandit game. The regret of a forecaster is here defined by the gap between the mean reward of the optimal arm and the mean reward of the ultimately chosen arm. We propose a highly exploring UCB policy and a new algorithm based on successive rejects. We show that these algorithms are essentially optimal since their regret decreases exponentially at a rate which is, up to a logarithmic factor, the best possible. However, while the UCB policy needs the tuning of a parameter depending on the unobservable hardness of the task, the successive rejects policy benefits from being parameter-free, and also independent of the scaling of the rewards. As a by-product of our analysis, we show that identifying the best arm (when it is unique) requires a number of samples of order (up to a log(K) factor) ∑ i 1/∆ 2 i , where the sum is on the suboptimal arms and ∆i represents the difference between the mean reward of the best arm and the one of arm i. This generalizes the well-known fact that one needs of order of 1/∆ samples to differentiate the means of two distributions with gap ∆.", "title": "" }, { "docid": "1ca692464d5d7f4e61647bf728941519", "text": "During natural vision, the entire visual field is stimulated by images rich in spatiotemporal structure. 
Although many visual system studies restrict stimuli to the classical receptive field (CRF), it is known that costimulation of the CRF and the surrounding nonclassical receptive field (nCRF) increases neuronal response sparseness. The cellular and network mechanisms underlying increased response sparseness remain largely unexplored. Here we show that combined CRF + nCRF stimulation increases the sparseness, reliability, and precision of spiking and membrane potential responses in classical regular spiking (RS(C)) pyramidal neurons of cat primary visual cortex. Conversely, fast-spiking interneurons exhibit increased activity and decreased selectivity during CRF + nCRF stimulation. The increased sparseness and reliability of RS(C) neuron spiking is associated with increased inhibitory barrages and narrower visually evoked synaptic potentials. Our experimental observations were replicated with a simple computational model, suggesting that network interactions among neuronal subtypes ultimately sharpen recurrent excitation, producing specific and reliable visual responses.", "title": "" }, { "docid": "83c0e0c81a809314e93471e9bcd6aabe", "text": "A rail-to-rail amplifier with an offset cancellation, which is suitable for high color depth and high-resolution liquid crystal display (LCD) drivers, is proposed. The amplifier incorporates dual complementary differential pairs, which are classified as main and auxiliary transconductance amplifiers, to obtain a full input voltage swing and an offset canceling capability. Both offset voltage and injection-induced error, due to the device mismatch and charge injection, respectively, are greatly reduced. The offset cancellation and charge conservation, which is used to reduce the dynamic power consumption, are operated during the same time slot so that the driving period does not need to increase. An experimental prototype amplifier is implemented with 0.35m CMOS technology. The circuit draws 7.5 A static current and exhibits the settling time of 3 s, for a voltage swing of 5 V under a 3.4 k resistance, and a 140 pF capacitance load with a power supply of 5 V. The offset voltage of the amplifier with offset cancellation is 0.48 mV.", "title": "" }, { "docid": "f1773b7fcd2ab70273f096b6da77b7a4", "text": "The senses we call upon when interacting with technology are restricted. We mostly rely on vision and hearing, and increasingly touch, but taste and smell remain largely unused. Although our knowledge about sensory systems and devices has grown rapidly over the past few decades, there is still an unmet challenge in understanding people's multisensory experiences in HCI. The goal is that by understanding the ways in which our senses process information and how they relate to one another, it will be possible to create richer experiences for human-­‐ technology interactions. To meet this challenge, we need specific actions within the HCI community. First, we must determine which tactile, gustatory, and olfactory experiences we can design for, and how to meaningfully stimulate them when people interact with technology. Second, we need to build on previous frameworks for multisensory design while also creating new ones. Third, we need to design interfaces that allow the stimulation of unexplored sensory inputs (e.g., digital smell), as well as interfaces that take into account the relationships between the senses (e.g., integration of taste and smell into flavor). 
Finally, it is vital to understand what limitations come into play when users need to monitor information from more than one sense simultaneously. Though much development is needed, in recent years we have witnessed progress in multisensory experiences involving touch. It is key for HCI to leverage the full range of tactile sensations (vibrations, pressure, force, balance, heat, coolness/wetness, electric shocks, pain and itch, etc.), taking into account the active and passive modes of touch and its integration with the other senses. This will undoubtedly provide new tools for interactive experience design, and will help to uncover the fine granularity of sensory stimulation and emotional responses.", "title": "" }, { "docid": "d00691959822087a1bddc3b411d27239", "text": "We consider the lattice Boltzmann method for immiscible multiphase flow simulations. Classical lattice Boltzmann methods for this problem, e.g. the colour gradient method or the free energy approach, can only be applied when density and viscosity ratios are small. Moreover, they use additional fields defined on the whole domain to describe the different phases and model phase separation by special interactions at each node. In contrast, our approach simulates the flow using a single field and separates the fluid phases by a free moving interface. The scheme is based on the lattice Boltzmann method and uses the level set method to compute the evolution of the interface. To couple the fluid phases, we develop new boundary conditions which realise the macroscopic jump conditions at the interface and incorporate surface tension in the lattice Boltzmann framework. Various simulations are presented to validate the numerical scheme, e.g. two-phase channel flows, the Young-Laplace law for a bubble and viscous fingering in a Hele-Shaw cell. The results show that the method is feasible over a wide range of density and viscosity differences.", "title": "" }, { "docid": "6e00567c5c33d899af9b5a67e37711a3", "text": "The adoption of cloud computing facilities and programming models differs vastly between different application domains. Scalable web applications, low-latency mobile backends and on-demand provisioned databases are typical cases for which cloud services on the platform or infrastructure level exist and are convincing when considering technical and economical arguments. Applications with specific processing demands, including high-performance computing, high-throughput computing and certain flavours of scientific computing, have historically required special configurations such as computeor memory-optimised virtual machine instances. With the rise of function-level compute instances through Function-as-a-Service (FaaS) models, the fitness of generic configurations needs to be re-evaluated for these applications. We analyse several demanding computing tasks with regards to how FaaS models compare against conventional monolithic algorithm execution. Beside the comparison, we contribute a refined FaaSification process for legacy software and provide a roadmap for future work. 1 Research Direction The ability to turn programmed functions or methods into ready-to-use cloud services is leading to a seemingly serverless development and deployment experience for application software engineers [1]. Without the necessity to allocate resources beforehand, prototyping new features and workflows becomes faster and more convenient to application service providers. 
These advantages have given boost to an industry trend consequently called Serverless Computing. The more precise, almost overlapping term in accordance with Everything-asa-Service (XaaS) cloud computing taxonomies is Function-as-a-Service (FaaS) [4]. In the FaaS layer, functions, either on the programming language level or as abstract concept around binary implementations, are executed synchronously or asynchronously through multi-protocol triggers. Function instances are provisioned on demand through coldstart or warmstart of the implementation in conjunction with an associated configuration in few milliseconds, elastically scaled as needed, and charged per invocation and per product of period of time and resource usage, leading to an almost perfect pay-as-you-go utility pricing model [11]. FaaS is gaining traction primarily in three areas. First, in Internet-of-Things applications where connected devices emit data sporadically. Second, for web applications with light-weight backend tasks. Third, as glue code between other cloud computing services. In contrast to the industrial popularity, no work is known to us which explores its potential for scientific and high-performance computing applications with more demanding execution requirements. From a cloud economics and strategy perspective, FaaS is a refinement of the platform layer (PaaS) with particular tools and interfaces. Yet from a software engineering and deployment perspective, functions are complementing other artefact types which are deployed into PaaS or underlying IaaS environments. Fig. 1 explains this positioning within the layered IaaS, PaaS and SaaS service classes, where the FaaS runtime itself is subsumed under runtime stacks. Performing experimental or computational science research with FaaS implies that the two roles shown, end user and application engineer, are adopted by a single researcher or a team of researchers, which is the setting for our research. Fig. 1. Positioning of FaaS in cloud application development The necessity to conduct research on FaaS for further application domains stems from the unique execution characteristics. Service instances are heuristically stateless, ephemeral, and furthermore limited in resource allotment and execution time. They are moreover isolated from each other and from the function management and control plane. In public commercial offerings, they are billed in subsecond intervals and terminated after few minutes, but as with any cloud application, private deployments are also possible. Hence, there is a trade-off between advantages and drawbacks which requires further analysis. For example, existing parallelisation frameworks cannot easily be used at runtime as function instances can only, in limited ways, invoke other functions without the ability to configure their settings. Instead, any such parallelisation needs to be performed before deployment with language-specific tools such as Pydron for Python [10] or Calvert’s compiler for Java [3]. For resourceand time-demanding applications, no special-purpose FaaS instances are offered by commercial cloud providers. This is a surprising observation given the multitude of options in other cloud compute services beyond general-purpose offerings, especially on the infrastructure level (IaaS). These include instance types optimised for data processing (with latest-generation processors and programmable GPUs), for memory allocation, and for non-volatile storage (with SSDs). 
Amazon Web Services (AWS) alone offers 57 different instance types. Our work is therefore concerned with the assessment of how current generic one-size-fits-all FaaS offerings handle scientific computing workloads, whether the proliferation of specialised FaaS instance types can be expected and how they would differ from commonly offered IaaS instance types. In this paper, we contribute specifically (i) a refined view on how software can be made fitting into special-purpose FaaS contexts with a high degree of automation through a process named FaaSification, and (ii) concepts and tools to execute such functions in constrained environments. In the remainder of the paper, we first present background information about FaaS runtimes, including our own prototypes which allow for providerindependent evaluations. Subsequently, we present four domain-specific scientific experiments conducted using FaaS to gain broad knowledge about resource requirements beyond general-purpose instances. We summarise the findings and reason about the implications for future scientific computing infrastructures. 2 Background on Function-as-a-Service 2.1 Programming Models and Runtimes The characteristics of function execution depend primarily on the FaaS runtime in use. There are broadly three categories of runtimes: 1. Proprietary commercial services, such as AWS Lambda, Google Cloud Functions, Azure Functions and Oracle Functions. 2. Open source alternatives with almost matching interfaces and functionality, such as Docker-LambCI, Effe, Google Cloud Functions Emulator and OpenLambda [6], some of which focus on local testing rather than operation. 3. Distinct open source implementations with unique designs, such as Apache OpenWhisk, Kubeless, IronFunctions and Fission, some of which are also available as commercial services, for instance IBM Bluemix OpenWhisk [5]. The uniqueness is a consequence of the integration with other cloud stacks (Kubernetes, OpenStack), the availability of web and command-line interfaces, the set of triggers and the level of isolation in multi-tenant operation scenarios, which is often achieved through containers. In addition, due to the often non-trivial configuration of these services, a number of mostly service-specific abstraction frameworks have become popular among developers, such as PyWren, Chalice, Zappa, Apex and the Serverless Framework [8]. The frameworks and runtimes differ in their support for programming languages, but also in the function signatures, parameters and return values. Hence, a comparison of the entire set of offerings requires a baseline. The research in this paper is congruously conducted with the mentioned commercial FaaS providers as well as with our open-source FaaS tool Snafu which allows for managing, executing and testing functions across provider-specific interfaces [14]. The service ecosystem relationship between Snafu and the commercial FaaS providers is shown in Fig. 2. Snafu is able to import services from three providers (AWS Lambda, IBM Bluemix OpenWhisk, Google Cloud Functions) and furthermore offers a compatible control plane to all three of them in its current implementation version. At its core, it contains a modular runtime environment with prototypical maturity for functions implemented in JavaScript, Java, Python and C. Most importantly, it enables repeatable research as it can be deployed as a container, in a virtual machine or on a bare metal workstation. 
Notably absent from the categories above are FaaS offerings in e-science infrastructures and research clouds, despite the programming model resembling widely used job submission systems. We expect our practical research contributions to overcome this restriction in a vendor-independent manner. Snafu, for instance, is already available as an alpha-version launch profile in the CloudLab testbed federated across several U.S. installations with a total capacity of almost 15000 cores [12], as well as in EGI’s federated cloud across Europe. Fig. 2. Snafu and its ecosystem and tooling Using Snafu, it is possible to adhere to the diverse programming conventions and execution conditions at commercial services while at the same time controlling and lifting the execution restrictions as necessary. In particular, it is possible to define memory-optimised, storage-optimised and compute-optimised execution profiles which serve to conduct the anticipated research on generic (general-purpose) versus specialised (special-purpose) cloud offerings for scientific computing. Snafu can execute in single process mode as well as in a loadbalancing setup where each request is forwarded by the master instance to a slave instance which in turn executes the function natively, through a languagespecific interpreter or through a container. Table 1 summarises the features of selected FaaS runtimes. Table 1. FaaS runtimes and their features Runtime Languages Programming model Import/Export AWS Lambda JavaScript, Python, Java, C# Lambda – Google Cloud Functions JavaScrip", "title": "" }, { "docid": "088cb7992c1d7910151b1008a70e5cd1", "text": "Cable-actuated parallel manipulators (CPMs) rely on cables instead of rigid links to manipulate the moving platform in the taskspace. Upper and lower bounds imposed on the cable tensions limit the force capability in CPMs and render certain forces infeasible at the end effector. This paper presents a geometrical analysis of the problems to 1) determine whether a CPM is capable of balancing a given wrench within the cable tension limits (feasibility check); 2) minimize the 2-norm of the cable tensions that balance feasible wrenches; and 3) check for the existence of an all-positive nullspace vector, which is a necessary condition to have a wrench-closure configuration in CPMs. The unified approach used in this analysis is systematic and geometrically intuitive that is based on the formulation of the static force equilibrium problem as an intersection between two convex sets and the application of Dykstra's alternating projection algorithm to find the projection of a point onto that intersection. In the case of infeasible wrenches, the algorithm can determine whether the infeasibility is because of the cable tension limits or the non-wrench-closure configuration. For the former case, a method was developed by which this algorithm can be used to extend the cable tension limits to balance infeasible wrenches. In addition, the performance of the algorithm is explained in the case of incompletely restrained cable-driven manipulators and the case of manipulators at singular poses. This paper also discusses the algorithm convergence and termination rule. 
This geometrical and systematic approach is intended for use as a convenient tool for cable tension analysis during design.", "title": "" }, { "docid": "0472c8c606024aaf2700dee3ad020c07", "text": "Any discussion on exchange rate movements and forecasting should include explanatory variables from both the current account and the capital account of the balance of payments. In this paper, we include such factors to forecast the value of the Indian rupee vis a vis the US Dollar. Further, factors reflecting political instability and lack of mechanism for enforcement of contracts that can affect both direct foreign investment and also portfolio investment, have been incorporated. The explanatory variables chosen are the 3 month Rupee Dollar futures exchange rate (FX4), NIFTY returns (NIFTYR), Dow Jones Industrial Average returns (DJIAR), Hang Seng returns (HSR), DAX returns (DR), crude oil price (COP), CBOE VIX (CV) and India VIX (IV). To forecast the exchange rate, we have used two different classes of frameworks namely, Artificial Neural Network (ANN) based models and Time Series Econometric models. Multilayer Feed Forward Neural Network (MLFFNN) and Nonlinear Autoregressive models with Exogenous Input (NARX) Neural Network are the approaches that we have used as ANN models. Generalized Autoregressive Conditional Heteroskedastic (GARCH) and Exponential Generalized Autoregressive Conditional Heteroskedastic (EGARCH) techniques are the ones that we have used as Time Series Econometric methods. Within our framework, our results indicate that, although the two different approaches are quite efficient in forecasting the exchange rate, MLFNN and NARX are the most efficient. Journal of Insurance and Financial Management ARTICLE INFO JEL Classification: C22 C45 C63 F31 F47", "title": "" }, { "docid": "d6b87f5b6627f1a1ac5cc951c7fe0f28", "text": "Despite a strong nonlinear behavior and a complex design, the interior permanent-magnet (IPM) machine is proposed as a good candidate among the PM machines owing to its interesting peculiarities, i.e., higher torque in flux-weakening operation, higher fault tolerance, and ability to adopt low-cost PMs. A second trend in designing PM machines concerns the adoption of fractional-slot (FS) nonoverlapped coil windings, which reduce the end winding length and consequently the Joule losses and the cost. Therefore, the adoption of an IPM machine with an FS winding aims to combine both advantages: high torque and efficiency in a wide operating region. However, the combination of an anisotropic rotor and an FS winding stator causes some problems. The interaction between the magnetomotive force harmonics due to the stator current and the rotor anisotropy causes a very high torque ripple. This paper illustrates a procedure in designing an IPM motor with the FS winding exhibiting a low torque ripple. The design strategy is based on two consecutive steps: at first, the winding is optimized by taking a multilayer structure, and then, the rotor geometry is optimized by adopting a nonsymmetric structure. As an example, a 12-slot 10-pole IPM machine is considered, achieving a torque ripple lower than 1.5% at full load.", "title": "" }, { "docid": "66c57a94a5531b36199bd52521a56ccb", "text": "This project describes design and experimental analysis of composite leaf spring made of glass fiber reinforced polymer. The objective is to compare the load carrying capacity, stiffness and weight savings of composite leaf spring with that of steel leaf spring. 
The design constraints are stresses and deflections. The dimensions of an existing conventional steel leaf spring of a light commercial vehicle are taken. Same dimensions of conventional leaf spring are used to fabricate a composite multi leaf spring using E-Glass/Epoxy unidirectional laminates. Static analysis of 2-D model of conventional leaf spring is also performed using ANSYS 10 and compared with experimental results. Finite element analysis with full load on 3-D model of composite multi leaf spring is done using ANSYS 10 and the analytical results are compared with experimental results. Compared to steel spring, the composite leaf spring is found to have 67.35% lesser stress, 64.95% higher stiffness and 126.98% higher natural frequency than that of existing steel leaf spring. A weight reduction of 76.4% is achieved by using optimized composite leaf spring.", "title": "" }, { "docid": "b07f858d08f40f61f3ed418674948f12", "text": "Nowadays, due to the great distance between design and implementation worlds, different skills are necessary to create a game system. To solve this problem, a lot of strategies for game development, trying to increase the abstraction level necessary for the game production, were proposed. In this way, a lot of game engines, game frameworks and others, in most cases without any compatibility or reuse criteria between them, were developed. This paper presents a new generative programming approach, able to increase the production of a digital game by the integration of different game development artifacts, following a system family strategy focused on variable and common aspects of a computer game. As result, high level abstractions of games, based on a common language, can be used to configure met programming transformations during the game production, providing a great compatibility level between game domain and game implementation artifacts.", "title": "" }, { "docid": "d63946a096b9e8a99be6d5ddfe4097da", "text": "While the first open comparative challenges in the field of paralinguistics targeted more ‘conventional’ phenomena such as emotion, age, and gender, there still exists a multiplicity of not yet covered, but highly relevant speaker states and traits. The INTERSPEECH 2011 Speaker State Challenge thus addresses two new sub-challenges to overcome the usually low compatibility of results: In the Intoxication Sub-Challenge, alcoholisation of speakers has to be determined in two classes; in the Sleepiness Sub-Challenge, another two-class classification task has to be solved. This paper introduces the conditions, the Challenge corpora “Alcohol Language Corpus” and “Sleepy Language Corpus”, and a standard feature set that may be used. Further, baseline results are given.", "title": "" }, { "docid": "0b6ce2e4f3ef7f747f38068adef3da54", "text": "Network throughput can be increased by allowing multipath, adaptive routing. Adaptive routing allows more freedom in the paths taken by messages, spreading load over physical channels more evenly. The flexibility of adaptive routing introduces new possibilities of deadlock. Previous deadlock avoidance schemes in k-ary n-cubes require an exponential number of virtual channels, independent of network size and dimension. Planar adaptive routing algorithms reduce the complexity of deadlock prevention by reducing the number of choices at each routing step. In the fault-free case, planar-adaptive networks are guaranteed to be deadlock-free. 
In the presence of network faults, the planar-adaptive router can be extended with misrouting to produce a working network which remains provably deadlock free and is provably livelock free. In addition, planar adaptive networks can simultaneously support both in-order and adaptive, out-of-order packet delivery.\nPlanar-adaptive routing is of practical significance. It provides the simplest known support for deadlock-free adaptive routing in k-ary n-cubes of more than two dimensions (with k > 2). Restricting adaptivity reduces the hardware complexity, improving router speed or allowing additional performance-enhancing network features. The structure of planar-adaptive routers is amenable to efficient implementation.", "title": "" }, { "docid": "e858a3bda1ac2568afa328cd4352c804", "text": "Bilingual advantages in executive control tasks are well documented, but it is not yet clear what degree or type of bilingualism leads to these advantages. To investigate this issue, we compared the performance of two bilingual groups and monolingual speakers in task-switching and language-switching paradigms. Spanish-English bilinguals, who reported switching between languages frequently in daily life, exhibited smaller task-switching costs than monolinguals after controlling for between-group differences in speed and parent education level. By contrast, Mandarin-English bilinguals, who reported switching languages less frequently than Spanish-English bilinguals, did not exhibit a task-switching advantage relative to monolinguals. Comparing the two bilingual groups in language-switching, Spanish-English bilinguals exhibited smaller costs than Mandarin-English bilinguals, even after matching for fluency in the non-dominant language. These results demonstrate an explicit link between language-switching and bilingual advantages in task-switching, while also illustrating some limitations on bilingual advantages.", "title": "" }, { "docid": "8109594325601247cdb253dbb76b9592", "text": "Disturbance compensation is one of the major problems in control system design. Due to external disturbance or model uncertainty that can be treated as disturbance, all control systems are subject to disturbances. When it comes to networked control systems, not only disturbances but also time delay is inevitable where controllers are remotely connected to plants through communication network. Hence, simultaneous compensation for disturbance and time delay is important. Prior work includes a various combinations of smith predictor, internal model control, and disturbance observer tailored to simultaneous compensation of both time delay and disturbance. In particular, simplified internal model control simultaneously compensates for time delay and disturbances. But simplified internal model control is not applicable to the plants that have two poles at the origin. We propose a modified simplified internal model control augmented with disturbance observer which simultaneously compensates time delay and disturbances for the plants with two poles at the origin. Simulation results are provided.", "title": "" }, { "docid": "81126b57a29b4c9aee46ecb04c7f43ca", "text": "Within the field of bibliometrics, there is sustained interest in how nations “compete” in terms of academic disciplines, and what determinants explain why countries may have a specific advantage in one discipline over another. 
However, this literature has not, to date, presented a comprehensive structured model that could be used in the interpretation of a country’s research profile and academic output. In this paper, we use frameworks from international business and economics to present such a model. Our study makes four major contributions. First, we include a very wide range of countries and disciplines, explicitly including the Social Sciences, which unfortunately are excluded in most bibliometrics studies. Second, we apply theories of revealed comparative advantage and the competitive advantage of nations to academic disciplines. Third, we cluster our 34 countries into five different groups that have distinct combinations of revealed comparative advantage in five major disciplines. Finally, based on our empirical work and prior literature, we present an academic diamond that details factors likely to explain a country’s research profile and competitiveness in certain disciplines.", "title": "" }, { "docid": "5aee510b62d8792a38044fc8c68a57e4", "text": "In this paper we present a novel method for jointly extracting beats and downbeats from audio signals. A recurrent neural network operating directly on magnitude spectrograms is used to model the metrical structure of the audio signals at multiple levels and provides an output feature that clearly distinguishes between beats and downbeats. A dynamic Bayesian network is then used to model bars of variable length and align the predicted beat and downbeat positions to the global best solution. We find that the proposed model achieves state-of-the-art performance on a wide range of different musical genres and styles.", "title": "" }, { "docid": "87a11f6097cb853b7c98e17cdf97801e", "text": "Recent work has shown that recurrent neural networks (RNNs) can implicitly capture and exploit hierarchical information when trained to solve common natural language processing tasks (Blevins et al., 2018) such as language modeling (Linzen et al., 2016; Gulordava et al., 2018) and neural machine translation (Shi et al., 2016). In contrast, the ability to model structured data with non-recurrent neural networks has received little attention despite their success in many NLP tasks (Gehring et al., 2017; Vaswani et al., 2017). In this work, we compare the two architectures—recurrent versus non-recurrent—with respect to their ability to model hierarchical structure and find that recurrency is indeed important for this purpose. The code and data used in our experiments is available at https://github.com/", "title": "" }
]
scidocsrr
bcbe99733d48107626df7954b4ef2526
Smart tourism: foundations and developments
[ { "docid": "72221bf6d95f297449fd2c7b646488e9", "text": "Recent changes in service environments have changed the preconditions of their production and consumption. These changes include unbundling services from production processes, growth of the information-rich economy and society, the search for creativity in service production and consumption and continuing growth of digital technologies. These contextual changes affect city governments because they provide a range of infrastructure and welfare services to citizens. Concepts such as ‘smart city’, ‘intelligent city’ and ‘knowledge city’ build new horizons for cities in undertaking their challenging service functions in an increasingly cost-conscious, competitive and environmentally oriented setting. What is essential in practically all of them is that they paint a picture of cities with smooth information processes, facilitation of creativity and innovativeness, and smart and sustainable solutions promoted through service platforms. This article discusses this topic, starting from the nature of services and the new service economy as the context of smart local public services. On this basis, we build an overall framework for understanding the basic forms and dimensions of smart public services. The focus is on conceptual systematisation of the key dimensions of smart services and the conceptual modelling of smart service platforms through which digital technology is increasingly embedded in social creativity. We provide examples of real-life smart service applications within the European context.", "title": "" }, { "docid": "503ddcf57b4e7c1ddc4f4646fb6ca3db", "text": "Merging the virtual World Wide Web with nearby physical devices that are part of the Internet of Things gives anyone with a mobile device and the appropriate authorization the power to monitor or control anything.", "title": "" }, { "docid": "d81e35229c0fc0b9c7d498a254a4d6be", "text": "Recent advances in the field of technology have led to the emergence of innovative technological smart solutions providing unprecedented opportunities for application in the tourism and hospitality industry. With intensified competition in the tourism market place, it has become paramount for businesses to explore the potential of technologies, not only to optimize existing processes but facilitate the creation of more meaningful and personalized services and experiences. This study aims to bridge the current knowledge gap between smart technologies and experience personalization to understand how smart mobile technologies can facilitate personalized experiences in the context of the hospitality industry. By adopting a qualitative case study approach, this paper makes a two-fold contribution; it a) identifies the requirements of smart technologies for experience creation, including information aggregation, ubiquitous mobile connectedness and real time synchronization and b) highlights how smart technology integration can lead to two distinct levels of personalized tourism experiences. The paper concludes with the development of a model depicting the dynamic process of experience personalization and a discussion of the strategic implications for tourism and hospitality management and research.", "title": "" } ]
[ { "docid": "e33080761e4ece057f455148c7329d5e", "text": "This paper compares the utilization of ConceptNet and WordNet in query expansion. Spreading activation selects candidate terms for query expansion from these two resources. Three measures including discrimination ability, concept diversity, and retrieval performance are used for comparisons. The topics and document collections in the ad hoc track of TREC-6, TREC-7 and TREC-8 are adopted in the experiments. The results show that ConceptNet and WordNet are complementary. Queries expanded with WordNet have higher discrimination ability. In contrast, queries expanded with ConceptNet have higher concept diversity. The performance of queries expanded by selecting the candidate terms from ConceptNet and WordNet outperforms that of queries without expansion, and queries expanded with a single resource.", "title": "" }, { "docid": "00ff2d5e2ca1d913cbed769fe59793d4", "text": "In recent work, we showed that putatively adaptive emotion regulation strategies, such as reappraisal and acceptance, have a weaker association with psychopathology than putatively maladaptive strategies, such as rumination, suppression, and avoidance (e.g., Aldao & Nolen-Hoeksema, 2010; Aldao, Nolen-Hoeksema, & Schweizer, 2010). In this investigation, we examined the interaction between adaptive and maladaptive emotion regulation strategies in the prediction of psychopathology symptoms (depression, anxiety, and alcohol problems) concurrently and prospectively. We assessed trait emotion regulation and psychopathology symptoms in a sample of community residents at Time 1 (N = 1,317) and then reassessed psychopathology at Time 2 (N = 1,132). Cross-sectionally, we found that the relationship between adaptive strategies and psychopathology symptoms was moderated by levels of maladaptive strategies: adaptive strategies had a negative association with psychopathology symptoms only at high levels of maladaptive strategies. In contrast, adaptive strategies showed no prospective relationship to psychopathology symptoms either alone or in interaction with maladaptive strategies. We discuss the implications of this investigation for future work on the contextual factors surrounding the deployment of emotion regulation strategies.", "title": "" }, { "docid": "27e10b0ba009a8b86431a808e712d761", "text": "In this work, we propose using camera arrays coupled with coherent illumination as an effective method of improving spatial resolution in long distance images by a factor often and beyond. Recent advances in ptychography have demonstrated that one can image beyond the diffraction limit of the objective lens in a microscope. We demonstrate a similar imaging system to image beyond the diffraction limit in long range imaging. We emulate a camera array with a single camera attached to an XY translation stage. We show that an appropriate phase retrieval based reconstruction algorithm can be used to effectively recover the lost high resolution details from the multiple low resolution acquired images. We analyze the effects of noise, required degree of image overlap, and the effect of increasing synthetic aperture size on the reconstructed image quality. We show that coherent camera arrays have the potential to greatly improve imaging performance. Our simulations show resolution gains of 10× and more are achievable. Furthermore, experimental results from our proof-of-concept systems show resolution gains of 4 × -7× for real scenes. 
All experimental data and code is made publicly available on the project webpage. Finally, we introduce and analyze in simulation a new strategy to capture macroscopic Fourier Ptychography images in a single snapshot, albeit using a camera array.", "title": "" }, { "docid": "6de8ae942642948928028da20dd548d5", "text": "This paper describes the design, construction, and operation of a closed-loop spherical induction motor (SIM) ball wheel for a balancing mobile robot (ballbot). Following earlier work, this new design has a smaller rotor and higher torques due to the use of six stators in a skewed layout. Actuation and sensing kinematics as well as control methods are presented. In its current implementation, torques of up to 8 Nm are produced by the motor with rise and decay times of 100 ms. Results are presented supporting its potential as a prime mover for mobile robots.", "title": "" }, { "docid": "001764b6037862def1e37fec85984293", "text": "We present a basic technique to fill-in missing parts of a video sequence taken from a static camera. Two important cases are considered. The first case is concerned with the removal of non-stationary objects that occlude stationary background. We use a priority based spatio-temporal synthesis scheme for inpainting the stationary background. The second and more difficult case involves filling-in moving objects when they are partially occluded. For this, we propose a priority scheme to first inpaint the occluded moving objects and then fill-in the remaining area with stationary background using the method proposed for the first case. We use as input an optical-flow based mask, which tells if an undamaged pixel is moving or is stationary. The moving object is inpainted by copying patches from undamaged frames, and this copying is independent of the background of the moving object in either frame. This work has applications in a variety of different areas, including video special effects and restoration and enhancement of damaged videos. The examples shown in the paper illustrate these ideas.", "title": "" }, { "docid": "7355bf66dac6e027c1d6b4c2631d8780", "text": "Cannabidiol is a component of marijuana that does not activate cannabinoid receptors, but moderately inhibits the degradation of the endocannabinoid anandamide. We previously reported that an elevation of anandamide levels in cerebrospinal fluid inversely correlated to psychotic symptoms. Furthermore, enhanced anandamide signaling let to a lower transition rate from initial prodromal states into frank psychosis as well as postponed transition. In our translational approach, we performed a double-blind, randomized clinical trial of cannabidiol vs amisulpride, a potent antipsychotic, in acute schizophrenia to evaluate the clinical relevance of our initial findings. Either treatment was safe and led to significant clinical improvement, but cannabidiol displayed a markedly superior side-effect profile. Moreover, cannabidiol treatment was accompanied by a significant increase in serum anandamide levels, which was significantly associated with clinical improvement. The results suggest that inhibition of anandamide deactivation may contribute to the antipsychotic effects of cannabidiol potentially representing a completely new mechanism in the treatment of schizophrenia.", "title": "" }, { "docid": "ea8c0a7516b180a6a542a852b62e6497", "text": "Genetic growth curves of boars in a test station were predicted on daily weight records collected by automated weighing scales. 
The data contained 121 865 observations from 1477 Norwegian Landrace boars and 108 589 observations from 1300 Norwegian Duroc boars. Random regression models using Legendre polynomials up to second order for weight at different ages were compared for best predicting ability and Bayesian information criterion (BIC) for both breeds. The model with second-order polynomials had best predictive ability and BIC. The heritability for weight, based on this model, was found to vary along the growth trajectory between 0.32-0.35 for Duroc and 0.17-0.25 for Landrace. By varying test length possibility to use shorter test time and pre-selection was tested. Test length was varied and compared with average termination at 100 kg, termination of the test at 90 kg gives, e.g. 2% reduction in accuracy of estimated breeding values (EBV) for both breeds and termination at 80 kg gives 5% reduction in accuracy of EBVs for Landrace and 3% for Duroc. A shorter test period can decrease test costs per boar, but also gives possibilities to increase selection intensity as there will be room for testing more boars.", "title": "" }, { "docid": "44d8cb42bd4c2184dc226cac3adfa901", "text": "Several descriptions of redundancy are presented in the literature , often from widely dif ferent perspectives . Therefore , a discussion of these various definitions and the salient points would be appropriate . In particular , any definition and redundancy needs to cover the following issues ; the dif ference between multiple solutions and an infinite number of solutions ; degenerate solutions to inverse kinematics ; task redundancy ; and the distinction between non-redundant , redundant and highly redundant manipulators .", "title": "" }, { "docid": "917458b0c9e26b878676d1edf542b5ea", "text": "The purpose of this paper is to provide a comprehensive presentation and interpretation of the Ensemble Kalman Filter (EnKF) and its numerical implementation. The EnKF has a large user group, and numerous publications have discussed applications and theoretical aspects of it. This paper reviews the important results from these studies and also presents new ideas and alternative interpretations which further explain the success of the EnKF. In addition to providing the theoretical framework needed for using the EnKF, there is also a focus on the algorithmic formulation and optimal numerical implementation. A program listing is given for some of the key subroutines. The paper also touches upon specific issues such as the use of nonlinear measurements, in situ profiles of temperature and salinity, and data which are available with high frequency in time. An ensemble based optimal interpolation (EnOI) scheme is presented as a cost-effective approach which may serve as an alternative to the EnKF in some applications. A fairly extensive discussion is devoted to the use of time correlated model errors and the estimation of model bias.", "title": "" }, { "docid": "9c799b4d771c724969be7b392697ebee", "text": "Search engines need to model user satisfaction to improve their services. Since it is not practical to request feedback on searchers' perceptions and search outcomes directly from users, search engines must estimate satisfaction from behavioral signals such as query refinement, result clicks, and dwell times. 
This analysis of behavior in the aggregate leads to the development of global metrics such as satisfied result clickthrough (typically operationalized as result-page clicks with dwell time exceeding a particular threshold) that are then applied to all searchers' behavior to estimate satisfac-tion levels. However, satisfaction is a personal belief and how users behave when they are satisfied can also differ. In this paper we verify that searcher behavior when satisfied and dissatisfied is indeed different among individual searchers along a number of dimensions. As a result, we introduce and evaluate learned models of satisfaction for individual searchers and searcher cohorts. Through experimentation via logs from a large commercial Web search engine, we show that our proposed models can predict search satisfaction more accurately than a global baseline that applies the same satisfaction model across all users. Our findings have implications for the study and application of user satisfaction in search systems.", "title": "" }, { "docid": "712335f6cbe0d00fce07d6bb6d600759", "text": "Narrowband Internet of Things (NB-IoT) is a new radio access technology, recently standardized in 3GPP to enable support for IoT devices. NB-IoT offers a range of flexible deployment options and provides improved coverage and support for a massive number of devices within a cell. In this paper, we provide a detailed evaluation of the coverage performance of NBIoT and show that it achieves a coverage enhancement of up to 20 dB when compared with existing LTE technology.", "title": "" }, { "docid": "06b1a00a97eea61ada0d92469254ddbd", "text": "We propose a model for clustering data with spatiotemporal intervals. This model is used to effectively evaluate clusters of spatiotemporal interval data. A new energy function is used to measure similarity and balance between clusters in spatial and temporal dimensions. We employ as a case study a large collection of parking data from a real CBD area. The proposed model is applied to existing traditional algorithms to address spatiotemporal interval data clustering problem. Results from traditional clustering algorithms are compared and analysed using the proposed energy function.", "title": "" }, { "docid": "802f77b4e2b8c8cdfb68f80fe31d7494", "text": "In this article, we use three clustering methods (K-means, self-organizing map, and fuzzy K-means) to find properly graded stock market brokerage commission rates based on the 3-month long total trades of two different transaction modes (representative assisted and online trading system). Stock traders for both modes are classified in terms of the amount of the total trade as well as the amount of trade of each transaction mode, respectively. Results of our empirical analysis indicate that fuzzy K-means cluster analysis is the most robust approach for segmentation of customers of both transaction modes. We then propose a decision tree based rule to classify three groups of customers and suggest different brokerage commission rates of 0.4, 0.45, and 0.5% for representative assisted mode and 0.06, 0.1, and 0.18% for online trading system, respectively. q 2003 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "a57bdfa9c48a76d704258f96874ea700", "text": "BACKGROUND\nPrevious state-of-the-art systems on Drug Name Recognition (DNR) and Clinical Concept Extraction (CCE) have focused on a combination of text \"feature engineering\" and conventional machine learning algorithms such as conditional random fields and support vector machines. However, developing good features is inherently heavily time-consuming. Conversely, more modern machine learning approaches such as recurrent neural networks (RNNs) have proved capable of automatically learning effective features from either random assignments or automated word \"embeddings\".\n\n\nOBJECTIVES\n(i) To create a highly accurate DNR and CCE system that avoids conventional, time-consuming feature engineering. (ii) To create richer, more specialized word embeddings by using health domain datasets such as MIMIC-III. (iii) To evaluate our systems over three contemporary datasets.\n\n\nMETHODS\nTwo deep learning methods, namely the Bidirectional LSTM and the Bidirectional LSTM-CRF, are evaluated. A CRF model is set as the baseline to compare the deep learning systems to a traditional machine learning approach. The same features are used for all the models.\n\n\nRESULTS\nWe have obtained the best results with the Bidirectional LSTM-CRF model, which has outperformed all previously proposed systems. The specialized embeddings have helped to cover unusual words in DrugBank and MedLine, but not in the i2b2/VA dataset.\n\n\nCONCLUSIONS\nWe present a state-of-the-art system for DNR and CCE. Automated word embeddings has allowed us to avoid costly feature engineering and achieve higher accuracy. Nevertheless, the embeddings need to be retrained over datasets that are adequate for the domain, in order to adequately cover the domain-specific vocabulary.", "title": "" }, { "docid": "5c512bf8cb37f3937b27855e03e111d6", "text": "Tensor CANDECOMP/PARAFAC (CP) decomposition has wide applications in statistical learning of latent variable models and in data mining. In this paper, we propose fast and randomized tensor CP decomposition algorithms based on sketching. We build on the idea of count sketches, but introduce many novel ideas which are unique to tensors. We develop novel methods for randomized computation of tensor contractions via FFTs, without explicitly forming the tensors. Such tensor contractions are encountered in decomposition methods such as tensor power iterations and alternating least squares. We also design novel colliding hashes for symmetric tensors to further save time in computing the sketches. We then combine these sketching ideas with existing whitening and tensor power iterative techniques to obtain the fastest algorithm on both sparse and dense tensors. The quality of approximation under our method does not depend on properties such as sparsity, uniformity of elements, etc. We apply the method for topic modeling and obtain competitive results.", "title": "" }, { "docid": "3dd1755e44ecefbc1cc12ad172cec9dd", "text": "s from the hardware actually present. This abstraction happens between the hardware and the software layer of a system, as indicated in Fig. 2.8; which shows two virtual machines mapped to the same hardware and encapsulated by individual containers. Note that virtual machines (VM) can run distinct operating systems atop the same hardware. 
Virtualization typically simplifies the administration of a system, and it can help increase system security; a crash of a virtual machine has no impact on other virtual machines. Technically a virtual machine is nothing but a file. Virtualization is implemented using a Virtual Machine Monitor or Hypervisor which takes care of resource mapping and management. We finally mention another precursor to cloud computing, which can be observed during the past 25 years as a major paradigm shift in software development, namely a departure from large and monolithic software applications to light-weight services which ultimately can be composed and orchestrated into more powerful services that finally carry entire application scenarios. Service-orientation especially in the form of service calls to an open application programming interface (API) that can be contacted over the Web as long as the correct input parameters are delivered have not only become very popular, but are also exploited these days in numerous ways, for the particular reason of giving users an increased level of functionality from a single source. A benefit of the service approach to software development has so far been the fact that platform development especially on the Web has received a high amount of attention in recent years. Yet it has also contributed to the fact that services which a provider delivers behind the scenes to some well-defined interface can be enhanced and modified and even permanently corrected and updated without the user even noticing, and it has triggered the development of the SOA (Service-Oriented Architecture) concept that was mentioned in the previous section. Operating System App. 1 App. 2 Virtualization Layer Operating System App. 3 App. 4 Hardware VM Container VM Container Fig. 2.8 Virtualized infrastructure 2.2 Virtualization and Cloud Computing 73", "title": "" }, { "docid": "e4817273d4601c309a0a5577fafb651f", "text": "This study investigated performance and physiology to understand pacing strategies in elite Paralympic athletes with cerebral palsy (CP). Six Paralympic athletes with CP and 13 able-bodied (AB) athletes performed two trials of eight sets of 10 shuttles (total 1600m). One trial was distance-deceived (DEC, 1000 m + 600 m) one trial was nondeceived (N-DEC, 1600 m). Time (s), heart rate (HR, bpm), ratings of perceived exertion (RPE, units), and electromyography of five bilateral muscles (EMG) were recorded for each set of both trials. The CP group ran slower than the AB group, and pacing differences were seen in the CP DEC trial, presenting as a flat pacing profile over the trial (P < 0.05). HR was higher and RPE was lower in the CP group in both trials (P < 0.05). EMG showed small differences between groups, sides, and trials. The present study provides evidence for a possible pacing strategy underlying exercise performance and fatigue in CP. The results of this study show (1) underperformance of the CP group, and (2) altered pacing strategy utilization in the CP group. We proposed that even at high levels of performance, the residual effects of CP may negatively affect performance through selection of conservative pacing strategies during exercise.", "title": "" }, { "docid": "ad8ebb2f4ec3350a2486a63019557633", "text": "Building a persona-based conversation agent is challenging owing to the lack of large amounts of speaker-specific conversation data for model training. 
This paper addresses the problem by proposing a multi-task learning approach to training neural conversation models that leverages both conversation data across speakers and other types of data pertaining to the speaker and speaker roles to be modeled. Experiments show that our approach leads to significant improvements over baseline model quality, generating responses that capture more precisely speakers’ traits and speaking styles. The model offers the benefits of being algorithmically simple and easy to implement, and not relying on large quantities of data representing specific individual speakers.", "title": "" }, { "docid": "b7e78ca489cdfb8efad03961247e12f2", "text": "ASR short for Automatic Speech Recognition is the process of converting a spoken speech into text that can be manipulated by a computer. Although ASR has several applications, it is still erroneous and imprecise especially if used in a harsh surrounding wherein the input speech is of low quality. This paper proposes a post-editing ASR error correction method and algorithm based on Bing’s online spelling suggestion. In this approach, the ASR recognized output text is spell-checked using Bing’s spelling suggestion technology to detect and correct misrecognized words. More specifically, the proposed algorithm breaks down the ASR output text into several word-tokens that are submitted as search queries to Bing search engine. A returned spelling suggestion implies that a query is misspelled; and thus it is replaced by the suggested correction; otherwise, no correction is performed and the algorithm continues with the next token until all tokens get validated. Experiments carried out on various speeches in different languages indicated a successful decrease in the number of ASR errors and an improvement in the overall error correction rate. Future research can improve upon the proposed algorithm so much so that it can be parallelized to take advantage of multiprocessor computers. KeywordsSpeech Recognition; Error Correction; Bing Spelling", "title": "" }, { "docid": "457efc3b22084fd7221637bd574ff075", "text": "Group-based trajectory models are used to investigate population differences in the developmental courses of behaviors or outcomes . This article demonstrates a new Stata command, traj, for fitting to longitudinal data finite (discrete) mixture models designed to identify clusters of individuals following similar progressions of some behavior or outcome over age or time. Censored normal, Poisson, zero-inflated Poisson, and Bernoulli distributions are supported. Applications to psychometric scale data, count data, and a dichotomous prevalence measure are illustrated. Introduction A developmental trajectory measures the course of an outcome over age or time. The study of developmental trajectories is a central theme of developmental and abnormal psychology and psychiatry, of life course studies in sociology and criminology, of physical and biological outcomes in medicine and gerontology. A wide variety of statistical methods are used to study these phenomena. This article demonstrates a Stata plugin for estimating group-based trajectory models. The Stata program we demonstrate adapts a well-established SAS-based procedure for estimating group-based trajectory model (Jones, Nagin, and Roeder, 2001; Jones and Nagin, 2007) to the Stata platform. Group-based trajectory modeling is a specialized form of finite mixture modeling. 
The method is designed to identify groups of individuals following similar developmental trajectories. For a recent review of applications of group-based trajectory modeling see Nagin and Odgers (2010) and for an extended discussion of the method, including technical details, see Nagin (2005). A Brief Overview of Group-Based Trajectory Modeling. Using finite mixtures of suitably defined probability distributions, the group-based approach for modeling developmental trajectories is intended to provide a flexible and easily applied method for identifying distinctive clusters of individual trajectories within the population and for profiling the characteristics of individuals within the clusters. Thus, whereas the hierarchical and latent curve methodologies model population variability in growth with multivariate continuous distribution functions, the group-based approach utilizes a multinomial modeling strategy. Technically, the group-based trajectory model is an example of a finite mixture model. Maximum likelihood is used for the estimation of the model parameters. The maximization is performed using a general quasi-Newton procedure (Dennis, Gay, and Welsch 1981; Dennis and Mei 1979). The fundamental concept of interest is the distribution of outcomes conditional on age (or time); that is, the distribution of outcome trajectories denoted by $P(Y_i \mid Age_i)$, where the random vector $Y_i$ represents individual i’s longitudinal sequence of behavioral outcomes and the vector $Age_i$ represents individual i’s age when each of those measurements is recorded. The group-based trajectory model assumes that the population distribution of trajectories arises from a finite mixture of unknown order J. (Trajectories can also be defined by time, e.g., time from treatment.) The likelihood for each individual i, conditional on the number of groups J, may be written as $P(Y_i \mid Age_i) = \sum_{j=1}^{J} \pi_j \, P(Y_i \mid Age_i, j; \theta_j)$ (1), where $\pi_j$ is the probability of membership in group j, and the conditional distribution of $Y_i$ given membership in j is indexed by the unknown parameter vector $\theta_j$, which among other things determines the shape of the group-specific trajectory. The trajectory is modeled with up to a 5th-order polynomial function of age (or time). For given j, conditional independence is assumed for the sequential realizations of the elements of $Y_i$, $y_{it}$, over the T periods of measurement. Thus, we may write $P(Y_i \mid Age_i, j; \theta_j) = \prod_{t=1}^{T} p(y_{it} \mid age_{it}, j; \theta_j)$ (2), where p(.) is the distribution of $y_{it}$ conditional on membership in group j and the age of individual i at time t. The software provides three alternative specifications of p(.): the censored normal distribution, also known as the Tobit model, the zero-inflated Poisson distribution, and the binary logit distribution. The censored normal distribution is designed for the analysis of repeatedly measured, (approximately) continuous scales which may be censored by either a scale minimum or maximum or both (e.g., longitudinal data on a scale of depression symptoms). A special case is a scale or other outcome variable with no minimum or maximum. The zero-inflated Poisson distribution is designed for the analysis of longitudinal count data (e.g., arrests by age) and the binary logit distribution for the analysis of longitudinal data on a dichotomous outcome variable (e.g., whether hospitalized in year t or not).
The model also provides the capacity to analyze the effect of time-stable covariates on the probability of group membership and the effect of time-dependent covariates on the trajectory itself. Let $x_i$ denote a vector of time-stable covariates thought to be associated with the probability of trajectory group membership. Effects of time-stable covariates are modeled with a generalized logit function where, without loss of generality:", "title": "" }
]
scidocsrr
7a66359b7fd45cc847d2f450c94d0a22
DOM tree based approach for Web content extraction
[ { "docid": "37b5a10646e741f8b7430a2037f6a472", "text": "Web pages often contain clutter (such as pop-up ads, unnecessary images and extraneous links) around the body of an article that distracts a user from actual content. Extraction of \"useful and relevant\" content from web pages has many applications, including cell phone and PDA browsing, speech rendering for the visually impaired, and text summarization. Most approaches to removing clutter or making content more readable involve changing font size or removing HTML and data components such as images, which takes away from a webpage's inherent look and feel. Unlike \"Content Reformatting\", which aims to reproduce the entire webpage in a more convenient form, our solution directly addresses \"Content Extraction\". We have developed a framework that employs easily extensible set of techniques that incorporate advantages of previous work on content extraction. Our key insight is to work with the DOM trees, rather than with raw HTML markup. We have implemented our approach in a publicly available Web proxy to extract content from HTML web pages.", "title": "" }, { "docid": "ef14d26a613cec20b7ea36e24c197da1", "text": "In this paper, we propose a new approach to discover informative contents from a set of tabular documents (or Web pages) of a Web site. Our system, InfoDiscoverer, first partitions a page into several content blocks according to HTML tag <TABLE> in a Web page. Based on the occurrence of the features (terms) in the set of pages, it calculates entropy value of each feature. According to the entropy value of each feature in a content block, the entropy value of the block is defined. By analyzing the information measure, we propose a method to dynamically select the entropy-threshold that partitions blocks into either informative or redundant. Informative content blocks are distinguished parts of the page, whereas redundant content blocks are common parts. Based on the answer set generated from 13 manually tagged news Web sites with a total of 26,518 Web pages, experiments show that both recall and precision rates are greater than 0.956. That is, using the approach, informative blocks (news articles) of these sites can be automatically separated from semantically redundant contents such as advertisements, banners, navigation panels, news categories, etc. By adopting InfoDiscoverer as the preprocessor of information retrieval and extraction applications, the retrieval and extracting precision will be increased, and the indexing size and extracting complexity will also be reduced.", "title": "" } ]
[ { "docid": "cb95c63a4c3c350253416a22e347ce46", "text": "In recent times, with the increasing interest in conversational agents for smart homes, task-oriented dialog systems are being actively researched. However, most of these studies are focused on the individual modules of such a system, and there is an evident lack of research on a dialog framework that can integrate and manage the entire dialog system. Therefore, in this study, we propose a framework that enables the user to effectively develop an intelligent dialog system. The proposed framework ontologically expresses the knowledge required for the task-oriented dialog system's process and can build a dialog system by editing the dialog knowledge. In addition, the framework provides a module router that can indirectly run externally developed modules. Further, it enables a more intelligent conversation by providing a hierarchical argument structure (HAS) to manage the various argument representations included in natural language sentences. To verify the practicality of the framework, an experiment was conducted in which developers without any previous experience in developing a dialog system developed task-oriented dialog systems using the proposed framework. The experimental results show that even beginner dialog system developers can develop a high-level task-oriented dialog system.", "title": "" }, { "docid": "3cf7fc89e6a9b7295079dd74014f166b", "text": "BACKGROUND\nHigh-resolution MRI has been shown to be capable of identifying plaque constituents, such as the necrotic core and intraplaque hemorrhage, in human carotid atherosclerosis. The purpose of this study was to evaluate differential contrast-weighted images, specifically a multispectral MR technique, to improve the accuracy of identifying the lipid-rich necrotic core and acute intraplaque hemorrhage in vivo.\n\n\nMETHODS AND RESULTS\nEighteen patients scheduled for carotid endarterectomy underwent a preoperative carotid MRI examination in a 1.5-T GE Signa scanner using a protocol that generated 4 contrast weightings (T1, T2, proton density, and 3D time of flight). MR images of the vessel wall were examined for the presence of a lipid-rich necrotic core and/or intraplaque hemorrhage. Ninety cross sections were compared with matched histological sections of the excised specimen in a double-blinded fashion. Overall accuracy (95% CI) of multispectral MRI was 87% (80% to 94%), sensitivity was 85% (78% to 92%), and specificity was 92% (86% to 98%). There was good agreement between MRI and histological findings, with a value of kappa=0.69 (0.53 to 0.85).\n\n\nCONCLUSIONS\nMultispectral MRI can identify the lipid-rich necrotic core in human carotid atherosclerosis in vivo with high sensitivity and specificity. This MRI technique provides a noninvasive tool to study the pathogenesis and natural history of carotid atherosclerosis. Furthermore, it will permit a direct assessment of the effect of pharmacological therapy, such as aggressive lipid lowering, on plaque lipid composition.", "title": "" }, { "docid": "fa2c3c8946ebb97e119ba25cab52ff5c", "text": "The digital era arrives with a whole set of disruptive technologies that creates both risk and opportunity for open sources analysis. Although the sheer quantity of online conversations makes social media a huge source of information, their analysis is still a challenging task and many of traditional methods and research methodologies for data mining are not fit for purpose. 
Social data mining revolves around subjective content analysis, which deals with the computational processing of texts conveying people's evaluations, beliefs, attitudes and emotions. Opinion mining and sentiment analysis are the main paradigm of social media exploration, and the two concepts are often used interchangeably. This paper investigates the use of appraisal categories to explore data gleaned from social media, going beyond the limitations of traditional sentiment and opinion-oriented approaches. Categories of appraisal are grounded on the cognitive foundations of appraisal theory, according to which people's emotional responses are based on their own evaluative judgments or appraisals of situations, events or objects. A formal model is developed to describe and explain the way language is used in cyberspace to evaluate, express mood and subjective states, construct personal standpoints and manage interpersonal interactions and relationships. A general processing framework is implemented to illustrate how the model is used to analyze a collection of tweets related to extremist attitudes.", "title": "" }, { "docid": "964f4f8c14432153d6001d961a1b5294", "text": "Although there are numerous search engines in the Web environment, none of them can claim to produce reliable results in all conditions. This problem is becoming more serious considering the exponential growth of the number of Web resources. In response to these challenges, meta-search engines are introduced to enhance the search process by devoting some outstanding search engines as their information resources. In recent years, some approaches have been proposed to handle the result combination problem, which is the fundamental problem in the meta-search environment. In this paper, a new merging/re-ranking method is introduced which uses the characteristics of the Web co-citation graph that is constructed from search engines and returned lists. The information extracted from the co-citation graph is combined and enriched by the users' click-through data as their implicit feedback in an adaptive framework. Experimental results show a noticeable improvement against the basic method as well as some well-known meta-search engines.", "title": "" }, { "docid": "fce754c728d17319bae7ebe8f532dfe1", "text": "As previous OS abstractions and structures fail to explicitly consider the separation between resource users and providers, the shift toward server-side computing poses serious challenges to OS structures, which is aggravated by the increasing many-core scale and workload diversity. This paper presents the horizontal OS model. We propose a new OS abstraction—subOS—an independent OS instance owning physical resources that can be created, destroyed, and resized swiftly. We horizontally decompose the OS into the supervisor for the resource provider and several subOSes for resource users. The supervisor discovers, monitors, and provisions resources for subOSes, while each subOS independently runs applications. We confine state sharing among subOSes, but allow on-demand state sharing if necessary. We present the first implementation—RainForest, which supports unmodified Linux application binaries. Our comprehensive evaluations using six benchmark suites quantitatively show RainForest outperforms Linux with three different kernels, LXC, and XEN. 
The RainForest source code is soon available.", "title": "" }, { "docid": "f7d023abf0f651177497ae38d8494efc", "text": "Developing Question Answering systems has been one of the important research issues because it requires insights from a variety of disciplines, including, Artificial Intelligence, Information Retrieval, Information Extraction, Natural Language Processing, and Psychology. In this paper we realize a formal model for a lightweight semantic–based open domain yes/no Arabic question answering system based on paragraph retrieval (with variable length). We propose a constrained semantic representation. Using an explicit unification framework based on semantic similarities and query expansion (synonyms and antonyms). This frequently improves the precision of the system. Employing the passage retrieval system achieves a better precision by retrieving more paragraphs that contain relevant answers to the question; It significantly reduces the amount of text to be processed by the system.", "title": "" }, { "docid": "b3f423e513c543ecc9fe7003ff9880ea", "text": "Increasing attention has been paid to air quality monitoring with a rapid development in industry and transportation applications in the modern society. However, the existing air quality monitoring systems cannot provide satisfactory spatial and temporal resolutions of the air quality information with low costs in real time. In this paper, we propose a new method to implement the air quality monitoring system based on state-of-the-art Internet-of-Things (IoT) techniques. In this system, portable sensors collect the air quality information timely, which is transmitted through a low power wide area network. All air quality data are processed and analyzed in the IoT cloud. The completed air quality monitoring system, including both hardware and software, is developed and deployed successfully in urban environments. Experimental results show that the proposed system is reliable in sensing the air quality, which helps reveal the change patterns of air quality to some extent.", "title": "" }, { "docid": "d79117efb3d77cab5a245648b295fccf", "text": "We analyze a jump linear Markov system being stabilized using a linear controller. We consider the case when the Markov state is associated with the probability distribution of a measured variable. We assume that the Markov state is not known, but rather is being estimated based on the observations of the variable. We present conditions for the stability of such a system and also solve the optimal LQR control problem for the case when the state estimate update uses only the last observation value. In particular we consider a suboptimal version of the causal Viterbi estimation algorithm and show that a separation property does not hold between the optimal control and the Markov state estimate. Some simple examples are also presented.", "title": "" }, { "docid": "75617ed6450606c8019bb2f5471ac358", "text": "Depression is one of the most common mood disorders. Technology has the potential to assist in screening and treating people with depression by robustly modeling and tracking the complex behavioral cues associated with the disorder (e.g., speech, language, facial expressions, head movement, body language). Similarly, robust affect recognition is another challenge which stands to benefit from modeling such cues. The Audio/Visual Emotion Challenge (AVEC) aims toward understanding the two phenomena and modeling their correlation with observable cues across several modalities. 
In this paper, we use multimodal signal processing methodologies to address the two problems using data from human-computer interactions. We develop separate systems for predicting depression levels and affective dimensions, experimenting with several methods for combining the multimodal information. The proposed depression prediction system uses a feature selection approach based on audio, visual, and linguistic cues to predict depression scores for each session. Similarly, we use multiple systems trained on audio and visual cues to predict the affective dimensions in continuous-time. Our affect recognition system accounts for context during the frame-wise inference and performs a linear fusion of outcomes from the audio-visual systems. For both problems, our proposed systems outperform the video-feature based baseline systems. As part of this work, we analyze the role played by each modality in predicting the target variable and provide analytical insights.", "title": "" }, { "docid": "c1942b141986fde3d9161383ba8d7949", "text": "VideoWhiteboard is a prototype tool to support remote shared drawing activity. It provides a whiteboard-sized shared drawing space for collaborators who are located in remote sites. It allows each user to see the drawings and a shadow of the gestures of collaborators at the remote site. The development of VideoWhiteboard is based on empirical studies of collaborative drawing activity, including experiences in using the VideoDraw shared drawing prototype. VideoWhiteboard enables remote collaborators to work together much as if they were sharing a whiteboard, and in some ways allows them to work together even more closely than if they were in the same room.", "title": "" }, { "docid": "0afde87c9fb4fb21c6bad3196ef433d0", "text": "Blockchain and verifiable identities have a lot of potential in future distributed software applications e.g. smart cities, eHealth, autonomous vehicles, networks, etc. In this paper, we proposed a novel technique, namely VeidBlock, to generate verifiable identities by following a reliable authentication process. These entities are managed by using the concepts of blockchain ledger and distributed through an advance mechanism to protect them against tampering. All identities created using VeidBlock approach are verifiable and anonymous therefore it preserves user's privacy in verification and authentication phase. As a proof of concept, we implemented and tested the VeidBlock protocols by integrating it in a SDN based infrastructure. Analysis of the test results yield that all components successfully and autonomously performed initial authentication and locally verified all the identities of connected components.", "title": "" }, { "docid": "56d0609fe4e68abbce27124dd5291033", "text": "Existing works indicate that the absence of explicit discourse connectives makes it difficult to recognize implicit discourse relations. In this paper we attempt to overcome this difficulty for implicit relation recognition by automatically inserting discourse connectives between arguments with the use of a language model. Then we propose two algorithms to leverage the information of these predicted connectives. One is to use these predicted implicit connectives as additional features in a supervised model. The other is to perform implicit relation recognition based only on these predicted connectives. 
Results on Penn Discourse Treebank 2.0 show that predicted discourse connectives help implicit relation recognition and the first algorithm can achieve an absolute average f-score improvement of 3% over a state of the art baseline system.", "title": "" }, { "docid": "8f2a4de3669b26af17cd127387769ad6", "text": "This research provides the first empirical investigation of how approach and avoidance motives for engaging in sex in intimate relationships are associated with personal well-being and relationship quality. A 2-week daily experience study of college student dating couples tested specific predictions from the theoretical model and included both longitudinal and dyadic components. Whereas approach sex motives were positively associated with personal and interpersonal well-being, avoidance sex motives were negatively associated with well-being. Engaging in sex for avoidance motives was particularly detrimental to the maintenance of relationships over time. Perceptions of a partner s motives for sex were also associated with well-being. Implications for the conceptualization of sexuality in relationships along these two dimensions are discussed. Sexual interactions in young adulthood can be positive forces that bring partners closer and make them feel good about themselves and their relationships. In the National Health and Social Life Survey (NHSLS), 78% of participants in monogamous dating relationships reported being either extremely or very pleased with their sexual relationship (Laumann, Gagnon, Michael, & Michaels, 1994). For instance, when asked to rate specific feelings they experienced after engaging in sex, a majority of the participants reported positive feelings (i.e., ‘‘felt loved,’’ ‘‘thrilled,’’ ‘‘wanted,’’ or ‘‘taken care of ’’). More generally, feelings of satisfaction with the sexual aspects of an intimate relationship contribute to overall relationship satisfaction and stability over time (e.g., Sprecher, 2002; see review by Sprecher & Cate, 2004). In short, sexual interactions can be potent forces that sustain and enhance intimate relationships. For some individuals and under certain circumstances, however, sexual interactions can be anything but positive and rewarding. They may create emotional distress, personal discontent, and relationship conflict. For instance, in the NHSLS, a sizable minority of respondents in dating relationships indicated that sex with an exclusive partner made them feel ‘‘sad,’’ ‘‘anxious and worried,’’ ‘‘scared and afraid,’’ or ‘‘guilty’’ (Laumann et al., 1994). Negative reactions to sex may stem from such diverse sources as prior traumatic or coercive experiences in relationships, feeling at a power disadvantage in one s current relationship, or discrepancies in sexual desire between partners, to name a few (e.g., Davies, Katz, & Jackson, 1999; Muehlenhard & Schrag, 1991). The studies reported here were based on Emily A. Impett s dissertation. Preparation of this article was supported by a fellowship awarded to the first author from the Sexuality Research Fellowship Program of the Social Science Research Council with funds provided by the Ford Foundation. We thank Katie Bishop, Renee Delgado, and Laura Tsang for their assistance with data collection and Andrew Christensen, Terri Conley, Martie Haselton, and Linda Sax for comments on an earlier version of this manuscript. Correspondence should be addressed to Emily A. 
Impett, Center for Research on Gender and Sexuality, San Francisco State University, 2017 Mission Street #300, San Francisco, CA 94110, e-mail: eimpett@sfsu.edu. Personal Relationships, 12 (2005), 465–482. Printed in the United States of America. Copyright 2005 IARR. 1350-4126=05", "title": "" }, { "docid": "72be75e973b6a843de71667566b44929", "text": "We think that hand pose estimation technologies with a camera should be developed for character conversion systems from sign languages with a not so high performance terminal. Fingernail positions can be used for getting finger information which can’t be obtained from outline information. Therefore, we decided to construct a practical fingernail detection system. The previous fingernail detection method, using distribution density of strong nail-color pixels, was not good at removing some skin areas having gloss like finger side area. Therefore, we should use additional information to remove them. We thought that previous method didn’t use boundary information and this information would be available. Color continuity information is available for getting it. In this paper, therefore, we propose a new fingernail detection method using not only distribution density but also color continuity to improve accuracy. We investigated the relationship between wrist rotation angles and percentages of correct detection. The number of users was three. As a result, we confirmed that our proposed method raised accuracy compared with previous method and could detect only fingernails with at least 85% probability from -90 to 40 degrees and from 40 to 90 degrees. Therefore, we concluded that our proposed method was effective.", "title": "" }, { "docid": "18d7b3f9f966f36af7ab6ceca1f5440c", "text": "This letter presents a Si nanowire based tunneling field-effect transistor (TFET) using a CMOS-compatible vertical gate-all-around structure. By minimizing the thermal budget with low-temperature dopant-segregated silicidation for the source-side dopant activation, excellent TFET characteristics were obtained. We have demonstrated for the first time the lowest ever reported subthreshold swing (SS) of 30 mV/decade at room temperature. In addition, we reported a very convincing SS of 50 mV/decade for close to three decades of drain current. Moreover, our TFET device exhibits excellent characteristics without ambipolar behavior and with high Ion/Ioff ratio (105), as well as low Drain-Induced Barrier Lowering of 70 mV/V.", "title": "" }, { "docid": "26ad79619be484ec239daf5b735ae5a4", "text": "The placenta is a complex organ, playing multiple roles during fetal development. Very little is known about the association between placental morphological abnormalities and fetal physiology. In this work, we present an open sourced, computationally tractable deep learning pipeline to analyse placenta histology at the level of the cell. By utilising two deep convolutional neural network architectures and transfer learning, we can robustly localise and classify placental cells within five classes with an accuracy of 89%. Furthermore, we learn deep embeddings encoding phenotypic knowledge that is capable of both stratifying five distinct cell populations and learn intraclass phenotypic variance. 
We envisage that the automation of this pipeline for population-scale studies of placenta histology has the potential to improve our understanding of basic cellular placental biology and its variations, particularly its role in predicting adverse birth outcomes.", "title": "" }, { "docid": "34690f455f9e539b06006f30dd3e512b", "text": "Disaster relief operations rely on the rapid deployment of wireless network architectures to provide emergency communications. Future emergency networks will typically consist of terrestrial, portable base stations and base stations on-board low altitude platforms (LAPs). The effectiveness of network deployment will depend on strategically chosen station positions. In this paper a method is presented for calculating the optimal proportion of the two station types and their optimal placement. Random scenarios and a real example from Hurricane Katrina are used for evaluation. The results confirm the strength of LAPs in terms of high bandwidth utilisation, achieved by their ability to cover wide areas, their portability and adaptability to height. When LAPs are utilized, the total required number of base stations to cover a desired area is generally lower. For large scale disasters in particular, this leads to shorter response times and the requirement of fewer resources. This goal can be achieved more easily if algorithms such as the one presented in this paper are used.", "title": "" }, { "docid": "ff5d1ace34029619d79342e5fe63e0b7", "text": "This paper proposes a cavity-backed SIW slot antenna for the 57-64 GHz band, which is used for wireless communication applications. The proposed antenna is designed on a Rogers substrate with a dielectric constant of 2.2 and a substrate thickness of 0.381 mm, and a microstrip feed with an input impedance of 50 ohms is used. The structure provides a 5.2 GHz impedance bandwidth over the range 57.8 to 64 GHz and matches with VSWR 2:1. The values of the reflection coefficient, VSWR, gain, transmission efficiency and radiation efficiency of the proposed antenna at 60 GHz are −17.32 dB, 1.3318, 7.19 dBi, 79.5% and 89.5%, respectively.", "title": "" }, { "docid": "cebfc5224413c5acb7831cbf29ae5a8e", "text": "Radio Frequency (RF) energy harvesting holds a promising future for generating a small amount of electrical power to drive partial circuits in wirelessly communicating electronic devices. Reducing power consumption has become a major challenge in wireless sensor networks. As a vital factor affecting system cost and lifetime, energy consumption in wireless sensor networks is an emerging and active research area. This chapter presents a practical approach for RF energy harvesting and management of the harvested and available energy for wireless sensor networks using the Improved Energy Efficient Ant Based Routing Algorithm (IEEABR) as our proposed algorithm. The chapter looks at measurement of the RF power density, calculation of the received power, storage of the harvested power, and management of the power in wireless sensor networks. The routing uses the IEEABR technique for energy management. Practical and real-time implementations of RF energy harvesting using Powercast™ harvesters, and simulations using the energy model of our Libelium Waspmote, were performed to verify the approach. 
The chapter concludes with a performance analysis of the harvested energy and a comparison of IEEABR with other traditional energy management techniques, while also looking at open research areas of energy harvesting and management for wireless sensor networks.", "title": "" }, { "docid": "1564a94998151d52785dd0429b4ee77d", "text": "Location management refers to the problem of updating and searching the current location of mobile nodes in a wireless network. To make it efficient, the sum of the update costs of the location database must be minimized. Previous work relying on fixed location databases is unable to fully exploit the knowledge of user mobility patterns in the system so as to achieve this minimization. This study presents an intelligent location management approach that combines intelligent information system and knowledge-base technologies, so we can dynamically change the user patterns and reduce the transitions between the VLR and HLR. The study provides algorithms that are able to handle location registration and call delivery.", "title": "" } ]
scidocsrr
3569c33942343532ad67adae1cf900b4
CORD: Energy-Efficient Reliable Bulk Data Dissemination in Sensor Networks
[ { "docid": "a9550f5f2158f0519a66264b6a948c29", "text": "In this paper, we present a family of adaptive protocols, called SPIN (Sensor Protocols for Information via Negotiation), that efficiently disseminate information among sensors in an energy-constrained wireless sensor network. Nodes running a SPIN communication protocol name their data using high-level data descriptors, called meta-data. They use meta-data negotiations to eliminate the transmission of redundant data throughout the network. In addition, SPIN nodes can base their communication decisions both upon application-specific knowledge of the data and upon knowledge of the resources that are available to them. This allows the sensors to efficiently distribute data given a limited energy supply. We simulate and analyze the performance of four specific SPIN protocols: SPIN-PP and SPIN-EC, which are optimized for a point-to-point network, and SPIN-BC and SPIN-RL, which are optimized for a broadcast network. Comparing the SPIN protocols to other possible approaches, we find that the SPIN protocols can deliver 60% more data for a given amount of energy than conventional approaches in a point-to-point network and 80% more data for a given amount of energy in a broadcast network. We also find that, in terms of dissemination rate and energy usage, the SPIN protocols perform close to the theoretical optimum in both point-to-point and broadcast networks.", "title": "" } ]
[ { "docid": "be4fbfdde6ec503bebd5b2a8ddaa2820", "text": "Attack-defence Capture The Flag (CTF) competitions are effective pedagogic platforms to teach secure coding practices due to the interactive and real-world experiences they provide to the contest participants. Two of the key challenges that prevent widespread adoption of such contests are: 1) The game infrastructure is highly resource intensive requiring dedication of significant hardware resources and monitoring by organizers during the contest and 2) the participants find the gameplay to be complicated, requiring performance of multiple tasks that overwhelms inexperienced players. In order to address these, we propose a novel attack-defence CTF game infrastructure which uses application containers. The results of our work showcase effectiveness of these containers and supporting tools in not only reducing the resources organizers need but also simplifying the game infrastructure. The work also demonstrates how the supporting tools can be leveraged to help participants focus more on playing the game i.e. attacking and defending services and less on administrative tasks. The results from this work indicate that our architecture can accommodate over 150 teams with 15 times fewer resources when compared to existing infrastructures of most contests today.", "title": "" }, { "docid": "71c4f414520c171aca6e88753c9ef179", "text": "This brief presents an ultralow quiescent class-AB error amplifier (ERR AMP) of low dropout (LDO) and a slew-rate (SR) enhancement circuit to minimize compensation capacitance and speed up transient response designed in the 0.11-μm 1-poly 6-metal CMOS process. In order to increase the current capability with a low standby quiescent current under large-signal operation, the proposed scheme has a class-AB-operation operational transconductance amplifier (OTA) that acts as an ERR AMP. As a result, the new OTA achieved a higher dc gain and faster settling time than conventional OTAs, demonstrating a dc gain improvement of 15.8 dB and a settling time six times faster than that of a conventional OTA. The proposed additional SR enhancement circuit improved the response based on voltage-spike detection when the voltage dramatically changed at the output node.", "title": "" }, { "docid": "db3758b88c374135c1c7c935c09ba233", "text": "Graphical models provide a rich framework for summarizing the dependencies among variables. The graphical lasso approach attempts to learn the structure of a Gaussian graphical model (GGM) by maximizing the log likelihood of the data, subject to an l1 penalty on the elements of the inverse co-variance matrix. Most algorithms for solving the graphical lasso problem do not scale to a very large number of variables. Furthermore, the learned network structure is hard to interpret. To overcome these challenges, we propose a novel GGM structure learning method that exploits the fact that for many real-world problems we have prior knowledge that certain edges are unlikely to be present. For example, in gene regulatory networks, a pair of genes that does not participate together in any of the cellular processes, typically referred to as pathways, is less likely to be connected. In computer vision applications in which each variable corresponds to a pixel, each variable is likely to be connected to the nearby variables. In this paper, we propose the pathway graphical lasso, which learns the structure of a GGM subject to pathway-based constraints. 
In order to solve this problem, we decompose the network into smaller parts, and use a message-passing algorithm in order to communicate among the subnetworks. Our algorithm has orders of magnitude improvement in run time compared to the state-of-the-art optimization methods for the graphical lasso problem that were modified to handle pathway-based constraints.", "title": "" }, { "docid": "19e070089a8495a437e81da50f3eb21c", "text": "Mobile payment refers to the use of mobile devices to conduct payment transactions. Users can use mobile devices for remote and proximity payments; moreover, they can purchase digital contents and physical goods and services. It offers an alternative payment method for consumers. However, there are relative low adoption rates in this payment method. This research aims to identify and explore key factors that affect the decision of whether to use mobile payments. Two well-established theories, the Technology Acceptance Model (TAM) and the Innovation Diffusion Theory (IDT), are applied to investigate user acceptance of mobile payments. Survey data from mobile payments users will be used to test the proposed hypothesis and the model.", "title": "" }, { "docid": "6ad7d97140d7a5d6b72039b4bb9c3be5", "text": "This study evaluated the criterion-related validity of the Electronic Head Posture Instrument (EHPI) in measuring the craniovertebral (CV) angle by correlating the measurements of CV angle with anterior head translation (AHT) in lateral cervical radiographs. It also investigated the correlation of AHT and CV angle with the Chinese version of the Northwick Park Questionnaire (NPQ) and Numeric Pain Rating Scale (NPRS). Thirty patients with diagnosis of mechanical neck pain for at least 3 months without referred symptoms were recruited in an outpatient physiotherapy clinic. The results showed that AHT measured with X-ray correlated negatively with CV angle measured with EHPI (r = -0.71, p < 0.001). CV angle also correlated negatively with NPQ (r = -0.67, p < 0.001) and NPRS (r = -0.70, p < 0.001), while AHT positively correlated with NPQ (r = 0.390, p = 0.033) and NPRS (r = 0.49, p = 0.006). We found a negative correlation between CV angle measured with the EHPI and AHT measured with the X-ray lateral film as well as with NPQ and NPRS in patients with chronic mechanical neck pain. EHPI is a valid tool in clinically assessing and evaluating cervical posture of patients with chronic mechanical neck pain.", "title": "" }, { "docid": "442504997ef102d664081b390ff09dd3", "text": "An intelligent traffic management system (E-Traffic Warden) is proposed, using image processing techniques along with smart traffic control algorithm. Traffic recognition was achieved using cascade classifier for vehicle recognition utilizing Open CV and Visual Studio C/C++. The classifier was trained on 700 positive samples and 1140 negative samples. The results show that the accuracy of vehicle detection is approximately 93 percent. The count of vehicles at all approaches of intersection is used to estimate traffic. Traffic build up is then avoided or resolved by passing the extracted data to traffic control algorithm. The control algorithm shows approximately 86% improvement over Fixed-Delay controller in worst case scenarios.", "title": "" }, { "docid": "1c66d84dfc8656a23e2a4df60c88ab51", "text": "Our method aims at reasoning over natural language questions and visual images. 
Given a natural language question about an image, our model updates the question representation iteratively by selecting image regions relevant to the query and learns to give the correct answer. Our model contains several reasoning layers, exploiting complex visual relations in the visual question answering (VQA) task. The proposed network is end-to-end trainable through back-propagation, where its weights are initialized using pre-trained convolutional neural network (CNN) and gated recurrent unit (GRU). Our method is evaluated on challenging datasets of COCO-QA [19] and VQA [2] and yields state-of-the-art performance.", "title": "" }, { "docid": "aed7133c143edbe0e1c6f6dfcddee9ec", "text": "This paper describes a version of the auditory image model (AIM) [1] implemented in MATLAB. It is referred to as “aim-mat” and it includes the basic modules that enable AIM to simulate the spectral analysis, neural encoding and temporal integration performed by the auditory system. The dynamic representations produced by non-static sounds can be viewed on a frame-by-frame basis or in movies with synchronized sound. The software has a sophisticated graphical user interface designed to facilitate the auditory modelling. It is also possible to add MATLAB code and complete modules to aim-mat. The software can be downloaded from http://www.mrccbu.cam.ac.uk/cnbh/aimmanual", "title": "" }, { "docid": "e011ab57139a9a2f6dc13033b0ab6223", "text": "Over the last few years, virtual reality (VR) has re-emerged as a technology that is now feasible at low cost via inexpensive cellphone components. In particular, advances of high-resolution micro displays, low-latency orientation trackers, and modern GPUs facilitate immersive experiences at low cost. One of the remaining challenges to further improve visual comfort in VR experiences is the vergence-accommodation conflict inherent to all stereoscopic displays. Accurate reproduction of all depth cues is crucial for visual comfort. By combining well-known stereoscopic display principles with emerging factored light field technology, we present the first wearable VR display supporting high image resolution as well as focus cues. A light field is presented to each eye, which provides more natural viewing experiences than conventional near-eye displays. Since the eye box is just slightly larger than the pupil size, rank-1 light field factorizations are sufficient to produce correct or nearly-correct focus cues; no time-multiplexed image display or gaze tracking is required. We analyze lens distortions in 4D light field space and correct them using the afforded high-dimensional image formation. We also demonstrate significant improvements in resolution and retinal blur quality over related near-eye displays. Finally, we analyze diffraction limits of these types of displays.", "title": "" }, { "docid": "806a83d17d242a7fd5272862158db344", "text": "Solar power has become an attractive alternative of electricity energy. Solar cells that form the basis of a solar power system are mainly based on multicrystalline silicon. A set of solar cells are assembled and interconnected into a large solar module to offer a large amount of electricity power for commercial applications. Many defects in a solar module cannot be visually observed with the conventional CCD imaging system. This paper aims at defect inspection of solar modules in electroluminescence (EL) images. 
The solar module charged with electrical current will emit infrared light whose intensity will be darker for intrinsic crystal grain boundaries and extrinsic defects including micro-cracks, breaks and finger interruptions. The EL image can distinctly highlight the invisible defects but also create a random inhomogeneous background, which makes the inspection task extremely difficult. The proposed method is based on independent component analysis (ICA), and involves a learning and a detection stage. The large solar module image is first divided into small solar cell subimages. In the training stage, a set of defect-free solar cell subimages are used to find a set of independent basis images using ICA. In the inspection stage, each solar cell subimage under inspection is reconstructed as a linear combination of the learned basis images. The coefficients of the linear combination are used as the feature vector for classification. Also, the reconstruction error between the test image and its reconstructed image from the ICA basis images is also evaluated for detecting the presence of defects. Experimental results have shown that the image reconstruction with basis images distinctly outperforms the ICA feature extraction approach. It can achieve a mean recognition rate of 93.4% for a set of 80 test samples.", "title": "" }, { "docid": "c3317ea39578195cab8801b8a31b21b6", "text": "We study a novel machine learning (ML) problem setting of sequentially allocating small subsets of training data amongst a large set of classifiers. The goal is to select a classifier that will give near-optimal accuracy when trained on all data, while also minimizing the cost of misallocated samples. This is motivated by large modern datasets and ML toolkits with many combinations of learning algorithms and hyperparameters. Inspired by the principle of “optimism under uncertainty,” we propose an innovative strategy, Data Allocation using Upper Bounds (DAUB), which robustly achieves these objectives across a variety of real-world datasets. We further develop substantial theoretical support for DAUB in an idealized setting where the expected accuracy of a classifier trained on n samples can be known exactly. Under these conditions we establish a rigorous sub-linear bound on the regret of the approach (in terms of misallocated data), as well as a rigorous bound on suboptimality of the selected classifier. Our accuracy estimates using real-world datasets only entail mild violations of the theoretical scenario, suggesting that the practical behavior of DAUB is likely to approach the idealized behavior.", "title": "" }, { "docid": "ce3ac7716734e2ebd814900d77ca3dfb", "text": "The large pose discrepancy between two face images is one of the fundamental challenges in automatic face recognition. Conventional approaches to pose-invariant face recognition either perform face frontalization on, or learn a pose-invariant representation from, a non-frontal face image. We argue that it is more desirable to perform both tasks jointly to allow them to leverage each other. To this end, this paper proposes a Disentangled Representation learning-Generative Adversarial Network (DR-GAN) with three distinct novelties. First, the encoder-decoder structure of the generator enables DR-GAN to learn a representation that is both generative and discriminative, which can be used for face image synthesis and pose-invariant face recognition. 
Second, this representation is explicitly disentangled from other face variations such as pose, through the pose code provided to the decoder and pose estimation in the discriminator. Third, DR-GAN can take one or multiple images as the input, and generate one unified identity representation along with an arbitrary number of synthetic face images. Extensive quantitative and qualitative evaluation on a number of controlled and in-the-wild databases demonstrate the superiority of DR-GAN over the state of the art in both learning representations and rotating large-pose face images.", "title": "" }, { "docid": "e737bb31bb7dbb6dbfdfe0fd01bfe33c", "text": "Cannabidiol (CBD) is a non-psychotomimetic phytocannabinoid derived from Cannabis sativa. It has possible therapeutic effects over a broad range of neuropsychiatric disorders. CBD attenuates brain damage associated with neurodegenerative and/or ischemic conditions. It also has positive effects on attenuating psychotic-, anxiety- and depressive-like behaviors. Moreover, CBD affects synaptic plasticity and facilitates neurogenesis. The mechanisms of these effects are still not entirely clear but seem to involve multiple pharmacological targets. In the present review, we summarized the main biochemical and molecular mechanisms that have been associated with the therapeutic effects of CBD, focusing on their relevance to brain function, neuroprotection and neuropsychiatric disorders.", "title": "" }, { "docid": "2b595cab271cac15ea165e46459d6923", "text": "Autonomous Mobility On Demand (MOD) systems can utilize fleet management strategies in order to provide a high customer quality of service (QoS). Previous works on autonomous MOD systems have developed methods for rebalancing single capacity vehicles, where QoS is maintained through large fleet sizing. This work focuses on MOD systems utilizing a small number of vehicles, such as those found on a campus, where additional vehicles cannot be introduced as demand for rides increases. A predictive positioning method is presented for improving customer QoS by identifying key locations to position the fleet in order to minimize expected customer wait time. Ridesharing is introduced as a means for improving customer QoS as arrival rates increase. However, with ridesharing perceived QoS is dependent on an often unknown customer preference. To address this challenge, a customer ratings model, which learns customer preference from a 5-star rating, is developed and incorporated directly into a ridesharing algorithm. The predictive positioning and ridesharing methods are applied to simulation of a real-world campus MOD system. A combined predictive positioning and ridesharing approach is shown to reduce customer service times by up to 29%. and the customer ratings model is shown to provide the best overall MOD fleet management performance over a range of customer preferences.", "title": "" }, { "docid": "60de343325a305b08dfa46336f2617b5", "text": "On Friday, May 12, 2017 a large cyber-attack was launched using WannaCry (or WannaCrypt). In a few days, this ransomware virus targeting Microsoft Windows systems infected more than 230,000 computers in 150 countries. Once activated, the virus demanded ransom payments in order to unlock the infected system. The widespread attack affected endless sectors – energy, transportation, shipping, telecommunications, and of course health care. 
Britain’s National Health Service (NHS) reported that computers, MRI scanners, blood-storage refrigerators and operating room equipment may have all been impacted. Patient care was reportedly hindered and at the height of the attack, NHS was unable to care for non-critical emergencies and resorted to diversion of care from impacted facilities. While daunting to recover from, the entire situation was entirely preventable. A Bcritical^ patch had been released by Microsoft on March 14, 2017. Once applied, this patch removed any vulnerability to the virus. However, hundreds of organizations running thousands of systems had failed to apply the patch in the first 59 days it had been released. This entire situation highlights a critical need to reexamine how we maintain our health information systems. Equally important is a need to rethink how organizations sunset older, unsupported operating systems, to ensure that security risks are minimized. For example, in 2016, the NHS was reported to have thousands of computers still running Windows XP – a version no longer supported or maintained by Microsoft. There is no question that this will happen again. However, health organizations can mitigate future risk by ensuring best security practices are adhered to.", "title": "" }, { "docid": "31be3d5db7d49d1bfc58c81efec83bdc", "text": "Electromagnetic elements such as inductance are not used in switched-capacitor converters to convert electrical power. In contrast, capacitors are used for storing and transforming the electrical power in these new topologies. Lower volume, higher power density, and more integration ability are the most important features of these kinds of converters. In this paper, the most important switched-capacitor converters topologies, which have been developed in the last decade as new topologies in power electronics, are introduced, analyzed, and compared with each other, in brief. Finally, a 100 watt double-phase half-mode resonant converter is simulated to convert 48V dc to 24 V dc for light weight electrical vehicle applications. Low output voltage ripple (0.4%), and soft switching for all power diodes and switches are achieved under the worst-case conditions.", "title": "" }, { "docid": "d6cca63107e04f225b66e02289c601a2", "text": "To avoid a sarcastic message being understood in its unintended literal meaning, in microtexts such as messages on Twitter.com sarcasm is often explicitly marked with a hashtag such as ‘#sarcasm’. We collected a training corpus of about 406 thousand Dutch tweets with hashtag synonyms denoting sarcasm. Assuming that the human labeling is correct (annotation of a sample indicates that about 90% of these tweets are indeed sarcastic), we train a machine learning classifier on the harvested examples, and apply it to a sample of a day’s stream of 2.25 million Dutch tweets. Of the 353 explicitly marked tweets on this day, we detect 309 (87%) with the hashtag removed. We annotate the top of the ranked list of tweets most likely to be sarcastic that do not have the explicit hashtag. 35% of the top250 ranked tweets are indeed sarcastic. Analysis indicates that the use of hashtags reduces the further use of linguistic markers for signaling sarcasm, such as exclamations and intensifiers. We hypothesize that explicit markers such as hashtags are the digital extralinguistic equivalent of non-verbal expressions that people employ in live interaction when conveying sarcasm. 
Checking the consistency of our finding in a language from another language family, we observe that in French the hashtag ‘#sarcasme’ has a similar polarity switching function, be it to a lesser extent.", "title": "" }, { "docid": "0502b30d45e6f51a7eb0eeec1f0af2e9", "text": "Identification and extraction of singing voice from within musical mixtures is a key challenge in source separation and machine audition. Recently, deep neural networks (DNN) have been used to estimate 'ideal' binary masks for carefully controlled cocktail party speech separation problems. However, it is not yet known whether these methods are capable of generalizing to the discrimination of voice and non-voice in the context of musical mixtures. Here, we trained a convolutional DNN (of around a billion parameters) to provide probabilistic estimates of the ideal binary mask for separation of vocal sounds from real-world musical mixtures. We contrast our DNN results with more traditional linear methods. Our approach may be useful for automatic removal of vocal sounds from musical mixtures for 'karaoke' type applications.", "title": "" }, { "docid": "27ddea786e06ffe20b4f526875cdd76b", "text": "It is generally unrecognized that Sigmund Freud's contribution to the scientific understanding of dreams derived from a radical reorientation to the dream experience. During the nineteenth century, before publication of The Interpretation of Dreams, the presence of dreaming was considered by the scientific community as a manifestation of mental activity during sleep. The state of sleep was given prominence as a factor accounting for the seeming lack of organization and meaning to the dream experience. Thus, the assumed relatively nonpsychological sleep state set the scientific stage for viewing the nature of the dream. Freud radically shifted the context. He recognized, as myth, folklore, and common sense had long understood, that dreams were also linked with the psychology of waking life. This shift in orientation has proved essential for our modern view of dreams and dreaming. Dreams are no longer dismissed as senseless notes hit at random on a piano keyboard by an untrained player. Dreams are now recognized as psychologically significant and meaningful expressions of the life of the dreamer, albeit expressed in disguised and concealed forms. (For a contrasting view, see ACTIVATION-SYNTHESIS HYPOTHESIS.) Contemporary Dream Research. During the past quarter-century, there has been increasing scientific interest in the process of dreaming. A regular sleep-wakefulness cycle has been discovered, and if experimental subjects are awakened during periods of rapid eye movements (REM periods), they will frequently report dreams. In a typical night, four or five dreams occur during REM periods, accompanied by other signs of physiological activation, such as increased respiratory rate, heart rate, and penile and clitoral erection. Dreams usually last for the duration of the eye movements, from about 10 to 25 minutes. Although dreaming usually occurs in such regular cycles, dreaming may occur at other times during sleep, as well as during hypnagogic (falling asleep) or hypnopompic (waking up) states, when REMs are not present. 
The above findings are discoveries made since the monumental work of Freud reported in The Interpretation of Dreams, and although of great interest to the study of the mind-body problem, these findings as yet bear only a peripheral relationship to the central concerns of the psychology of dream formation, the meaning of dream content, the dream as an approach to a deeper understanding of emotional life, and the use of the dream in psychoanalytic treatment.", "title": "" } ]
scidocsrr
d51e620d0827c768462fdccfb6158405
Sample-efficient Actor-Critic Reinforcement Learning with Supervised Data for Dialogue Management
[ { "docid": "5cc1f15c45f57d1206e9181dc601ee4a", "text": "In an end-to-end dialog system, the aim of dialog state tracking is to accurately estimate a compact representation of the current dialog status from a sequence of noisy observations produced by the speech recognition and the natural language understanding modules. This paper introduces a novel method of dialog state tracking based on the general paradigm of machine reading and proposes to solve it using an End-to-End Memory Network, MemN2N, a memory-enhanced neural network architecture. We evaluate the proposed approach on the second Dialog State Tracking Challenge (DSTC-2) dataset. The corpus has been converted for the occasion in order to frame the hidden state variable inference as a questionanswering task based on a sequence of utterances extracted from a dialog. We show that the proposed tracker gives encouraging results. Then, we propose to extend the DSTC-2 dataset and the definition of this dialog state task with specific reasoning capabilities like counting, list maintenance, yes-no question answering and indefinite knowledge management. Finally, we present encouraging results using our proposed MemN2N based tracking model.", "title": "" }, { "docid": "c5bbdfc0da1635ad0a007e60e224962f", "text": "Natural gradient descent is an optimization method traditionally motivated from the perspective of information geometry, and works well for many applications as an alternative to stochastic gradient descent. In this paper we critically analyze this method and its properties, and show how it can be viewed as a type of approximate 2nd-order optimization method, where the Fisher information matrix used to compute the natural gradient direction can be viewed as an approximation of the Hessian. This perspective turns out to have significant implications for how to design a practical and robust version of the method. Among our various other contributions is a thorough analysis of the convergence speed of natural gradient descent and more general stochastic methods, a critical examination of the oft-used “empirical” approximation of the Fisher matrix, and an analysis of the (approximate) parameterization invariance property possessed by the method, which we show still holds for certain other choices of the curvature matrix, but notably not the Hessian. ∗jmartens@cs.toronto.edu 1 ar X iv :1 41 2. 11 93 v5 [ cs .L G ] 1 O ct 2 01 5", "title": "" }, { "docid": "0f5959e5952a029cbe7807dc0268e25e", "text": "We describe a two-step approach for dialogue management in task-oriented spoken dialogue systems. A unified neural network framework is proposed to enable the system to first learn by supervision from a set of dialogue data and then continuously improve its behaviour via reinforcement learning, all using gradientbased algorithms on one single model. The experiments demonstrate the supervised model’s effectiveness in the corpus-based evaluation, with user simulation, and with paid human subjects. The use of reinforcement learning further improves the model’s performance in both interactive settings, especially under higher-noise conditions.", "title": "" }, { "docid": "3486d3493a0deef5c3c029d909e3cdfc", "text": "To date, reinforcement learning has mostly been studied solving simple learning tasks. Reinforcement learning methods that have been studied so far typically converge slowly. 
The purpose of this work is thus two-fold: 1) to investigate the utility of reinforcement learning in solving much more complicated learning tasks than previously studied, and 2) to investigate methods that will speed up reinforcement learning. This paper compares eight reinforcement learning frameworks: adaptive heuristic critic (AHC) learning due to Sutton, Q-learning due to Watkins, and three extensions to both basic methods for speeding up learning. The three extensions are experience replay, learning action models for planning, and teaching. The frameworks were investigated using connectionism as an approach to generalization. To evaluate the performance of different frameworks, a dynamic environment was used as a testbed. The environment is moderately complex and nondeterministic. This paper describes these frameworks and algorithms in detail and presents empirical evaluation of the frameworks.", "title": "" }, { "docid": "a986826041730d953dfbf9fbc1b115a6", "text": "This article presents a general class of associative reinforcement learning algorithms for connectionist networks containing stochastic units. These algorithms, called REINFORCE algorithms, are shown to make weight adjustments in a direction that lies along the gradient of expected reinforcement in both immediate-reinforcement tasks and certain limited forms of delayed-reinforcement tasks, and they do this without explicitly computing gradient estimates or even storing information from which such estimates could be computed. Specific examples of such algorithms are presented, some of which bear a close relationship to certain existing algorithms while others are novel but potentially interesting in their own right. Also given are results that show how such algorithms can be naturally integrated with backpropagation. We close with a brief discussion of a number of additional issues surrounding the use of such algorithms, including what is known about their limiting behaviors as well as further considerations that might be used to help develop similar but potentially more powerful reinforcement learning algorithms.", "title": "" } ]
[ { "docid": "5e4ab26751f36cad7b348320d71dd937", "text": "In this paper we examine the relations between parent spatial language input, children's own production of spatial language, and children's later spatial abilities. Using a longitudinal study design, we coded the use of spatial language (i.e. words describing the spatial features and properties of objects; e.g. big, tall, circle, curvy, edge) from child age 14 to 46 months in a diverse sample of 52 parent-child dyads interacting in their home settings. These same children were given three non-verbal spatial tasks, items from a Spatial Transformation task (Levine et al., 1999), the Block Design subtest from the WPPSI-III (Wechsler, 2002), and items on the Spatial Analogies subtest from Primary Test of Cognitive Skills (Huttenlocher & Levine, 1990) at 54 months of age. We find that parents vary widely in the amount of spatial language they use with their children during everyday interactions. This variability in spatial language input, in turn, predicts the amount of spatial language children produce, controlling for overall parent language input. Furthermore, children who produce more spatial language are more likely to perform better on spatial problem solving tasks at a later age.", "title": "" }, { "docid": "cff8ae2635684a6f0e07142175b7fbf1", "text": "Collaborative writing is on the increase. In order to write well together, authors often need to be aware of who has done what recently. We offer a new tool, DocuViz, that displays the entire revision history of Google Docs, showing more than the one-step-at-a-time view now shown in revision history and tracking changes in Word. We introduce the tool and present cases in which the tool has the potential to be useful: To authors themselves to see recent \"seismic activity,\" indicating where in particular a co-author might want to pay attention, to instructors to see who has contributed what and which changes were made to comments from them, and to researchers interested in the new patterns of collaboration made possible by simultaneous editing capabilities.", "title": "" }, { "docid": "e3459fda9310bb18e55caf505b13a08a", "text": "Variable-speed pulsewidth-modulated (PWM) drives allow for precise speed control of induction motors, as well as a high power factor and fast response characteristics, compared with nonelectronic speed controllers. However, due to the high switching frequencies and the high dV/dt, there are increased dielectric stresses in the insulation system of the motor, leading to premature failure, in high power and medium- and high-voltage motors. Studying the degradation mechanism of these insulation systems on an actual motor is both extremely costly and impractical. In addition, to replicate the aging process, the same waveform that the motor is subjected to should be applied to the test samples. As a result, a low-power two-level high-voltage PWM inverter has been built to replicate the voltage waveforms for aging processes. This generator allows for testing the insulation systems considering a real PWM waveform in which both the fast pulses and the fundamental low frequency are included. The results show that the effects of PWM waveforms cannot be entirely replicated by a unipolar pulse generator.", "title": "" }, { "docid": "266b9bfde23fdfaedb35d293f7293c93", "text": "We introduce a neural method for transfer learning between two (source and target) classification tasks or aspects over the same domain. 
Rather than training on target labels, we use a few keywords pertaining to source and target aspects indicating sentence relevance instead of document class labels. Documents are encoded by learning to embed and softly select relevant sentences in an aspect-dependent manner. A shared classifier is trained on the source encoded documents and labels, and applied to target encoded documents. We ensure transfer through aspect-adversarial training so that encoded documents are, as sets, aspect-invariant. Experimental results demonstrate that our approach outperforms different baselines and model variants on two datasets, yielding an improvement of 27% on a pathology dataset and 5% on a review dataset.", "title": "" }, { "docid": "47baaddefd3476ce55d39a0f111ade5a", "text": "We propose a novel method for classifying resume data of job applicants into 27 different job categories using convolutional neural networks. Since resume data is costly and hard to obtain due to its sensitive nature, we use domain adaptation. In particular, we train a classifier on a large number of freely available job description snippets and then use it to classify resume data. We empirically verify a reasonable classification performance of our approach despite having only a small amount of labeled resume data available.", "title": "" }, { "docid": "bcda82b5926620060f65506ccbac042f", "text": "This paper investigates spirolaterals for their beauty of form and the unexpected complexity arising from them. From a very simple generative procedure, spirolaterals can be created having great complexity and variation. Using mathematical and computer-based methods, issues of closure, variation, enumeration, and predictictability are discussed. A historical review is also included. The overriding interest in this research is to develop methods and procedures to investigate geometry for the purpose of inspiration for new architectural and sculptural forms. This particular phase will concern the two dimensional representations of spirolaterals.", "title": "" }, { "docid": "b2cf33b05e93d1c15a32a54e8bc60bed", "text": "Prevention of fraud and abuse has become a major concern of many organizations. The industry recognizes the problem and is just now starting to act. Although prevention is the best way to reduce frauds, fraudsters are adaptive and will usually find ways to circumvent such measures. Detecting fraud is essential once prevention mechanism has failed. Several data mining algorithms have been developed that allow one to extract relevant knowledge from a large amount of data like fraudulent financial statements to detect. In this paper we present an efficient approach for fraud detection. In our approach we first maintain a log file for data which contain the content separated by space, position and also the frequency. Then we encrypt the data by substitution method and send to the receiver end. We also send the log file to the receiver end before proceed to the encryption which is also in the form of secret message. So the receiver can match the data according to the content, position and frequency, if there is any mismatch occurs, we can detect the fraud and does not accept the file.", "title": "" }, { "docid": "7cf625ce06d335d7758c868514b4c635", "text": "Jeffrey's rule of conditioning has been proposed in order to revise a probability measure by another probability function. We generalize it within the framework of the models based on belief functions. 
We show that several forms of Jeffrey's conditionings can be defined that correspond to the geometrical rule of conditioning and to Dempster's rule of conditioning, respectively. 1. Jeffrey's rule in probability theory. In probability theory, conditioning on an event B is classically obtained by the application of Bayes' rule. Let (Ω, 𝒜, P) be a probability space where P(A) is the probability of the event A ∈ 𝒜, and 𝒜 is a Boolean algebra defined on a finite set Ω. P(A) quantifies the degree of belief or the objective probability, depending on the interpretation given to the probability measure, that a particular arbitrary element ω of Ω, which is not a priori located in any of the sets of 𝒜, belongs to a particular set A ∈ 𝒜. Suppose it is known that ω belongs to B ∈ 𝒜 and P(B) > 0. The probability measure P must be updated into P_B that quantifies the same event as previously but after taking into due consideration the knowledge that ω ∈ B. P_B is obtained by Bayes' rule of conditioning: P_B(A) = P(A ∩ B) / P(B). This rule can be obtained by requiring that: B1: for all B ∈ 𝒜, P_B(B) = 1; B2: for all B ∈ 𝒜 and all X, Y ∈ 𝒜 such that X, Y ⊆ B and P(Y) > 0, P_B(X) / P_B(Y) = P(X) / P(Y), with P_B(Y) = 0 whenever P(Y) = 0", "title": "" }, { "docid": "6c4d6eff1fb7ef03efc3197726545ed8", "text": "Gait enjoys advantages over other biometrics in that it can be perceived from a distance and is difficult to disguise. Current approaches are mostly statistical and concentrate on walking only. By analysing leg motion we show how we can recognise people not only by the walking gait, but also by the running gait. This is achieved by either of two new modelling approaches which employ coupled oscillators and the biomechanics of human locomotion as the underlying concepts. These models give a plausible method for data reduction by providing estimates of the inclination of the thigh and of the leg, from the image data. Both approaches derive a phase-weighted Fourier description gait signature by automated non-invasive means. One approach is completely automated whereas the other requires specification of a single parameter to distinguish between walking and running. Results show that both gaits are potential biometrics, with running being more potent. By its basis in evidence gathering, this new technique can tolerate noise and low resolution.", "title": "" }, { "docid": "c8a0276919005f36a587d7d209063e2f", "text": "Praveen Prakash1, Kuttapa Nishanth2, Nikul Jasani1, Aneesh Katyal1, US Krishna Nayak3 1Post Graduate Student, Department of Orthodontics & Dentofacial Orthopaedics, A.B. Shetty Memorial Institute of Dental Sciences, Mangalore, Karnataka, India, 2Professor, Department of Orthodontics & Dentofacial Orthopaedics, A.B. Shetty Memorial Institute of Dental Sciences, Mangalore, Karnataka, India, 3Dean Academics, Head of Department, Department of Orthodontics & Dentofacial Orthopaedics, A.B. Shetty Memorial Institute of Dental Sciences, Mangalore, Karnataka, India", "title": "" }, { "docid": "22841c2d63cf94f76643244475b547cb", "text": "Problems of reference, identity, and meaning are becoming increasingly endemic on the Web. We focus first on the convergence between Web architecture and classical problems in philosophy, leading to the advent of “philosophical engineering.” We survey how the Semantic Web initiative in particular provoked an “identity crisis” for the Web due to its use of URIs for both “things” and web pages and the W3C's proposed solution.
The problem of reference is inspected in relation to both the direct object theory of reference of Russell and the causal theory of reference of Kripke, and the proposed standards of new URN spaces and Published Subjects. Then we progress onto the problem of meaning in light of the Fregean slogan of the priority of truth over reference and the notion of logical interpretation. The popular notion of “social meaning” and the practice of tagging as a possible solution is analyzed in light of the ideas of Lewis on convention. Finally, we conclude that a full notion of meaning, identity, and reference may be possible, but that it is an open problem whether or not practical implementations and standards can be created. 1. PHILOSOPHICAL ENGINEERING While the Web epitomizes the beginning of a new digital era, it has also caused an untimely return of philosophical issues in identify, reference, and meaning. These questions are thought of as a “black hole” that has long puzzled philosophers and logicians. Up until now, there has been little incentive outside academic philosophy to solve these issues in any practical manner. Could there be any connection between the fast-paced world of the Web and philosophers who dwell upon unsolvable questions? In a surprising move, the next stage in the development of the Web seems to be signalling a return to the very same questions of identity, reference, and meaning that have troubled philosophers for so long. While the hypertext Web has skirted around these questions, attempts at increasing the scope of the Web can not: “The Semantic Web is not a separate Web but an extension of the current one, in which information is given well-defined meaning, better enabling computers and people to work in cooperation” [41]. Meaning is a thorny word: do we define meaning as “machine-readable” or “has a relation to a formal model?” Or do we define meaning as “easily understood by humans,” or “somehow connected to the world in a roCopyright is held by the author/owner(s). WWW2006, May 22–26, 2006, Edinburgh, UK. . bust manner?” Further progress in creating both satisfying and pragmatic solutions to these problems in the context of the Web is possible since currently many of these questions are left underspecified by current Web standards. While many in philosophy seem to be willing to hedge their bets in various ideological camps, on the Web there is a powerful urge to co-operate. There is a distinct difference between the classical posing of these questions in philosophy and these questions in the context of the Web, since the Web is a human artifact. The inventor of the Web, Tim Berners-Lee, summarized this position: “We are not analyzing a world, we are building it. We are not experimental philosophers, we are philosophical engineers” [2]. 2. THE IDENTITY CRISIS OF URIS The first step in the creation of the Semantic Web was to extend the use of a URI (Uniform Resource Identifier) to identify not just web pages, but anything. This was historically always part of Berners-Lee’s vision, but only recently came to light with Semantic Web standardization efforts and has caused disagreement from some of the other original Web architects like Larry Masinter, co-author of the URI standard [4]. In contrast to past practice that generally used URIs for web pages, URIs could be given to things traditionally thought of as “not on the Web” such as concepts and people. 
The guiding example is that instead of just visiting Tim Berners-Lee’s web page to retrieve a representation of Tim Berners-Lee via http, you could use the Semantic Web to make statements about Tim himself, such as where he works or the color of his hair. Early proposals made a chasm across URIs, dividing them into URLs and URNs. URIs for web pages (documents) are URLs (Uniform Resource Locators) that could use a scheme such as http to perform a “variety of operations” on a resource[5]. In contrast, URNs (Uniform Resource Names) purposely avoided such access mechanisms in order to create “persistent, location-independent, resource identifiers” [29]. URNs were not widely adopted, perhaps due to their centralized nature that required explicitly registering them with IANA. In response, URLs were just called “URIs” and used not only for web pages, but for things not on the web. Separate URN standards such as Masinter’s tdb URN space have been declared, but have not been widely adopted [27]. Instead, people use http in general to identify both web pages and things. There is one sensible solution to get a separate URI for the thing if one has a URI that currently serves a representation of a thing, but one wishes to make statements about the thing itself. First, one can use URI redirection for a URI about a thing and then resolve the redirection to a more informative web page [28]. The quickest way to do this is to append a “hash” (fragment identifier) onto the end of a URI, and so the redirection happens automatically. This is arguably an abuse of fragment identifiers which were originally meant for client-side processing. Yet according to the W3C, using a fragment identifier technically also identifies a separate and distinct “secondary resource” [23]. Regardless, this ability to talk about anything with URIs leads to a few practical questions: Can I make a statement on the Semantic Web about Tim Berners-Lee by making a statement about his home-page? If he is using a separate URI for himself, should he deliver a representation of himself? However, in all these cases there is the lurking threat of ambiguity: There is no principled way to distinguish a URI for a web page versus a URI for a thing “not on the Web.” This was dubbed the Identity Crisis, and has spawned endless discussions ever since [9]. For web pages (or “documents”) it’s pretty easy to tell what a URI identifies: The URI identifies the stream of bits that one gets when one accesses the URI with whatever operations are allowed by the scheme of the URI. Therefore, unlike names in natural language, URIs often imply the potential possession of whatever representations the URI gives one access to, and in a Wittgenstein-like move David Booth declares that there is only a myth of identity [7]. What a URI identifies or means is a question of use. “The problem of URI identity is the problem of locating appropriate descriptive information about the associated resource – descriptive information that enables you to make use of that URI in a particular application” [7]. In general, this should be the minimal amount of information one can get away with to make sure that the URI is used properly in a particular application. However, if the meaning of a URI is its use, then this use can easily change between applications, and nothing about the meaning (use) of a URI should be assumed to be invariant across applications. 
While this is a utilitarian and attractive reading, it prevents the one thing the Web is supposed to allow: a universal information space. Since the elementary building blocks of this space, URIs, are meaningless without the concrete context of an application, and each applications may have orthogonal contexts, there is no way an application can share its use of URIs in general with other applications. 3. URIS IDENTIFY ONE THING Tim Berners-Lee has stated that URIs “identify one thing” [3]. This thing is a resource. The most current IETF RFC for URIs states that it does not “limit the scope of what might be a resource” but that a resource “is used in a general sense for whatever might be identified by a URI” such as “human beings, corporations, and bound books in a library” and even “abstract concepts” [4]. An earlier RFC tried to ground out the concept of a resource as “the conceptual mapping to an entity or set of entities, not necessarily the entity which corresponds to that mapping at any particular instance in time” in order to deal with changes in particular representations over time, as exemplified by the web sites of newspapers like http://www.guardian.co.uk [4]. If a URI identifies a conceptual mapping, in whose head in that conceptual mapping? The user of the URI, the owner of the URI, or some common understanding? While Tim Berners-Lee argues that the URI owner dictates what thing the URI identifies, Larry Masinter believes the user should be the final authority, and calls this “the power of readers over writers.” Yet this psychological middle-man is dropped in the latest RFC that states that URIs provide “a simple and extensible means for identifying a resource,” and a resource is “whatever might be identified by a URI” [4]. In this manner, a resource and its URI become disturbing close to a tautology. Given a URI, what does it identify? A resource. What’s a resource? It’s what the URI identifies. According to Berners-Lee, in a given RDF statement, a URI should identify one resource. Furthermore, this URI identifies one thing in a “global context” [2]. This position taken to an extreme leads to problems: given two textually distinct URIs, is it possible they could identify the same thing? How can we judge if they identify the same thing? The classic definition of identity is whether or not two objects are in fact, on some given level, the same. The classic formulation is Leibniz’s Law, which states if two objects have all their properties in common, then they are identical and so only one object [25]. With web pages, one can compare the representations byte-by-byte even if the URIs are different, and so we can say two mirrors of a web pages have the sa", "title": "" }, { "docid": "0626c39604a1dde16a5d27de1c4cef24", "text": "Two dimensional (2D) materials with a monolayer of atoms represent an ultimate control of material dimension in the vertical direction. Molybdenum sulfide (MoS2) monolayers, with a direct bandgap of 1.8 eV, offer an unprecedented prospect of miniaturizing semiconductor science and technology down to a truly atomic scale. Recent studies have indeed demonstrated the promise of 2D MoS2 in fields including field effect transistors, low power switches, optoelectronics, and spintronics. However, device development with 2D MoS2 has been delayed by the lack of capabilities to produce large-area, uniform, and high-quality MoS2 monolayers. 
Here we present a self-limiting approach that can grow high quality monolayer and few-layer MoS2 films over an area of centimeters with unprecedented uniformity and controllability. This approach is compatible with the standard fabrication process in semiconductor industry. It paves the way for the development of practical devices with 2D MoS2 and opens up new avenues for fundamental research.", "title": "" }, { "docid": "3ee39231fc2fbf3b6295b1b105a33c05", "text": "We address a text regression problem: given a piece of text, predict a real-world continuous quantity associated with the text’s meaning. In this work, the text is an SEC-mandated financial report published annually by a publiclytraded company, and the quantity to be predicted is volatility of stock returns, an empirical measure of financial risk. We apply wellknown regression techniques to a large corpus of freely available financial reports, constructing regression models of volatility for the period following a report. Our models rival past volatility (a strong baseline) in predicting the target variable, and a single model that uses both can significantly outperform past volatility. Interestingly, our approach is more accurate for reports after the passage of the Sarbanes-Oxley Act of 2002, giving some evidence for the success of that legislation in making financial reports more informative.", "title": "" }, { "docid": "9932770cfc46cee41fb0b37a72771410", "text": "This study explores the extent to which a bilingual advantage can be observed for three tasks in an established population of fully fluent bilinguals from childhood through adulthood. Welsh-English simultaneous and early sequential bilinguals, as well as English monolinguals, aged 3 years through older adults, were tested on three sets of cognitive and executive function tasks. Bilinguals were Welsh-dominant, balanced, or English-dominant, with only Welsh, Welsh and English, or only English at home. Card sorting, Simon, and a metalinguistic judgment task (650, 557, and 354 participants, respectively) reveal little support for a bilingual advantage, either in relation to control or globally. Primarily there is no difference in performance across groups, but there is occasionally better performance by monolinguals or persons dominant in the language being tested, and in one case-in one condition and in one age group-lower performance by the monolinguals. The lack of evidence for a bilingual advantage in these simultaneous and early sequential bilinguals suggests the need for much closer scrutiny of what type of bilingual might demonstrate the reported effects, under what conditions, and why.", "title": "" }, { "docid": "58640b446a3c03ab8296302498e859a5", "text": "With Islands of Music we present a system which facilitates exploration of music libraries without requiring manual genre classification. Given pieces of music in raw audio format we estimate their perceived sound similarities based on psychoacoustic models. Subsequently, the pieces are organized on a 2-dimensional map so that similar pieces are located close to each other. A visualization using a metaphor of geographic maps provides an intuitive interface where islands resemble genres or styles of music. We demonstrate the approach using a collection of 359 pieces of music.", "title": "" }, { "docid": "b5d18b82e084042a6f31cb036ee83af5", "text": "In this paper, signal and power integrity of complete High Definition Multimedia Interface (HDMI) channel with IBIS-AMI model is presented. 
Gigahertz serialization and deserialization (SERDES) has become a leading inter-chip and inter-board data transmission technique in high-end computing devices. The IBIS-AMI model is used for circuit simulation of high-speed serial interfaces. A 3D frequency-domain simulator (FEM) was used to estimate the channel loss for data bus and HDMI connector. Compliance testing is performed for HDMI channels to ensure channel parameters are meeting HDMI specifications.", "title": "" }, { "docid": "28c03f6fb14ed3b7d023d0983cb1e12b", "text": "The focus of this paper is speeding up the application of convolutional neural networks. While delivering impressive results across a range of computer vision and machine learning tasks, these networks are computationally demanding, limiting their deployability. Convolutional layers generally consume the bulk of the processing time, and so in this work we present two simple schemes for drastically speeding up these layers. This is achieved by exploiting cross-channel or filter redundancy to construct a low rank basis of filters that are rank-1 in the spatial domain. Our methods are architecture agnostic, and can be easily applied to existing CPU and GPU convolutional frameworks for tuneable speedup performance. We demonstrate this with a real world network designed for scene text character recognition [15], showing a possible 2.5⇥ speedup with no loss in accuracy, and 4.5⇥ speedup with less than 1% drop in accuracy, still achieving state-of-the-art on standard benchmarks.", "title": "" }, { "docid": "697ae7ff6a0ace541ea0832347ba044f", "text": "The repair of wounds is one of the most complex biological processes that occur during human life. After an injury, multiple biological pathways immediately become activated and are synchronized to respond. In human adults, the wound repair process commonly leads to a non-functioning mass of fibrotic tissue known as a scar. By contrast, early in gestation, injured fetal tissues can be completely recreated, without fibrosis, in a process resembling regeneration. Some organisms, however, retain the ability to regenerate tissue throughout adult life. Knowledge gained from studying such organisms might help to unlock latent regenerative pathways in humans, which would change medical practice as much as the introduction of antibiotics did in the twentieth century.", "title": "" } ]
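One of the passages listed above predicts stock-return volatility from the text of annual financial reports using standard regression techniques. The snippet below is a minimal sketch of that general recipe only, TF-IDF bag-of-words features fed to ridge regression; the tiny corpus, the target values, and the hyperparameters are made up for illustration and are not the features or data of that study.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
import numpy as np

# Tiny made-up corpus standing in for report sections; targets stand in for log volatility.
reports = [
    "substantial risk factors and litigation exposure remain",
    "stable cash flow and modest operating risk",
    "uncertainty in demand and significant currency risk",
    "strong recurring revenue with limited exposure",
]
log_volatility = np.array([0.9, 0.2, 0.8, 0.1])

vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=1)
X = vectorizer.fit_transform(reports)

model = Ridge(alpha=1.0)   # L2-regularised linear regression on sparse text features
model.fit(X, log_volatility)

new_report = ["elevated litigation risk and demand uncertainty"]
print(model.predict(vectorizer.transform(new_report)))
```

The point of the sketch is only the pipeline shape: vectorize report text, fit a regularised linear model, and predict a continuous risk quantity for unseen reports.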
scidocsrr
a415503ceb55bfe061cf67864f66da36
Insight and reduction of MapReduce stragglers in heterogeneous environment
[ { "docid": "8222f36e2aa06eac76085fb120c8edab", "text": "Small jobs, that are typically run for interactive data analyses in datacenters, continue to be plagued by disproportionately long-running tasks called stragglers. In the production clusters at Facebook and Microsoft Bing, even after applying state-of-the-art straggler mitigation techniques, these latency sensitive jobs have stragglers that are on average 8 times slower than the median task in that job. Such stragglers increase the average job duration by 47%. This is because current mitigation techniques all involve an element of waiting and speculation. We instead propose full cloning of small jobs, avoiding waiting and speculation altogether. Cloning of small jobs only marginally increases utilization because workloads show that while the majority of jobs are small, they only consume a small fraction of the resources. The main challenge of cloning is, however, that extra clones can cause contention for intermediate data. We use a technique, delay assignment, which efficiently avoids such contention. Evaluation of our system, Dolly, using production workloads shows that the small jobs speedup by 34% to 46% after state-of-the-art mitigation techniques have been applied, using just 5% extra resources for cloning.", "title": "" } ]
[ { "docid": "b9538c45fc55caff8b423f6ecc1fe416", "text": " Summary. The Probabilistic I/O Automaton model of [31] is used as the basis for a formal presentation and proof of the randomized consensus algorithm of Aspnes and Herlihy. The algorithm guarantees termination within expected polynomial time. The Aspnes-Herlihy algorithm is a rather complex algorithm. Processes move through a succession of asynchronous rounds, attempting to agree at each round. At each round, the agreement attempt involves a distributed random walk. The algorithm is hard to analyze because of its use of nontrivial results of probability theory (specifically, random walk theory which is based on infinitely many coin flips rather than on finitely many coin flips), because of its complex setting, including asynchrony and both nondeterministic and probabilistic choice, and because of the interplay among several different sub-protocols. We formalize the Aspnes-Herlihy algorithm using probabilistic I/O automata. In doing so, we decompose it formally into three subprotocols: one to carry out the agreement attempts, one to conduct the random walks, and one to implement a shared counter needed by the random walks. Properties of all three subprotocols are proved separately, and combined using general results about automaton composition. It turns out that most of the work involves proving non-probabilistic properties (invariants, simulation mappings, non-probabilistic progress properties, etc.). The probabilistic reasoning is isolated to a few small sections of the proof. The task of carrying out this proof has led us to develop several general proof techniques for probabilistic I/O automata. These include ways to combine expectations for different complexity measures, to compose expected complexity properties, to convert probabilistic claims to deterministic claims, to use abstraction mappings to prove probabilistic properties, and to apply random walk theory in a distributed computational setting. We apply all of these techniques to analyze the expected complexity of the algorithm.", "title": "" }, { "docid": "5183794d8bef2d8f2ee4048d75a2bd3c", "text": "Uncovering the topics within short texts, such as tweets and instant messages, has become an important task for many content analysis applications. However, directly applying conventional topic models (e.g. LDA and PLSA) on such short texts may not work well. The fundamental reason lies in that conventional topic models implicitly capture the document-level word co-occurrence patterns to reveal topics, and thus suffer from the severe data sparsity in short documents. In this paper, we propose a novel way for modeling topics in short texts, referred as biterm topic model (BTM). Specifically, in BTM we learn the topics by directly modeling the generation of word co-occurrence patterns (i.e. biterms) in the whole corpus. The major advantages of BTM are that 1) BTM explicitly models the word co-occurrence patterns to enhance the topic learning; and 2) BTM uses the aggregated patterns in the whole corpus for learning topics to solve the problem of sparse word co-occurrence patterns at document-level. We carry out extensive experiments on real-world short text collections. The results demonstrate that our approach can discover more prominent and coherent topics, and significantly outperform baseline methods on several evaluation metrics. 
Furthermore, we find that BTM can outperform LDA even on normal texts, showing the potential generality and wider usage of the new topic model.", "title": "" }, { "docid": "d612ca22b9895c0e85f2b64327a1b22c", "text": "Physical inactivity has been associated with increasing prevalence and mortality of cardiovascular and other diseases. The purpose of this study is to identify if there is an association between, self–efficacy, mental health, and physical inactivity among university students. The study comprises of 202 males and 692 females age group 18-25 years drawn from seven faculties selected using a table of random numbers. Questionnaires were used for the data collection. The findings revealed that the prevalence of physical inactivity among the respondents was 41.4%. Using a univariate analysis, the study showed that there was an association between gender (female), low family income, low self-efficacy, respondents with mental health probable cases and physical inactivity (p<0.05).Using a multivariate analysis, physical inactivity was higher among females(OR = 3.72, 95% CI = 2.399-5.788), low family income (OR = 4.51, 95% CI = 3.266 – 6.241), respondents with mental health probable cases (OR = 1.58, 95% CI = 1.1362.206) and low self-efficacy for pysical activity(OR = 1.86, 95% CI = 1.350 2.578).Conclusively there is no significant decrease in physical inactivity among university students when compared with previous studies in this population, it is therefore recommended that counselling on mental health, physical activity awareness among new university students should be encouraged. Keyword:Exercise,Mental Health, Self-Efficacy,Physical Inactivity, University students", "title": "" }, { "docid": "0188eb4ef8a87b6cee8657018360fa69", "text": "This paper presents a pattern division multiple access (PDMA) concept for cellular future radio access (FRA) towards the 2020s information society. Different from the current LTE radio access scheme (until Release 11), PDMA is a novel non-orthogonal multiple access technology based on the total optimization of multiple user communication system. It considers joint design from both transmitter and receiver. At the receiver, multiple users are detected by successive interference cancellation (SIC) detection method. Numerical results show that the PDMA system based on SIC improve the average sum rate of users over the orthogonal system with affordable complexity.", "title": "" }, { "docid": "7b989f3da78e75d9616826644d210b79", "text": "BACKGROUND\nUse of cannabis is often an under-reported activity in our society. Despite legal restriction, cannabis is often used to relieve chronic and neuropathic pain, and it carries psychotropic and physical adverse effects with a propensity for addiction. This article aims to update the current knowledge and evidence of using cannabis and its derivatives with a view to the sociolegal context and perspectives for future research.\n\n\nMETHODS\nCannabis use can be traced back to ancient cultures and still continues in our present society despite legal curtailment. The active ingredient, Δ9-tetrahydrocannabinol, accounts for both the physical and psychotropic effects of cannabis. Though clinical trials demonstrate benefits in alleviating chronic and neuropathic pain, there is also significant potential physical and psychotropic side-effects of cannabis. 
Recent laboratory data highlight synergistic interactions between cannabinoid and opioid receptors, with potential reduction of drug-seeking behavior and opiate sparing effects. Legal rulings also have changed in certain American states, which may lead to wider use of cannabis among eligible persons.\n\n\nCONCLUSIONS\nFamily physicians need to be cognizant of such changing landscapes with a practical knowledge on the pros and cons of medical marijuana, the legal implications of its use, and possible developments in the future.", "title": "" }, { "docid": "969c83b4880879f1137284f531c9f94a", "text": "The extant literature on cross-national differences in approaches to corporate social responsibility (CSR) has mostly focused on developed countries. Instead, we offer two interrelated studies into corporate codes of conduct issued by developing country multinational enterprises (DMNEs). First, we analyse code adoption rates and code content through a mixed methods design. Second, we use multilevel analyses to examine country-level drivers of", "title": "" }, { "docid": "ad004dd47449b977cd30f2454c5af77a", "text": "Plants are a tremendous source for the discovery of new products of medicinal value for drug development. Today several distinct chemicals derived from plants are important drugs currently used in one or more countries in the world. Many of the drugs sold today are simple synthetic modifications or copies of the naturally obtained substances. The evolving commercial importance of secondary metabolites has in recent years resulted in a great interest in secondary metabolism, particularly in the possibility of altering the production of bioactive plant metabolites by means of tissue culture technology. Plant cell culture technologies were introduced at the end of the 1960’s as a possible tool for both studying and producing plant secondary metabolites. Different strategies, using an in vitro system, have been extensively studied to improve the production of plant chemicals. The focus of the present review is the application of tissue culture technology for the production of some important plant pharmaceuticals. Also, we describe the results of in vitro cultures and production of some important secondary metabolites obtained in our laboratory.", "title": "" }, { "docid": "037df2435ae0f995a40d5cce429af5cb", "text": "Breakthroughs from the field of deep learning are radically changing how sensor data are interpreted to extract important information to help advance healthcare, make our cities smarter, and innovate in smart home technology. Deep convolutional neural networks, which are at the heart of many emerging Internet-of-Things (IoT) applications, achieve remarkable performance in audio and visual recognition tasks, at the expense of high computational complexity in convolutional layers, limiting their deployability. In this paper, we present an easy-to-implement acceleration scheme, named ADaPT, which can be applied to already available pre-trained networks. Our proposed technique exploits redundancy present in the convolutional layers to reduce computation and storage requirements. Additionally, we also decompose each convolution layer into two consecutive one-dimensional stages to make full use of the approximate model. This technique can easily be applied to existing low power processors, GPUs or new accelerators. We evaluated this technique using four diverse and widely used benchmarks, on hardware ranging from embedded CPUs to server GPUs. 
Our experiments show an average 3-5x speed-up in all deep models and a maximum 8-9x speed-up on many individual convolutional layers. We demonstrate that unlike iterative pruning based methodology, our approximation technique is mathematically well grounded, robust, does not require any time-consuming retraining, and still achieves speed-ups solely from convolutional layers with no loss in baseline accuracy.", "title": "" }, { "docid": "3df95e4b2b1bb3dc80785b25c289da92", "text": "The problem of efficiently locating previously known patterns in a time series database (i.e., query by content) has received much attention and may now largely be regarded as a solved problem. However, from a knowledge discovery viewpoint, a more interesting problem is the enumeration of previously unknown, frequently occurring patterns. We call such patterns “motifs”, because of their close analogy to their discrete counterparts in computation biology. An efficient motif discovery algorithm for time series would be useful as a tool for summarizing and visualizing massive time series databases. In addition it could be used as a subroutine in various other data mining tasks, including the discovery of association rules, clustering and classification. In this work we carefully motivate, then introduce, a nontrivial definition of time series motifs. We propose an efficient algorithm to discover them, and we demonstrate the utility and efficiency of our approach on several real world datasets.", "title": "" }, { "docid": "d4766ccd502b9c35ee83631fadc69aaf", "text": "The approach proposed by Śliwerski, Zimmermann, and Zeller (SZZ) for identifying bug-introducing changes is at the foundation of several research areas within the software engineering discipline. Despite the foundational role of SZZ, little effort has been made to evaluate its results. Such an evaluation is a challenging task because the ground truth is not readily available. By acknowledging such challenges, we propose a framework to evaluate the results of alternative SZZ implementations. The framework evaluates the following criteria: (1) the earliest bug appearance, (2) the future impact of changes, and (3) the realism of bug introduction. We use the proposed framework to evaluate five SZZ implementations using data from ten open source projects. We find that previously proposed improvements to SZZ tend to inflate the number of incorrectly identified bug-introducing changes. We also find that a single bug-introducing change may be blamed for introducing hundreds of future bugs. Furthermore, we find that SZZ implementations report that at least 46 percent of the bugs are caused by bug-introducing changes that are years apart from one another. Such results suggest that current SZZ implementations still lack mechanisms to accurately identify bug-introducing changes. Our proposed framework provides a systematic mean for evaluating the data that is generated by a given SZZ implementation.", "title": "" }, { "docid": "5c8ab947856945b32d4d3e0edc89a9e0", "text": "While MOOCs offer educational data on a new scale, many educators find great potential of the big data including detailed activity records of every learner. A learner's behavior such as if a learner will drop out from the course can be predicted. How to provide an effective, economical, and scalable method to detect cheating on tests such as surrogate exam-taker is a challenging problem. 
In this paper, we present a grade predicting method that uses student activity features to predict whether a learner may get a certification if he/she takes a test. The method consists of two-step classifications: motivation classification (MC) and grade classification (GC). The MC divides all learners into three groups including certification earning, video watching, and course sampling. The GC then predicts a certification earning learner may or may not obtain a certification. Our experiment shows that the proposed method can fit the classification model at a fine scale and it is possible to find a surrogate exam-taker.", "title": "" }, { "docid": "5ca75490c015685a1fc670b2ee5103ff", "text": "The motion of the hand is the result of a complex interaction of extrinsic and intrinsic muscles of the forearm and hand. Whereas the origin of the extrinsic hand muscles is mainly located in the forearm, the origin (and insertion) of the intrinsic muscles is located within the hand itself. The intrinsic muscles of the hand include the lumbrical muscles I to IV, the dorsal and palmar interosseous muscles, the muscles of the thenar eminence (the flexor pollicis brevis, the abductor pollicis brevis, the adductor pollicis, and the opponens pollicis), as well as the hypothenar muscles (the abductor digiti minimi, flexor digiti minimi, and opponens digiti minimi). The thenar muscles control the motion of the thumb, and the hypothenar muscles control the motion of the little finger.1,2 The intrinsic muscles of the hand have not received much attention in the radiologic literature, despite their importance in moving the hand.3–7 Prospective studies on magnetic resonance (MR) imaging of the intrinsic muscles of the hand are rare, especially with a focus on new imaging techniques.6–8 However, similar to the other skeletal muscles, the intrinsic muscles of the hand can be affected by many conditions with resultant alterations in MR signal intensity ormorphology (e.g., with congenital abnormalities, inflammation, infection, trauma, neurologic disorders, and neoplastic conditions).1,9–12 MR imaging plays an important role in the evaluation of skeletal muscle disorders. Considered the most reliable diagnostic imaging tool, it can show subtle changes of signal and morphology, allow reliable detection and documentation of abnormalities, as well as provide a clear baseline for follow-up studies.13 It is also observer independent and allows second-opinion evaluation that is sometimes necessary, for example before a multidisciplinary discussion. Few studies exist on the clinical impact of MR imaging of the intrinsic muscles of the hand. A study by Andreisek et al in 19 patients with clinically evident or suspected intrinsic hand muscle abnormalities showed that MR imaging of the hand is useful and correlates well with clinical findings in patients with posttraumatic syndromes, peripheral neuropathies, myositis, and tumorous lesions, as well as congenital abnormalities.14,15 Because there is sparse literature on the intrinsic muscles of the hand, this review article offers a comprehensive review of muscle function and anatomy, describes normal MR imaging anatomy, and shows a spectrum of abnormal imaging findings.", "title": "" }, { "docid": "ac3d9b8a93cb18449b76b2f2ef818d76", "text": "Slotless brushless dc motors find more and more applications due to their high performance and their low production cost. 
This paper focuses on the windings inserted in the air gap of these motors and, in particular, to an original production technique that consists in printing them on a flexible printed circuit board. It theoretically shows that this technique, when coupled with an optimization of the winding shape, can improve the power density of about 23% compared with basic skewed and rhombic windings made of round wire. It also presents a first prototype of a winding realized using this technique and an experimental characterization aimed at identifying the importance and the origin of the differences between theory and practice.", "title": "" }, { "docid": "dffc11786d4a0d9247e22445f48d8fca", "text": "Tuberization in potato (Solanum tuberosum L.) is a complex biological phenomenon which is affected by several environmental cues, genetic factors and plant nutrition. Understanding the regulation of tuber induction is essential to devise strategies to improve tuber yield and quality. It is well established that short-day photoperiods promote tuberization, whereas long days and high-temperatures inhibit or delay tuberization. Worldwide research on this complex biological process has yielded information on the important bio-molecules (proteins, RNAs, plant growth regulators) associated with the tuberization process in potato. Key proteins involved in the regulation of tuberization include StSP6A, POTH1, StBEL5, StPHYB, StCONSTANS, Sucrose transporter StSUT4, StSP5G, etc. Biomolecules that become transported from \"source to sink\" have also been suggested to be important signaling candidates regulating the tuberization process in potatos. Four molecules, namely StSP6A protein, StBEL5 RNA, miR172 and GAs, have been found to be the main candidates acting as mobile signals for tuberization. These biomolecules can be manipulated (overexpressed/inhibited) for improving the tuberization in commercial varieties/cultivars of potato. In this review, information about the genes/proteins and their mechanism of action associated with the tuberization process is discussed.", "title": "" }, { "docid": "926734e0a379f678740d07c1042a5339", "text": "The increasing pervasiveness of digital technologies, also refered to as \"Internet of Things\" (IoT), offers a wealth of business model opportunities, which often involve an ecosystem of partners. In this context, companies are required to look at business models beyond a firm-centric lens and respond to changed dynamics. However, extant literature has not yet provided actionable approaches for business models for IoT-driven environments. Our research therefore addresses the need for a business model framework that captures the specifics of IoT-driven ecosystems. Applying an iterative design science research approach, the present paper describes (a) the methodology, (b) the requirements, (c) the design and (d) the evaluation of a business model framework that enables researchers and practitioners to visualize, analyze and design business models in the IoT context in a structured and actionable way. The identified dimensions in the framework include the value network of collaborating partners (who); sources of value creation (where); benefits from collaboration (why). 
Evidence from action research and multiple case studies indicates that the framework is able to depict business models in IoT.", "title": "" }, { "docid": "35c08abd57d2700164373c688c24b2a6", "text": "Image enhancement is a common pre-processing step before the extraction of biometric features from a fingerprint sample. This can be essential, especially for images of low quality. An ideal fingerprint image enhancement should aim to improve the end-to-end biometric performance, i.e. the performance achieved on biometric features extracted from enhanced fingerprint samples. We use a model from Deep Learning for the task of image enhancement. This work's main contribution is a dedicated cost function which is optimized during training. The cost function takes into account the biometric feature extraction. Our approach intends to improve the accuracy and reliability of the biometric feature extraction process: no feature should be missed and all features should be extracted as precisely as possible. By doing so, the cost function forces the image enhancement to learn how to improve the suitability of a fingerprint sample for a biometric comparison process. The effectiveness of the cost function was demonstrated for two different biometric feature extraction algorithms.", "title": "" }, { "docid": "a870b0b347d15d8e8c788ede7ff5fa4a", "text": "On the twentieth anniversary of the original publication [10], following ten years of intense activity in the research literature, hardware support for transactional memory (TM) has finally become a commercial reality, with HTM-enabled chips currently or soon to be available from many hardware vendors. In this paper we describe architectural support for TM added to a future version of the Power ISA™. Two imperatives drove the development: the desire to complement our weakly-consistent memory model with a more friendly interface to simplify the development and porting of multithreaded applications, and the need for robustness beyond that of some early implementations. In the process of commercializing the feature, we had to resolve some previously unexplored interactions between TM and existing features of the ISA, for example translation shootdown, interrupt handling, atomic read-modify-write primitives, and our weakly consistent memory model. We describe these interactions, the overall architecture, and discuss the motivation and rationale for our choices of architectural semantics, beyond what is typically found in reference manuals.", "title": "" }, { "docid": "9c008dc2f3da4453317ce92666184da0", "text": "In embedded system design, there is an increasing demand for modeling techniques that can provide both accurate measurements of delay and fast simulation speed. Modeling latency effects of a cache can greatly increase accuracy of the simulation and assist developers to optimize their software. Current solutions have not succeeded in balancing three important factors: speed, accuracy and usability. In this research, we created a cache simulation module inside a well-known instruction set simulator, QEMU. Our implementation can simulate various cases of cache configuration and obtain every memory access. In full system simulation, speed is kept at around 73 MIPS on a personal host computer, which is close to the native execution speed of an ARM Cortex-M3 (125 MIPS at 100 MHz). Compared to the widely used cache simulation tool, Valgrind, our simulator is three times faster.", "title": "" }, { "docid": "5d9106a06f606cefb3b24fb14c72d41a", "text": "Most existing relation extraction models make predictions for each entity pair locally and individually, while ignoring implicit global clues available in the knowledge base, sometimes leading to conflicts among local predictions from different entity pairs. In this paper, we propose a joint inference framework that utilizes these global clues to resolve disagreements among local predictions. We exploit two kinds of clues to generate constraints which can capture the implicit type and cardinality requirements of a relation. Experimental results on three datasets, in both English and Chinese, show that our framework outperforms the state-of-the-art relation extraction models when such clues are applicable to the datasets. And we find that the clues learnt automatically from existing knowledge bases perform comparably to those refined by humans.", "title": "" } ]
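The QEMU cache-simulation passage above hinges on counting hits and misses for every memory access and converting them into latency. As a hedged, minimal sketch of that idea, and not the module described in the paper, the following Python model implements a direct-mapped cache over an address trace; the line size, line count, and cycle costs are invented for illustration.

```python
# Minimal direct-mapped cache model: count hits/misses for an address stream
# and turn them into an estimated cycle cost. All sizes/latencies are assumptions.
LINE_SIZE = 32            # bytes per cache line
N_LINES = 256             # direct-mapped: one tag per line index
HIT_CYCLES, MISS_CYCLES = 1, 20

tags = [None] * N_LINES
hits = misses = 0

def access(addr):
    global hits, misses
    line = addr // LINE_SIZE
    index, tag = line % N_LINES, line // N_LINES
    if tags[index] == tag:
        hits += 1
    else:
        misses += 1
        tags[index] = tag     # fill the line on a miss

# Example trace: a sequential sweep over 8 KiB, done twice (second pass mostly hits).
trace = list(range(0, 8192, 4)) * 2
for a in trace:
    access(a)

cycles = hits * HIT_CYCLES + misses * MISS_CYCLES
print(f"hits={hits} misses={misses} estimated cycles={cycles}")
```

A real simulator additionally has to observe every guest memory access (which is what the QEMU module provides) and model associativity and write policies, but the accounting step reduces to this kind of bookkeeping.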
scidocsrr
c4ff1bae68e8e1d9cde109c65924ede6
Enhancing CNN Incremental Learning Capability with an Expanded Network
[ { "docid": "7d112c344167add5749ab54de184e224", "text": "Since Krizhevsky won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012 competition with the brilliant deep convolutional neural networks (D-CNNs), researchers have designed lots of D-CNNs. However, almost all the existing very deep convolutional neural networks are trained on the giant ImageNet datasets. Small datasets like CIFAR-10 has rarely taken advantage of the power of depth since deep models are easy to overfit. In this paper, we proposed a modified VGG-16 network and used this model to fit CIFAR-10. By adding stronger regularizer and using Batch Normalization, we achieved 8.45% error rate on CIFAR-10 without severe overfitting. Our results show that the very deep CNN can be used to fit small datasets with simple and proper modifications and don't need to re-design specific small networks. We believe that if a model is strong enough to fit a large dataset, it can also fit a small one.", "title": "" }, { "docid": "9b1874fb7e440ad806aa1da03f9feceb", "text": "Given an existing trained neural network, it is often desirable to learn new capabilities without hindering performance of those already learned. Existing approaches either learn sub-optimal solutions, require joint training, or incur a substantial increment in the number of parameters for each added domain, typically as many as the original network. We propose a method called Deep Adaptation Modules (DAM) that constrains newly learned filters to be linear combinations of existing ones. DAMs precisely preserve performance on the original domain, require a fraction (typically 13%, dependent on network architecture) of the number of parameters compared to standard fine-tuning procedures and converge in less cycles of training to a comparable or better level of performance. When coupled with standard network quantization techniques, we further reduce the parameter cost to around 3% of the original with negligible or no loss in accuracy. The learned architecture can be controlled to switch between various learned representations, enabling a single network to solve a task from multiple different domains. We conduct extensive experiments showing the effectiveness of our method on a range of image classification tasks and explore different aspects of its behavior.", "title": "" }, { "docid": "5092b52243788c4f4e0c53e7556ed9de", "text": "This work attempts to address two fundamental questions about the structure of the convolutional neural networks (CNN): 1) why a nonlinear activation function is essential at the filter output of all intermediate layers? 2) what is the advantage of the two-layer cascade system over the one-layer system? A mathematical model called the “REctified-COrrelations on a Sphere” (RECOS) is proposed to answer these two questions. After the CNN training process, the converged filter weights define a set of anchor vectors in the RECOS model. Anchor vectors represent the frequently occurring patterns (or the spectral components). The necessity of rectification is explained using the RECOS model. Then, the behavior of a two-layer RECOS system is analyzed and compared with its one-layer counterpart. The LeNet-5 and the MNIST dataset are used to illustrate discussion points. Finally, the RECOS model is generalized to a multilayer system with the AlexNet as an example.", "title": "" } ]
[ { "docid": "6b1f584a5665bda68a5215de5aed2fc7", "text": "Most semi-supervised learning models propagate the labels over the Laplacian graph, where the graph should be built beforehand. However, the computational cost of constructing the Laplacian graph matrix is very high. On the other hand, when we do classification, data points lying around the decision boundary (boundary points) are noisy for learning the correct classifier and deteriorate the classification performance. To address these two challenges, in this paper, we propose an adaptive semi-supervised learning model. Different from previous semi-supervised learning approaches, our new model needn't construct the graph Laplacian matrix. Thus, our method avoids the huge computational cost required by previous methods, and achieves a computational complexity linear to the number of data points. Therefore, our method is scalable to large-scale data. Moreover, the proposed model adaptively suppresses the weights of boundary points, such that our new model is robust to the boundary points. An efficient algorithm is derived to alternatively optimize the model parameter and class probability distribution of the unlabeled data, such that the induction of classifier and the transduction of labels are adaptively unified into one framework. Extensive experimental results on six real-world data sets show that the proposed semi-supervised learning model outperforms other related methods in most cases.", "title": "" }, { "docid": "7a9b9633243d84978d9e975744642e18", "text": "Our aim is to provide a pixel-level object instance labeling of a monocular image. We build on recent work [27] that trained a convolutional neural net to predict instance labeling in local image patches, extracted exhaustively in a stride from an image. A simple Markov random field model using several heuristics was then proposed in [27] to derive a globally consistent instance labeling of the image. In this paper, we formulate the global labeling problem with a novel densely connected Markov random field and show how to encode various intuitive potentials in a way that is amenable to efficient mean field inference [13]. Our potentials encode the compatibility between the global labeling and the patch-level predictions, contrast-sensitive smoothness as well as the fact that separate regions form different instances. Our experiments on the challenging KITTI benchmark [8] demonstrate that our method achieves a significant performance boost over the baseline [27].", "title": "" }, { "docid": "e913d5a0d898df3db28b97b27757b889", "text": "Speech-language pathologists tend to rely on the noninstrumental swallowing evaluation in making recommendations about a patient’s diet and management plan. The present study was designed to examine the sensitivity and specificity of the accuracy of using the chin-down posture during the clinical/bedside swallowing assessment. In 15 patients with acute stroke and clinically suspected oropharyngeal dysphagia, the correlation between clinical and videofluoroscopic findings was examined. Results identified that there is a difference in outcome prediction using the chin-down posture during the clinical/bedside assessment of swallowing compared to assessment by videofluoroscopy. 
Results are discussed relative to statistical and clinical perspectives, including site of lesion and factors to be considered in the design of an overall treatment plan for a patient with disordered swallowing.", "title": "" }, { "docid": "523a1bc4ac20bd0bbabd85a8eea66c5b", "text": "Crime is a major social problem in the United States, threatening public safety and disrupting the economy. Understanding patterns in criminal activity allows for the prediction of future high-risk crime “hot spots” and enables police precincts to more effectively allocate officers to prevent or respond to incidents. With the ever-increasing ability of states and organizations to collect and store detailed data tracking crime occurrence, a significant amount of data with spatial and temporal information has been collected. How to use the benefit of massive spatial-temporal information to precisely predict the regional crime rates becomes necessary. The recurrent neural network model has been widely proven effective for detecting the temporal patterns in a time series. In this study, we propose the Spatio-Temporal neural network (STNN) to precisely forecast crime hot spots with embedding spatial information. We evaluate the model using call-for-service data provided by the Portland, Oregon Police Bureau (PPB) for a 5-year period from March 2012 through the end of December 2016. We show that our STNN model outperforms a number of classical machine learning approaches and some alternative neural network architectures.", "title": "" }, { "docid": "aae743c3254352ff973dcb8fbff55299", "text": "Software Defined Radar is the latest trend in radar development. To handle enhanced radar signal processing techniques, advanced radars need to be able of generating various types of waveforms, such as frequency modulated or phase coded, and to perform multiple functions. The adoption of a Software Defined Radio system makes easier all these abilities. In this work, the implementation of a Software Defined Radar system for target tracking using the Universal Software Radio Peripheral platform is discussed. For the first time, an experimental characterization in terms of radar application is performed on the latest Universal Software Radio Peripheral NI2920, demonstrating a strongly improved target resolution with respect to the first generation platform.", "title": "" }, { "docid": "60ad412d0d6557d2a06e9914bbf3c680", "text": "Helpfulness of online reviews is a multi-faceted concept that can be driven by several types of factors. This study was designed to extend existing research on online review helpfulness by looking at not just the quantitative factors (such as word count), but also qualitative aspects of reviewers (including reviewer experience, reviewer impact, reviewer cumulative helpfulness). This integrated view uncovers some insights that were not available before. Our findings suggest that word count has a threshold in its effects on review helpfulness. Beyond this threshold, its effect diminishes significantly or becomes near non-existent. Reviewer experience and their impact were not statistically significant predictors of helpfulness, but past helpfulness records tended to predict future helpfulness ratings. Review framing was also a strong predictor of helpfulness. As a result, characteristics of reviewers and review messages have a varying degree of impact on review helpfulness. Theoretical and practical implications are discussed. 2015 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "d3e2efde80890e469684a41287833eb6", "text": "Recent work has suggested reducing electricity generation cost by cutting the peak to average ratio (PAR) without reducing the total amount of the loads. However, most of these proposals rely on consumer's willingness to act. In this paper, we propose an approach to cut PAR explicitly from the supply side. The resulting cut loads are then distributed among consumers by the means of a multiunit auction which is done by an intelligent agent on behalf of the consumer. This approach is also in line with the future vision of the smart grid to have the demand side matched with the supply side. Experiments suggest that our approach reduces overall system cost and gives benefit to both consumers and the energy provider.", "title": "" }, { "docid": "4a8448ab4c1c9e0a1df5e2d1c1d20417", "text": "We present an empirical framework for testing game strategies in The Settlers of Catan, a complex win-lose game that lacks any analytic solution. This framework provides the means to change different components of an autonomous agent's strategy, and to test them in suitably controlled ways via performance metrics in game simulations and via comparisons of the agent's behaviours with those exhibited in a corpus of humans playing the game. We provide changes to the game strategy that not only improve the agent's strength, but corpus analysis shows that they also bring the agent closer to a model of human players.", "title": "" }, { "docid": "065eb4ca2fbef1a8d0d4029b178a0c98", "text": "Melanoma is the deadliest type of skin cancer with highest mortality rate. However, the annihilation in its early stage implies a high survival rate therefore, it demands early diagnosis. The accustomed diagnosis methods are costly and cumbersome due to the involvement of experienced experts as well as the requirements for the highly equipped environment. The recent advancements in computerized solutions for this diagnosis are highly promising with improved accuracy and efficiency. In this article, a method for the identification and classification of the lesion based on probabilistic distribution and best features selection is proposed. The probabilistic distribution such as normal distribution and uniform distribution are implemented for segmentation of lesion in the dermoscopic images. Then multi-level features are extracted and parallel strategy is performed for fusion. A novel entropy-based method with the combination of Bhattacharyya distance and variance are calculated for the selection of best features. Only selected features are classified using multi-class support vector machine, which is selected as a base classifier. The proposed method is validated on three publicly available datasets such as PH2, ISIC (i.e. ISIC MSK-2 and ISIC UDA), and Combined (ISBI 2016 and ISBI 2017), including multi-resolution RGB images and achieved accuracy of 97.5%, 97.75%, and 93.2%, respectively. The base classifier performs significantly better on proposed features fusion and selection method as compared to other methods in terms of sensitivity, specificity, and accuracy. 
Furthermore, the presented method achieved satisfactory segmentation results on selected datasets.", "title": "" }, { "docid": "31dfedb06716502fcf33871248fd7e9e", "text": "Multi-sensor precipitation datasets including two products from the Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) and estimates from Climate Prediction Center Morphing Technique (CMORPH) product were quantitatively evaluated to study the monsoon variability over Pakistan. Several statistical and graphical techniques are applied to illustrate the nonconformity of the three satellite products from the gauge observations. During the monsoon season (JAS), the three satellite precipitation products captures the intense precipitation well, all showing high correlation for high rain rates (>30 mm/day). The spatial and temporal satellite rainfall error variability shows a significant geo-topography dependent distribution, as all the three products overestimate over mountain ranges in the north and coastal region in the south parts of Indus basin. The TMPA-RT product tends to overestimate light rain rates (approximately 100%) and the bias is low for high rain rates (about ±20%). In general, daily comparisons from 2005 to 2010 show the best agreement between the TMPA-V7 research product and gauge observations with correlation coefficient values ranging from moderate (0.4) to high (0.8) over the spatial domain of Pakistan. The seasonal variation of rainfall frequency has large biases (100–140%) over high latitudes (36N) with complex terrain for daily, monsoon, and pre-monsoon comparisons. Relatively low uncertainties and errors (Bias ±25% and MAE 1–10 mm) were associated with the TMPA-RT product during the monsoon-dominated region (32–35N), thus demonstrating their potential use for developing an operational hydrological application of the satellite-based near real-time products in Pakistan for flood monitoring. 2014 COSPAR. Published by Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "db9ff7ade6b863707bf595e2b866745b", "text": "Pneumatic devices require tight tolerances to keep them leak-free. Specialized companies offer various off-the-shelf devices, while these work well for many applications, there are also situations where custom design and production of pneumatic parts are desired. Cost efficiency, design flexibility, rapid prototyping, and MRI compatibility requirements are reasons why we investigated a method to design and produce different pneumatic devices using a laser cutter from acrylic, acetal, and rubber-like materials. The properties of the developed valves, pneumatic cylinders, and stepper motors were investigated. At 4-bar working pressure, the 4/3-way valves are capable of 5-Hz switching frequency and provide at most 22-L/min airflow. The pneumatic cylinder delivers 48 N of force, the acrylic stepper motor 30 N. The maximum switching frequency over 6-m long transmission lines is 4.5 Hz, using 2-mm tubing. A MRI-compatible robotic biopsy system driven by the pneumatic stepper motors is also demonstrated. We have shown that it is possible to construct pneumatic devices using laser-cutting techniques. This way, plastic MRI-compatible cylinders, stepper motors, and valves can be developed. 
Provided that a laser-cutting machine is available, the described pneumatic devices can be fabricated within hours at relatively low cost, making it suitable for rapid prototyping applications.", "title": "" }, { "docid": "d9366c0456eedecd396a9aa1dbc31e35", "text": "A connectionist model is presented, the TraceLink model, that implements an autonomous \"off-line\" consolidation process. The model consists of three subsystems: (1) a trace system (neocortex), (2) a link system (hippocampus and adjacent regions), and (3) a modulatory system (basal forebrain and other areas). The model is able to account for many of the characteristics of anterograde and retrograde amnesia, including Ribot gradients, transient global amnesia, patterns of shrinkage of retrograde amnesia, and correlations between anterograde and retrograde amnesia or the absence thereof (e.g., in isolated retrograde amnesia). In addition, it produces normal forgetting curves and can exhibit permastore. It also offers an explanation for the advantages of learning under high arousal for long-term retention.", "title": "" }, { "docid": "15ba6a0a5ce45fbecf33bff5d2194250", "text": "Recently, pathological diagnosis plays a crucial role in many areas of medicine, and some researchers have proposed many models and algorithms for improving classification accuracy by extracting excellent feature or modifying the classifier. They have also achieved excellent results on pathological diagnosis using tongue images. However, pixel values can't express intuitive features of tongue images and different classifiers for training samples have different adaptability. Accordingly, this paper presents a robust approach to infer the pathological characteristics by observing tongue images. Our proposed method makes full use of the local information and similarity of tongue images. Firstly, tongue images in RGB color space are converted to Lab. Then, we compute tongue statistics information. In the calculation process, Lab space dictionary is created at first, through it, we compute statistic value for each dictionary value. After that, a method based on Doublets is taken for feature optimization. At last, we use XGBOOST classifier to predict the categories of tongue images. We achieve classification accuracy of 95.39% using statistics feature and the improved classifier, which is helpful for TCM (Traditional Chinese Medicine) diagnosis.", "title": "" }, { "docid": "b5b8ae3b7b307810e1fe39630bc96937", "text": "Up to this point in the text we have considered the use of the logistic regression model in settings where we observe a single dichotomous response for a sample of statistically independent subjects. However, there are settings where the assumption of independence of responses may not hold for a variety of reasons. For example, consider a study of asthma in children in which subjects are interviewed bi-monthly for 1 year. At each interview the date is recorded and the mother is asked whether, during the previous 2 months, her child had an asthma attack severe enough to require medical attention, whether the child had a chest cold, and how many smokers lived in the household. The child’s age and race are recorded at the first interview. The primary outcome is the occurrence of an asthma attack. What differs here is the lack of independence in the observations due to the fact that we have six measurements on each child. In this example, each child represents a cluster of correlated observations of the outcome. 
The measurements of the presence or absence of a chest cold and the number of smokers residing in the household can change from observation to observation and thus are called clusterspecific or time-varying covariates. The date changes in a systematic way and is recorded to model possible seasonal effects. The child’s age and race are constant for the duration of the study and are referred to as cluster-level or time-invariant covariates. The terms clusters, subjects, cluster-specific and cluster-level covariates are general enough to describe multiple measurements on a single subject or single measurements on different but related subjects. An example of the latter setting would be a study of all children in a household. Repeated measurements on the same subject or a subject clustered in some sort of unit (household, hospital, or physician) are the two most likely scenarios leading to correlated data.", "title": "" }, { "docid": "70a970138428aeb06c139abb893a56a9", "text": "Two sequentially rotated, four stage, wideband circularly polarized high gain microstrip patch array antennas at Ku-band are investigated and compared by incorporating both unequal and equal power division based feeding networks. Four stages of sequential rotation is used to create 16×16 patch array which provides wider common bandwidth between the impedance matching (S11 < −10dB), 3dB axial ratio and 3dB gain of 12.3% for the equal power divider based feed array and 13.2% for the unequal power divider based feed array in addition to high polarization purity. The high peak gain of 28.5dBic is obtained for the unequal power division feed based array antennas compared to 26.8dBic peak gain in the case of the equal power division based feed array antennas. The additional comparison between two feed networks based arrays reveals that the unequal power divider based array antennas provide better array characteristics than the equal power divider based feed array antennas.", "title": "" }, { "docid": "ae43fc77cfe3e88f00a519744407eed7", "text": "In this work we use the recent advances in representation learning to propose a neural architecture for the problem of natural language inference. Our approach is aligned to mimic how a human does the natural language inference process given two statements. The model uses variants of Long Short Term Memory (LSTM), attention mechanism and composable neural networks, to carry out the task. Each part of our model can be mapped to a clear functionality humans do for carrying out the overall task of natural language inference. The model is end-to-end differentiable enabling training by stochastic gradient descent. On Stanford Natural Language Inference(SNLI) dataset, the proposed model achieves better accuracy numbers than all published models in literature.", "title": "" }, { "docid": "5de07054546347e150aeabe675234966", "text": "Smart farming is seen to be the future of agriculture as it produces higher quality of crops by making farms more intelligent in sensing its controlling parameters. Analyzing massive amount of data can be done by accessing and connecting various devices with the help of Internet of Things (IoT). However, it is not enough to have an Internet support and self-updating readings from the sensors but also to have a self-sustainable agricultural production with the use of analytics for the data to be useful. This study developed a smart hydroponics system that is used in automating the growing process of the crops using exact inference in Bayesian Network (BN). 
Sensors and actuators are installed in order to monitor and control the physical events such as light intensity, pH, electrical conductivity, water temperature, and relative humidity. The sensor values gathered were used to build the Bayesian Network in order to infer the optimum value for each parameter. A web interface is developed wherein the user can monitor and control the farm remotely via the Internet. Results have shown that the fluctuations in terms of the sensor values were minimized in the automatic control using BN as compared to the manual control. The yielded crop on the automatic control was 66.67% higher than the manual control which implies that the use of exact inference in BN aids in producing high-quality crops. In the future, the system can use higher data analytics and longer data gathering to improve the accuracy of inference.", "title": "" }, { "docid": "c2ac1c1f08e7e4ccba14ea203acba661", "text": "This paper describes an approach to determine a layout for the order picking area in warehouses, such that the average travel distance for the order pickers is minimized. We give analytical formulas by which the average length of an order picking route can be calculated for two different routing policies. The optimal layout can be determined by using such formula as an objective function in a non-linear programming model. The optimal number of aisles in an order picking area appears to depend strongly on the required storage space and the pick list size.", "title": "" }, { "docid": "4768b338044e38949f50c5856bc1a07c", "text": "Radio-frequency identification (RFID) technology provides an effective tool for managing traceability along food supply chains. This is because it allows automatic digital registration of data, and therefore reduces errors and enables the availability of information on demand. A complete traceability system can be developed in the wine production sector by joining this technology with the use of wireless sensor networks for monitoring at the vineyards. A proposal of such a merged solution for a winery in Spain has been designed, deployed in an actual environment, and evaluated. It was shown that the system could provide a competitive advantage to the company by improving visibility of the processes performed and the associated control over product quality. Much emphasis has been placed on minimizing the impact of the new system in the current activities.", "title": "" } ]
scidocsrr
302b33b7f7abe43e01027e16fe586812
Is the Implicit Association Test a Valid and Valuable Measure of Implicit Consumer Social Cognition ?
[ { "docid": "eed70d4d8bfbfa76382bfc32dd12c3db", "text": "Three studies tested basic assumptions derived from a theoretical model based on the dissociation of automatic and controlled processes involved in prejudice. Study 1 supported the model's assumption that highand low-prejudice persons are equally knowledgeable of the cultural stereotype. The model suggests that the stereotype is automatically activated in the presence of a member (or some symbolic equivalent) of the stereotyped group and that low-prejudice responses require controlled inhibition of the automatically activated stereotype. Study 2, which examined the effects of automatic stereotype activation on the evaluation of ambiguous stereotype-relevant behaviors performed by a race-unspecified person, suggested that when subjects' ability to consciously monitor stereotype activation is precluded, both highand low-prejudice subjects produce stereotype-congruent evaluations of ambiguous behaviors. Study 3 examined highand low-prejudice subjects' responses in a consciously directed thought-listing task. Consistent with the model, only low-prejudice subjects inhibited the automatically activated stereotype-congruent thoughts and replaced them with thoughts reflecting equality and negations of the stereotype. The relation between stereotypes and prejudice and implications for prejudice reduction are discussed.", "title": "" }, { "docid": "6d5bb9f895461b3bd7ee82041c3db6aa", "text": "Respondents at an Internet site completed over 600,000 tasks between October 1998 and April 2000 measuring attitudes toward and stereotypes of social groups. Their responses demonstrated, on average, implicit preference for White over Black and young over old and stereotypic associations linking male terms with science and career and female terms with liberal arts and family. The main purpose was to provide a demonstration site at which respondents could experience their implicit attitudes and stereotypes toward social groups. Nevertheless, the data collected are rich in information regarding the operation of attitudes and stereotypes, most notably the strength of implicit attitudes, the association and dissociation between implicit and explicit attitudes, and the effects of group membership on attitudes and stereotypes.", "title": "" } ]
[ { "docid": "5d91c93728632586a63634c941420c64", "text": "A new method for analyzing analog single-event transient (ASET) data has been developed. The approach allows for quantitative error calculations, given device failure thresholds. The method is described and employed in the analysis of an OP-27 op-amp.", "title": "" }, { "docid": "b59f429192a680c1dc07580d21f9e374", "text": "Recently, several competing smart home programming frameworks that support third party app development have emerged. These frameworks provide tangible benefits to users, but can also expose users to significant security risks. This paper presents the first in-depth empirical security analysis of one such emerging smart home programming platform. We analyzed Samsung-owned SmartThings, which has the largest number of apps among currently available smart home platforms, and supports a broad range of devices including motion sensors, fire alarms, and door locks. SmartThings hosts the application runtime on a proprietary, closed-source cloud backend, making scrutiny challenging. We overcame the challenge with a static source code analysis of 499 SmartThings apps (called SmartApps) and 132 device handlers, and carefully crafted test cases that revealed many undocumented features of the platform. Our key findings are twofold. First, although SmartThings implements a privilege separation model, we discovered two intrinsic design flaws that lead to significant overprivilege in SmartApps. Our analysis reveals that over 55% of SmartApps in the store are overprivileged due to the capabilities being too coarse-grained. Moreover, once installed, a SmartApp is granted full access to a device even if it specifies needing only limited access to the device. Second, the SmartThings event subsystem, which devices use to communicate asynchronously with SmartApps via events, does not sufficiently protect events that carry sensitive information such as lock codes. We exploited framework design flaws to construct four proof-of-concept attacks that: (1) secretly planted door lock codes, (2) stole existing door lock codes, (3) disabled vacation mode of the home, and (4) induced a fake fire alarm. We conclude the paper with security lessons for the design of emerging smart home programming frameworks.", "title": "" }, { "docid": "d6adda476cc8bd69c37bd2d00f0dace4", "text": "The conceptualization of a distinct construct known as statistics anxiety has led to the development of numerous rating scales, including the Statistical Anxiety Rating Scale (STARS), designed to assess levels of statistics anxiety. In the current study, the STARS was administered to a sample of 423 undergraduate and graduate students from a midsized, western United States university. The Rasch measurement rating scale model was used to analyze scores from the STARS. Misfitting items were removed from the analysis. In general, items from the six subscales represented a broad range of abilities, with the major exception being a lack of items at the lower extremes of the subscales. Additionally, a differential item functioning (DIF) analysis was performed across sex and student classification. Several items displayed DIF, which indicates subgroups may ascribe different meanings to those items. 
The paper concludes with several recommendations for researchers considering using the STARS.", "title": "" }, { "docid": "0899cfa62ccd036450c079eb3403902a", "text": "Manual editing of a metro map is essential because many aesthetic and readability demands in map generation cannot be achieved by using a fully automatic method. In addition, a metro map should be updated when new metro lines are developed in a city. Considering that manually designing a metro map is time-consuming and requires expert skills, we present an interactive editing system that considers human knowledge and adjusts the layout to make it consistent with user expectations. In other words, only a few stations are controlled and the remaining stations are relocated by our system. Our system supports both curvilinear and octilinear layouts when creating metro maps. It solves an optimization problem, in which even spaces, route straightness, and maximum included angles at junctions are considered to obtain a curvilinear result. The system then rotates each edge to extend either vertically, horizontally, or diagonally while approximating the station positions provided by users to generate an octilinear layout. Experimental results, quantitative and qualitative evaluations, and user studies show that our editing system is easy to use and allows even non-professionals to design a metro map.", "title": "" }, { "docid": "95d6189ba97f15c7cc33028f13f8789f", "text": "This paper presents a new Bayesian nonnegative matrix factorization (NMF) for monaural source separation. Using this approach, the reconstruction error based on NMF is represented by a Poisson distribution, and the NMF parameters, consisting of the basis and weight matrices, are characterized by the exponential priors. A variational Bayesian inference procedure is developed to learn variational parameters and model parameters. The randomness in separation process is faithfully represented so that the system robustness to model variations in heterogeneous environments could be achieved. Importantly, the exponential prior parameters are used to impose sparseness in basis representation. The variational lower bound of log marginal likelihood is adopted as the objective to control model complexity. The dependencies of variational objective on model parameters are fully characterized in the derived closed-form solution. A clustering algorithm is performed to find the groups of bases for unsupervised source separation. The experiments on speech/music separation and singing voice separation show that the proposed Bayesian NMF (BNMF) with adaptive basis representation outperforms the NMF with fixed number of bases and the other BNMFs in terms of signal-to-distortion ratio and the global normalized source to distortion ratio.", "title": "" }, { "docid": "7e2ba771e25a2e6716ce59522ace2835", "text": "Online debate sites are a large source of informal and opinion-sharing dialogue on current socio-political issues. Inferring users’ stance (PRO or CON) towards discussion topics in domains such as politics or news is an important problem, and is of utility to researchers, government organizations, and companies. Predicting users’ stance supports identification of social and political groups, building of better recommender systems, and personalization of users’ information preferences to their ideological beliefs. 
In this paper, we develop a novel collective classification approach to stance classification, which makes use of both structural and linguistic features, and which collectively labels the posts’ stance across a network of the users’ posts. We identify both linguistic features of the posts and features that capture the underlying relationships between posts and users. We use probabilistic soft logic (PSL) (Bach et al., 2013) to model post stance by leveraging both these local linguistic features as well as the observed network structure of the posts to reason over the dataset. We evaluate our approach on 4FORUMS (Walker et al., 2012b), a collection of discussions from an online debate site on issues ranging from gun control to gay marriage. We show that our collective classification model is able to easily incorporate rich, relational information and outperforms a local model which uses only linguistic information.", "title": "" }, { "docid": "6dfb62138ad7e0c23826a2c6b7c2507e", "text": "End-to-end speech recognition systems have been successfully designed for English. Taking into account the distinctive characteristics between Chinese Mandarin and English, it is worthy to do some additional work to transfer these approaches to Chinese. In this paper, we attempt to build a Chinese speech recognition system using end-to-end learning method. The system is based on a combination of deep Long Short-Term Memory Projected (LSTMP) network architecture and the Connectionist Temporal Classification objective function (CTC). The Chinese characters (the number is about 6,000) are used as the output labels directly. To integrate language model information during decoding, the CTC Beam Search method is adopted and optimized to make it more effective and more efficient. We present the first-pass decoding results which are obtained by decoding from scratch using CTC-trained network and language model. Although these results are not as good as the performance of DNN-HMMs hybrid system, they indicate that it is feasible to choose Chinese characters as the output alphabet in the end-toend speech recognition system.", "title": "" }, { "docid": "5bee78694f3428d3882e27000921f501", "text": "We introduce a new approach to perform background subtraction in moving camera scenarios. Unlike previous treatments of the problem, we do not restrict the camera motion or the scene geometry. The proposed approach relies on Bayesian selection of the transformation that best describes the geometric relation between consecutive frames. Based on the selected transformation, we propagate a set of learned background and foreground appearance models using a single or a series of homography transforms. The propagated models are subjected to MAP-MRF optimization framework that combines motion, appearance, spatial, and temporal cues; the optimization process provides the final background/foreground labels. Extensive experimental evaluation with challenging videos shows that the proposed method outperforms the baseline and state-of-the-art methods in most cases.", "title": "" }, { "docid": "764840c288985e0257413c94205d2bf2", "text": "Although deep learning approaches have stood out in recent years due to their state-of-the-art results, they continue to suffer from catastrophic forgetting, a dramatic decrease in overall performance when training with new classes added incrementally. 
This is due to current neural network architectures requiring the entire dataset, consisting of all the samples from the old as well as the new classes, to update the model—a requirement that becomes easily unsustainable as the number of classes grows. We address this issue with our approach to learn deep neural networks incrementally, using new data and only a small exemplar set corresponding to samples from the old classes. This is based on a loss composed of a distillation measure to retain the knowledge acquired from the old classes, and a cross-entropy loss to learn the new classes. Our incremental training is achieved while keeping the entire framework end-to-end, i.e., learning the data representation and the classifier jointly, unlike recent methods with no such guarantees. We evaluate our method extensively on the CIFAR-100 and ImageNet (ILSVRC 2012) image classification datasets, and show state-of-the-art performance.", "title": "" }, { "docid": "2c2daf28c81e7f12113a391835961981", "text": "We address the problem of generating images across two drastically different views, namely ground (street) and aerial (overhead) views. Image synthesis by itself is a very challenging computer vision task and is even more so when generation is conditioned on an image in another view. Due the difference in viewpoints, there is small overlapping field of view and little common content between these two views. Here, we try to preserve the pixel information between the views so that the generated image is a realistic representation of cross view input image. For this, we propose to use homography as a guide to map the images between the views based on the common field of view to preserve the details in the input image. We then use generative adversarial networks to inpaint the missing regions in the transformed image and add realism to it. Our exhaustive evaluation and model comparison demonstrate that utilizing geometry constraints adds fine details to the generated images and can be a better approach for cross view image synthesis than purely pixel based synthesis methods.", "title": "" }, { "docid": "26d20cd47dfd174ecb8606b460c1c040", "text": "In this article, we use an automated bottom-up approach to identify semantic categories in an entire corpus. We conduct an experiment using a word vector model to represent the meaning of words. The word vectors are then clustered, giving a bottom-up representation of semantic categories. Our main finding is that the likelihood of changes in a word’s meaning correlates with its position within its cluster.", "title": "" }, { "docid": "5cb44c68cecb0618be14cd52182dc96e", "text": "Recognition of objects using Deep Neural Networks is an active area of research and many breakthroughs have been made in the last few years. The paper attempts to indicate how far this field has progressed. The paper briefly describes the history of research in Neural Networks and describe several of the recent advances in this field. The performances of recently developed Neural Network Algorithm over benchmark datasets have been tabulated. Finally, some the applications of this field have been provided.", "title": "" }, { "docid": "ff76b52f7859aaffa58307018edb8323", "text": "Malevolent Trojan circuits inserted by layout modifications in an IC at untrustworthy fabrication facilities are difficult to detect by traditional post-manufacturing testing. 
In this paper, we develop a novel low-overhead design methodology that facilitates the detection of inserted Trojan hardware in an IC through logic testing. As a byproduct, it also increases the security of the design by design obfuscation. Application of the proposed design methodology to an 8-bit RISC processor and a JPEG encoder resulted in improvement in Trojan detection probability significantly. It also obfuscated the design with verification mismatch for 90% of the verification points, while incurring moderate area, power and delay overheads.", "title": "" }, { "docid": "486d31b962600141ba75dfde718f5b3d", "text": "The design, fabrication, and measurement of a coax to double-ridged waveguide launcher and horn antenna is presented. The novel launcher design employs two symmetric field probes across the ridge gap to minimize spreading inductance in the transition, and achieves better than 15 dB return loss over a 10:1 bandwidth. The aperture-matched horn uses a half-cosine transition into a linear taper for the outer waveguide dimensions and ridge width, and a power-law scaled gap to realize monotonically varying cutoff frequencies, thus avoiding the appearance of trapped mode resonances. It achieves a nearly constant beamwidth in both E- and H-planes for an overall directivity of about 16.5 dB from 10-100 GHz.", "title": "" }, { "docid": "970a76190e980afe51928dcaa6d594c8", "text": "Despite sequences being core to NLP, scant work has considered how to handle noisy sequence labels from multiple annotators for the same text. Given such annotations, we consider two complementary tasks: (1) aggregating sequential crowd labels to infer a best single set of consensus annotations; and (2) using crowd annotations as training data for a model that can predict sequences in unannotated text. For aggregation, we propose a novel Hidden Markov Model variant. To predict sequences in unannotated text, we propose a neural approach using Long Short Term Memory. We evaluate a suite of methods across two different applications and text genres: Named-Entity Recognition in news articles and Information Extraction from biomedical abstracts. Results show improvement over strong baselines. Our source code and data are available online.", "title": "" }, { "docid": "ad1582fb37440ef7182af4925427f5ca", "text": "The advent of new information technology has radically changed the end-user computing environment over the past decade. To enhance their management decision-making capability, many organizations have made significant investments in business intelligence (BI) systems. The realization of business benefits from BI investments depends on supporting effective use of BI systems and satisfying their end user requirements. Even though a lot of attention has been paid to the decision-making benefits of BI systems in practice, there is still a limited amount of empirical research that explores the nature of enduser satisfaction with BI systems. End-user satisfaction and system usage have been recognized by many researchers as critical determinants of the success of information systems (IS). As an increasing number of companies have adopted BI systems, there is a need to understand their impact on an individual end-user’s performance. In recent years, researchers have considered assessing individual performance effects from IS use as a key area of concern. 
Therefore, this study aims to empirically test a framework identifying the relationships between end-user computing satisfaction (EUCS), system usage, and individual performance. Data gathered from 330 end users of BI systems in the Taiwanese electronics industry were used to test the relationships proposed in the framework using the structural equation modeling approach. The results provide strong support for our model. Our results indicate that higher levels of EUCS can lead to increased BI system usage and improved individual performance, and that higher levels of BI system usage will lead to higher levels of individual performance. In addition, this study’s findings, consistent with DeLone and McLean’s IS success model, confirm that there exists a significant positive relationship between EUCS and system usage. Theoretical and practical implications of the findings are discussed. © 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "f3c5a1cef29f5fa834433ce859b15694", "text": "This paper describes the design, construction, and testing of a 750-V 100-kW 20-kHz bidirectional isolated dual-active-bridge dc-dc converter using four 1.2-kV 400-A SiC-MOSFET/SBD dual modules. The maximum conversion efficiency from the dc-input to the dc-output terminals is accurately measured to be as high as 98.7% at 42-kW operation. The overall power loss at the rated-power (100 kW) operation, excluding the gate-drive and control circuit losses, is divided into the conduction and switching losses produced by the SiC modules, the iron and copper losses due to magnetic devices, and the other unknown loss. The power-loss breakdown concludes that the sum of the conduction and switching losses is about 60% of the overall power loss and that the conduction loss is nearly equal to the switching loss at the 100-kW and 20-kHz operation.", "title": "" }, { "docid": "c1389acb62cca5cb3cfdec34bd647835", "text": "A Chinese resume information extraction system (CRIES) based on semi-structured text is designed and implemented to obtain formatted information by extracting text content of every field from resumes in different formats and update information automatically based on the web. Firstly, ideas to classify resumes, some constraints obtained by analyzing resume features and overall extraction strategy is introduced. Then two extraction algorithms for parsing resumes in different text formats are given. Consequently, the system was implemented by java programming. Finally, use the system to resolve the resume samples, and the statistical analysis and system optimization analysis are carried out according to the accuracy rate and recall rate of the extracted results.", "title": "" }, { "docid": "53b43126d066f5e91d7514f5da754ef3", "text": "This paper describes a computationally inexpensive, yet high performance trajectory generation algorithm for omnidirectional vehicles. It is shown that the associated nonlinear control problem can be made tractable by restricting the set of admissible control functions. The resulting problem is linear with coupled control efforts and a near-optimal control strategy is shown to be piecewise constant (bang-bang type). A very favorable trade-off between optimality and computational efficiency is achieved. The proposed algorithm is based on a small number of evaluations of simple closed-form expressions and is thus extremely efficient. 
The low computational cost makes this method ideal for path planning in dynamic environments.", "title": "" }, { "docid": "72bc688726c5fc26b2dd7e63d3b28ac0", "text": "In Convolutional Neural Network (CNN)-based object detection methods, region proposal becomes a bottleneck when objects exhibit significant scale variation, occlusion or truncation. In addition, these methods mainly focus on 2D object detection and cannot estimate detailed properties of objects. In this paper, we propose subcategory-aware CNNs for object detection. We introduce a novel region proposal network that uses subcategory information to guide the proposal generating process, and a new detection network for joint detection and subcategory classification. By using subcategories related to object pose, we achieve state of-the-art performance on both detection and pose estimation on commonly used benchmarks.", "title": "" } ]
scidocsrr
240a10a3748a237c47aff9013c7e3949
Examining Spectral Reflectance Saturation in Landsat Imagery and Corresponding Solutions to Improve Forest Aboveground Biomass Estimation
[ { "docid": "59b10765f9125e9c38858af901a39cc7", "text": "--------__------------------------------------__---------------", "title": "" }, { "docid": "9a4ca8c02ffb45013115124011e7417e", "text": "Now, we come to offer you the right catalogues of book to open. multisensor data fusion a review of the state of the art is one of the literary work in this world in suitable to be reading material. That's not only this book gives reference, but also it will show you the amazing benefits of reading a book. Developing your countless minds is needed; moreover you are kind of people with great curiosity. So, the book is very appropriate for you.", "title": "" } ]
[ { "docid": "edeefde21bbe1ace9a34a0ebe7bc6864", "text": "Social media platforms provide active communication channels during mass convergence and emergency events such as disasters caused by natural hazards. As a result, first responders, decision makers, and the public can use this information to gain insight into the situation as it unfolds. In particular, many social media messages communicated during emergencies convey timely, actionable information. Processing social media messages to obtain such information, however, involves solving multiple challenges including: parsing brief and informal messages, handling information overload, and prioritizing different types of information found in messages. These challenges can be mapped to classical information processing operations such as filtering, classifying, ranking, aggregating, extracting, and summarizing. We survey the state of the art regarding computational methods to process social media messages and highlight both their contributions and shortcomings. In addition, we examine their particularities, and methodically examine a series of key subproblems ranging from the detection of events to the creation of actionable and useful summaries. Research thus far has, to a large extent, produced methods to extract situational awareness information from social media. In this survey, we cover these various approaches, and highlight their benefits and shortcomings. We conclude with research challenges that go beyond situational awareness, and begin to look at supporting decision making and coordinating emergency-response actions.", "title": "" }, { "docid": "74287743f75368623da74e716ae8e263", "text": "Organizations increasingly use social media and especially social networking sites (SNS) to support their marketing agenda, enhance collaboration, and develop new capabilities. However, the success of SNS initiatives is largely dependent on sustainable user participation. In this study, we argue that the continuance intentions of users may be gendersensitive. To theorize and investigate gender differences in the determinants of continuance intentions, this study draws on the expectation-confirmation model, the uses and gratification theory, as well as the self-construal theory and its extensions. Our survey of 488 users shows that while both men and women are motivated by the ability to selfenhance, there are some gender differences. Specifically, while women are mainly driven by relational uses, such as maintaining close ties and getting access to social information on close and distant networks, men base their continuance intentions on their ability to gain information of a general nature. Our research makes several contributions to the discourse in strategic information systems literature concerning the use of social media by individuals and organizations. Theoretically, it expands the understanding of the phenomenon of continuance intentions and specifically the role of the gender differences in its determinants. On a practical level, it delivers insights for SNS providers and marketers into how satisfaction and continuance intentions of male and female SNS users can be differentially promoted. Furthermore, as organizations increasingly rely on corporate social networks to foster collaboration and innovation, our insights deliver initial recommendations on how organizational social media initiatives can be supported with regard to gender-based differences. 2017 Elsevier B.V. 
All rights reserved.", "title": "" }, { "docid": "b6ec4629a39097178895762a35e0c7eb", "text": "In this paper, we dedicate to the topic of aspect ranking, which aims to automatically identify important product aspects from online consumer reviews. The important aspects are identified according to two observations: (a) the important aspects of a product are usually commented by a large number of consumers; and (b) consumers’ opinions on the important aspects greatly influence their overall opinions on the product. In particular, given consumer reviews of a product, we first identify the product aspects by a shallow dependency parser and determine consumers’ opinions on these aspects via a sentiment classifier. We then develop an aspect ranking algorithm to identify the important aspects by simultaneously considering the aspect frequency and the influence of consumers’ opinions given to each aspect on their overall opinions. The experimental results on 11 popular products in four domains demonstrate the effectiveness of our approach. We further apply the aspect ranking results to the application of documentlevel sentiment classification, and improve the performance significantly.", "title": "" }, { "docid": "5b43cce2027f1e5afbf7985ca2d4af1a", "text": "With Internet delivery of video content surging to an unprecedented level, video has become one of the primary sources for online advertising. In this paper, we present VideoSense as a novel contextual in-video advertising system, which automatically associates the relevant video ads and seamlessly inserts the ads at the appropriate positions within each individual video. Unlike most video sites which treat video advertising as general text advertising by displaying video ads at the beginning or the end of a video or around a video, VideoSense aims to embed more contextually relevant ads at less intrusive positions within the video stream. Specifically, given a Web page containing an online video, VideoSense is able to extract the surrounding text related to this video, detect a set of candidate ad insertion positions based on video content discontinuity and attractiveness, select a list of relevant candidate ads according to multimodal relevance. To support contextual advertising, we formulate this task as a nonlinear 0-1 integer programming problem by maximizing contextual relevance while minimizing content intrusiveness at the same time. The experiments proved the effectiveness of VideoSense for online video service.", "title": "" }, { "docid": "b5de3747c17f6913539b62377f9af5c4", "text": "In this paper, we propose a novel embedding model, named ConvKB, for knowledge base completion. Our model ConvKB advances state-of-the-art models by employing a convolutional neural network, so that it can capture global relationships and transitional characteristics between entities and relations in knowledge bases. In ConvKB, each triple (head entity, relation, tail entity) is represented as a 3-column matrix where each column vector represents a triple element. This 3-column matrix is then fed to a convolution layer where multiple filters are operated on the matrix to generate different feature maps. These feature maps are then concatenated into a single feature vector representing the input triple. The feature vector is multiplied with a weight vector via a dot product to return a score. This score is then used to predict whether the triple is valid or not. 
Experiments show that ConvKB obtains better link prediction and triple classification results than previous state-of-the-art models on benchmark datasets WN18RR, FB15k-237, WN11 and FB13. We further apply our ConvKB to a search personalization problem which aims to tailor the search results to each specific user based on the user’s personal interests and preferences. In particular, we model the potential relationship between the submitted query, the user and the search result (i.e., document) as a triple (query, user, document) on which the ConvKB is able to work. Experimental results on query logs from a commercial web search engine show that ConvKB achieves better performances than the standard ranker as well as strong search personalization baselines.", "title": "" }, { "docid": "32a2bfb7a26631f435f9cb5d825d8da2", "text": "An important aspect for the task of grammatical error correction (GEC) that has not yet been adequately explored is adaptation based on the native language (L1) of writers, despite the marked influences of L1 on second language (L2) writing. In this paper, we adapt a neural network joint model (NNJM) using L1-specific learner text and integrate it into a statistical machine translation (SMT) based GEC system. Specifically, we train an NNJM on general learner text (not L1-specific) and subsequently train on L1-specific data using a Kullback-Leibler divergence regularized objective function in order to preserve generalization of the model. We incorporate this adapted NNJM as a feature in an SMT-based English GEC system and show that adaptation achieves significant F0.5 score gains on English texts written by L1 Chinese, Russian, and Spanish writers.", "title": "" }, { "docid": "15ada8f138d89c52737cfb99d73219f0", "text": "A dual-band circularly polarized stacked annular-ring patch antenna is presented in this letter. This antenna operates at both the GPS L1 frequency of 1575 MHz and L2 frequency of 1227 MHz, whose frequency ratio is about 1.28. The proposed antenna is formed by two concentric annular-ring patches that are placed on opposite sides of a substrate. Wide axial-ratio bandwidths (larger than 2%), determined by 3-dB axial ratio, are achieved at both bands. The measured gains at 1227 and 1575 MHz are about 6 and 7 dBi, respectively, with the loss of substrate taken into consideration. Both simulated and measured results are presented. The method of varying frequency ratio is also discussed.", "title": "" }, { "docid": "8e794530be184686a49e5ced6ac6521d", "text": "A key feature of the immune system is its ability to induce protective immunity against pathogens while maintaining tolerance towards self and innocuous environmental antigens. Recent evidence suggests that by guiding cells to and within lymphoid organs, CC-chemokine receptor 7 (CCR7) essentially contributes to both immunity and tolerance. This receptor is involved in organizing thymic architecture and function, lymph-node homing of naive and regulatory T cells via high endothelial venules, as well as steady state and inflammation-induced lymph-node-bound migration of dendritic cells via afferent lymphatics. 
Here, we focus on the cellular and molecular mechanisms that enable CCR7 and its two ligands, CCL19 and CCL21, to balance immunity and tolerance.", "title": "" }, { "docid": "eb6823bcc7e01dbdc9a21388bde0ce4f", "text": "This paper extends previous research on two approaches to human-centred automation: (1) intermediate levels of automation (LOAs) for maintaining operator involvement in complex systems control and facilitating situation awareness; and (2) adaptive automation (AA) for managing operator workload through dynamic control allocations between the human and machine over time. Some empirical research has been conducted to examine LOA and AA independently, with the objective of detailing a theory of human-centred automation. Unfortunately, no previous work has studied the interaction of these two approaches, nor has any research attempted to systematically determine which LOAs should be used in adaptive systems and how certain types of dynamic function allocations should be scheduled over time. The present research briefly reviews the theory of humancentred automation and LOA and AA approaches. Building on this background, an initial study was presented that attempts to address the conjuncture of these two approaches to human-centred automation. An experiment was conducted in which a dual-task scenario was used to assess the performance, SA and workload effects of low, intermediate and high LOAs, which were dynamically allocated (as part of an AA strategy) during manual system control for various cycle times comprising 20, 40 and 60% of task time. The LOA and automation allocation cycle time (AACT) combinations were compared to completely manual control and fully automated control of a dynamic control task performed in conjunction with an embedded secondary monitoring task. Results revealed LOA to be the driving factor in determining primary task performance and SA. Low-level automation produced superior performance and intermediate LOAs facilitated higher SA, but this was not associated with improved performance or reduced workload. The AACT was the driving factor in perceptions of primary task workload and secondary task performance. When a greater percentage of primary task time was automated, operator perceptual resources were freed-up and monitoring performance on the secondary task improved. Longer automation cycle times than have previously been studied may have benefits for overall human–machine system performance. The combined effect of LOA and AA on all measures did not appear to be ‘additive’ in nature. That is, the LOA producing the best performance (low level automation) did not do so at the AACT, which produced superior performance (maximum cycle time). In general, the results are supportive of intermediate LOAs and AA as approaches to human-centred automation, but each appears to provide different benefits to human–machine system performance. This work provides additional information for a developing theory of human-centred automation. Theor. Issues in Ergon. Sci., 2003, 1–40, preview article", "title": "" }, { "docid": "2fe1ed0f57e073372e4145121e87d7c6", "text": "Information visualization (InfoVis), the study of transforming data, information, and knowledge into interactive visual representations, is very important to users because it provides mental models of information. The boom in big data analytics has triggered broad use of InfoVis in a variety of domains, ranging from finance to sports to politics. 
In this paper, we present a comprehensive survey and key insights into this fast-rising area. The research on InfoVis is organized into a taxonomy that contains four main categories, namely empirical methodologies, user interactions, visualization frameworks, and applications, which are each described in terms of their major goals, fundamental principles, recent trends, and state-of-the-art approaches. At the conclusion of this survey, we identify existing technical challenges and propose directions for future research.", "title": "" }, { "docid": "a28c252f9f3e96869c72e6e41146b5bc", "text": "Technically, a feature represents a distinguishing property, a recognizable measurement, and a functional component obtained from a section of a pattern. Extracted features are meant to minimize the loss of important information embedded in the signal. In addition, they also simplify the amount of resources needed to describe a huge set of data accurately. This is necessary to minimize the complexity of implementation, to reduce the cost of information processing, and to cancel the potential need to compress the information. More recently, a variety of methods have been widely used to extract the features from EEG signals, among these methods are time frequency distributions (TFD), fast fourier transform (FFT), eigenvector methods (EM), wavelet transform (WT), and auto regressive method (ARM), and so on. In general, the analysis of EEG signal has been the subject of several studies, because of its ability to yield an objective mode of recording brain stimulation which is widely used in brain-computer interface researches with application in medical diagnosis and rehabilitation engineering. The purposes of this paper, therefore, shall be discussing some conventional methods of EEG feature extraction methods, comparing their performances for specific task, and finally, recommending the most suitable method for feature extraction based on performance.", "title": "" }, { "docid": "040329beb0f4688ced46d87a51dac169", "text": "We present a characterization methodology for fast direct measurement of the charge accumulated on Floating Gate (FG) transistors of Flash EEPROM cells. Using a Scanning Electron Microscope (SEM) in Passive Voltage Contrast (PVC) mode we were able to distinguish between '0' and '1' bit values stored in each memory cell. Moreover, it was possible to characterize the remaining charge on the FG; thus making this technique valuable for Failure Analysis applications for data retention measurements in Flash EEPROM. The technique is at least two orders of magnitude faster than state-of-the-art Scanning Probe Microscopy (SPM) methods. Only a relatively simple backside sample preparation is necessary for accessing the FG of memory transistors. The technique presented was successfully implemented on a 0.35 μm technology node microcontroller and a 0.21 μm smart card integrated circuit. We also show the ease of such technique to cover all cells of a memory (using intrinsic features of SEM) and to automate memory cells characterization using standard image processing technique.", "title": "" }, { "docid": "067e24b29aae26865c858d6b8e60b135", "text": "In this paper, we present an optimization path of stress memorization technique (SMT) for 45nm node and below using a nitride capping layer. We demonstrate that the understanding of coupling between nitride properties, dopant activation and poly-silicon gate mechanical stress allows enhancing nMOS performance by 7% without pMOS degradation. 
In contrast to previously reported works on SMT (Chen et al., 2004) - (Singh et al., 2005), a low-cost process compatible with consumer electronics requirements has been successfully developed", "title": "" }, { "docid": "715fda02bad1633be9097cc0a0e68c8d", "text": "Data uncertainty is common in real-world applications due to various causes, including imprecise measurement, network latency, outdated sources and sampling errors. These kinds of uncertainty have to be handled cautiously, or else the mining results could be unreliable or even wrong. In this paper, we propose a new rule-based classification and prediction algorithm called uRule for classifying uncertain data. This algorithm introduces new measures for generating, pruning and optimizing rules. These new measures are computed considering uncertain data interval and probability distribution function. Based on the new measures, the optimal splitting attribute and splitting value can be identified and used for classification and prediction. The proposed uRule algorithm can process uncertainty in both numerical and categorical data. Our experimental results show that uRule has excellent performance even when data is highly uncertain.", "title": "" }, { "docid": "26b13a3c03014fc910ed973c264e4c9d", "text": "Deep convolutional neural networks (CNNs) have shown great potential for numerous real-world machine learning applications, but performing inference in large CNNs in real-time remains a challenge. We have previously demonstrated that traditional CNNs can be converted into deep spiking neural networks (SNNs), which exhibit similar accuracy while reducing both latency and computational load as a consequence of their data-driven, event-based style of computing. Here we provide a novel theory that explains why this conversion is successful, and derive from it several new tools to convert a larger and more powerful class of deep networks into SNNs. We identify the main sources of approximation errors in previous conversion methods, and propose simple mechanisms to fix these issues. Furthermore, we develop spiking implementations of common CNN operations such as max-pooling, softmax, and batch-normalization, which allow almost loss-less conversion of arbitrary CNN architectures into the spiking domain. Empirical evaluation of different network architectures on the MNIST and CIFAR10 benchmarks leads to the best SNN results reported to date.", "title": "" }, { "docid": "82119f5c85eaa2c4a76b2c7b0561375c", "text": "A system is described that integrates vision and tactile sensing in a robotics environment to perform object recognition tasks. It uses multiple sensor systems (active touch and passive stereo vision) to compute three dimensional primitives that can be matched against a model data base of complex curved surface objects containing holes and cavities. The low level sensing elements provide local surface and feature matches which arc constrained by relational criteria embedded in the models. Once a model has been invoked, a verification procedure establishes confidence measures for a correct recognition. The three dimen* sional nature of the sensed data makes the matching process more robust as does the system's ability to sense visually occluded areas with touch. The model is hierarchic in nature and allows matching at different levels to provide support or inhibition for recognition. 1. 
INTRODUCTION Robotic systems are being designed and built to perform complex tasks such as object recognition, grasping, parts manipulation, inspection and measurement. In the case of object recognition, many systems have been designed that have tried to exploit a single sensing modality [1,2,3,4,5,6]. Single sensor systems are necessarily limited in their power. The approach described here to overcome the inherent limitations of a single sensing modality is to integrate multiple sensing modalities (passive stereo vision and active tactile sensing) for object recognition. The advantages of multiple sensory systems in a task like this are many. Multiple sensor systems supply redundant and complementary kinds of data that can be integrated to create a more coherent understanding of a scene. The inclusion of multiple sensing systems is becoming more apparent as research continues in distributed systems and parallel approaches to problem solving. The redundancy and support for a hypothesis that comes from more than one sensing subsystem is important in establishing confidence measures during a recognition process, just as the disagreement between two sensors will inhibit a hypothesis and point to possible sensing or reasoning error. The complementary nature of these sensors allows more powerful matching primitives to be used. The primitives that are the outcome of sensing with these complementary sensors are throe dimensional in nature, providing stronger invariants and a more natural way to recognize objects which are also three dimensional in nature [7].", "title": "" }, { "docid": "ed22fe0d13d4450005abe653f41df2c0", "text": "Polycystic ovary syndrome (PCOS) is a complex endocrine disorder affecting 5-10 % of women of reproductive age. It generally manifests with oligo/anovulatory cycles, hirsutism and polycystic ovaries, together with a considerable prevalence of insulin resistance. Although the aetiology of the syndrome is not completely understood yet, PCOS is considered a multifactorial disorder with various genetic, endocrine and environmental abnormalities. Moreover, PCOS patients have a higher risk of metabolic and cardiovascular diseases and their related morbidity, if compared to the general population.", "title": "" }, { "docid": "d07281bab772b6ba613f9526d418661e", "text": "GSM (Global Services of Mobile Communications) 1800 licenses were granted in the beginning of the 2000’s in Turkey. Especially in the installation phase of the wireless telecom services, fraud usage can be an important source of revenue loss. Fraud can be defined as a dishonest or illegal use of services, with the intention to avoid service charges. Fraud detection is the name of the activities to identify unauthorized usage and prevent losses for the mobile network operators’. Mobile phone user’s intentions may be predicted by the call detail records (CDRs) by using data mining (DM) techniques. This study compares various data mining techniques to obtain the best practical solution for the telecom fraud detection and offers the Adaptive Neuro Fuzzy Inference (ANFIS) method as a means to efficient fraud detection. In the test run, shown that ANFIS has provided sensitivity of 97% and specificity of 99%, where it classified 98.33% of the instances correctly.", "title": "" }, { "docid": "0e2a2a32923d8e9fa5779e80e6090dba", "text": "The most powerful and common approach to countering the threats to network / information security is encryption [1]. 
Even though encryption is very powerful, cryptanalysts are intelligent and work day and night to break ciphers. To make a cipher stronger, it is recommended to use stronger and more complicated encryption algorithms, keys with more bits (longer keys), a larger block size as input, authentication together with confidentiality, and secure transmission of keys. Following all of these principles will certainly produce a stronger cipher, but it raises the following problems: encryption and decryption become time consuming, it becomes difficult for the cryptanalyst to analyze the problem, and the scheme still suffers from the problems of the existing system. The main objective of this paper is to solve these problems and to advance network security with a new substitution technique [3], a ‘color substitution technique’ named the “Play color cipher”.", "title": "" } ]
scidocsrr
3e7af8497d080d88c7873de1ca8a4027
Natural Language Semantics Using Probabilistic Logic
[ { "docid": "41a0b9797c556368f84e2a05b80645f3", "text": "This paper describes and evaluates log-linear parsing models for Combinatory Categorial Grammar (CCG). A parallel implementation of the L-BFGS optimisation algorithm is described, which runs on a Beowulf cluster allowing the complete Penn Treebank to be used for estimation. We also develop a new efficient parsing algorithm for CCG which maximises expected recall of dependencies. We compare models which use all CCG derivations, including nonstandard derivations, with normal-form models. The performances of the two models are comparable and the results are competitive with existing wide-coverage CCG parsers.", "title": "" }, { "docid": "70fd543752f17237386b3f8e99954230", "text": "Using Markov logic to integrate logical and distributional information in natural-language semantics results in complex inference problems involving long, complicated formulae. Current inference methods for Markov logic are ineffective on such problems. To address this problem, we propose a new inference algorithm based on SampleSearch that computes probabilities of complete formulae rather than ground atoms. We also introduce a modified closed-world assumption that significantly reduces the size of the ground network, thereby making inference feasible. Our approach is evaluated on the recognizing textual entailment task, and experiments demonstrate its dramatic impact on the efficiency", "title": "" } ]
[ { "docid": "11f2adab1fb7a93e0c9009a702389af1", "text": "OBJECTIVE\nThe authors present clinical outcome data and satisfaction of patients who underwent minimally invasive vertebral body corpectomy and cage placement via a mini-open, extreme lateral, transpsoas approach and posterior short-segment instrumentation for lumbar burst fractures.\n\n\nMETHODS\nPatients with unstable lumbar burst fractures who underwent corpectomy and anterior column reconstruction via a mini-open, extreme lateral, transpsoas approach with short-segment posterior fixation were reviewed retrospectively. Demographic information, operative parameters, perioperative radiographic measurements, and complications were analyzed. Patient-reported outcome instruments (Oswestry Disability Index [ODI], 12-Item Short Form Health Survey [SF-12]) and an anterior scar-specific patient satisfaction questionnaire were recorded at the latest follow-up.\n\n\nRESULTS\nTwelve patients (7 men, 5 women, average age 42 years, range 22-68 years) met the inclusion criteria. Lumbar corpectomies with anterior column support were performed (L-1, n = 8; L-2, n = 2; L-3, n = 2) and supplemented with short-segment posterior instrumentation (4 open, 8 percutaneous). Four patients had preoperative neurological deficits, all of which improved after surgery. No new neurological complications were noted. The anterior incision on average was 6.4 cm (range 5-8 cm) in length, caused mild pain and disability, and was aesthetically acceptable to the large majority of patients. Three patients required chest tube placement for pleural violation, and 1 patient required reoperation for cage subsidence/hardware failure. Average clinical follow-up was 38 months (range 16-68 months), and average radiographic follow-up was 37 months (range 6-68 months). Preoperative lumbar lordosis and focal lordosis were significantly improved/maintained after surgery. Patients were satisfied with their outcomes, had minimal/moderate disability (average ODI score 20, range 0-52), and had good physical (SF-12 physical component score 41.7% ± 10.4%) and mental health outcomes (SF-12 mental component score 50.2% ± 11.6%) after surgery.\n\n\nCONCLUSIONS\nAnterior corpectomy and cage placement via a mini-open, extreme lateral, transpsoas approach supplemented by short-segment posterior instrumentation is a safe, effective alternative to conventional approaches in the treatment of single-level unstable burst fractures and is associated with excellent functional outcomes and patient satisfaction.", "title": "" }, { "docid": "f5b372607a89ea6595683276e48d6dce", "text": "In this paper, we present YAMAMA, a multi-dialect Arabic morphological analyzer and disambiguator. Our system is almost five times faster than the state-of-the-art MADAMIRA system with a slightly lower quality. In addition to speed, YAMAMA outputs a rich representation which allows for a wider spectrum of use. In this regard, YAMAMA transcends other systems, such as FARASA, which is faster but provides specific outputs catering to specific applications.", "title": "" }, { "docid": "9228218e663951e54f31d697997c80f9", "text": "In this paper, we describe a simple set of \"recipes\" for the analysis of high spatial density EEG. We focus on a linear integration of multiple channels for extracting individual components without making any spatial or anatomical modeling assumptions, instead requiring particular statistical properties such as maximum difference, maximum power, or statistical independence. 
We demonstrate how corresponding algorithms, for example, linear discriminant analysis, principal component analysis and independent component analysis, can be used to remove eye-motion artifacts, extract strong evoked responses, and decompose temporally overlapping components. The general approach is shown to be consistent with the underlying physics of EEG, which specifies a linear mixing model of the underlying neural and non-neural current sources.", "title": "" }, { "docid": "de682d74b30e699d7185765f8b235e00", "text": "A key goal of research in conversational systems is to train an interactive agent to help a user with a task. Human conversation, however, is notoriously incomplete, ambiguous, and full of extraneous detail. To operate effectively, the agent must not only understand what was explicitly conveyed but also be able to reason in the presence of missing or unclear information. When unable to resolve ambiguities on its own, the agent must be able to ask the user for the necessary clarifications and incorporate the response in its reasoning. Motivated by this problem we introduce QRAQ (Query, Reason, and Answer Questions), a new synthetic domain, in which a User gives an Agent a short story and asks a challenge question. These problems are designed to test the reasoning and interaction capabilities of a learningbased Agent in a setting that requires multiple conversational turns. A good Agent should ask only non-deducible, relevant questions until it has enough information to correctly answer the User’s question. We use standard and improved reinforcement learning based memory-network architectures to solve QRAQ problems in the difficult setting where the reward signal only tells the Agent if its final answer to the challenge question is correct or not. To provide an upper-bound to the RL results we also train the same architectures using supervised information that tells the Agent during training which variables to query and the answer to the challenge question. We evaluate our architectures on four QRAQ dataset types, and scale the complexity for each along multiple dimensions.", "title": "" }, { "docid": "753dcf47f0d1d63d2b93a8f4b5d78a33", "text": "BACKGROUND\nTrichostasis spinulosa (TS) is a common, underdiagnosed cosmetic skin condition.\n\n\nOBJECTIVES\nThe main objectives of this study were to determine the occurrence of TS relative to age and gender, to analyze its cutaneous distribution, and to investigate any possible familial basis for this condition, its impact on patients, and the types and efficacy of previous treatments.\n\n\nMETHODS\nAll patients presenting to the outpatient dermatology clinic at the study institution and their relatives were examined for the presence of TS and were questioned about family history and previous treatment. Photographs and biopsies of suspected cases of TS were obtained.\n\n\nRESULTS\nOf 2400 patients seen between August and December 2013, 286 patients were diagnosed with TS (135 males, 151 females; prevalence: 11.9%). Women presented more frequently than men with complaints of TS (6.3 vs. 4.2%), and more women had received prior treatment for TS (10.5 vs. 2.8%). The most commonly affected sites were the face (100%), interscapular area (10.5%), and arms (3.1%). Lesions involved the nasal alae in 96.2%, the nasal tip in 90.9%, the chin in 55.9%, and the cheeks in 52.4% of patients. Only 15.7% of patients had forehead lesions, and only 4.5% had perioral lesions. 
Among the 38 previously treated patients, 65.8% reported temporary improvement.\n\n\nCONCLUSIONS\nTrichostasis spinulosa is a common condition that predominantly affects the face in patients of all ages. Additional studies employing larger cohorts from multiple centers will be required to determine the prevalence of TS in the general population.", "title": "" }, { "docid": "65b34f78e3b8d54ad75d32cdef487dac", "text": "Recognizing polarity requires a list of polar words and phrases. For the purpose of building such lexicon automatically, a lot of studies have investigated (semi-) unsupervised method of learning polarity of words and phrases. In this paper, we explore to use structural clues that can extract polar sentences from Japanese HTML documents, and build lexicon from the extracted polar sentences. The key idea is to develop the structural clues so that it achieves extremely high precision at the cost of recall. In order to compensate for the low recall, we used massive collection of HTML documents. Thus, we could prepare enough polar sentence corpus.", "title": "" }, { "docid": "8cd8fbbc3e20d29989deeb2fd2362c10", "text": "Modern programming languages and software engineering principles are causing increasing problems for compiler systems. Traditional approaches, which use a simple compile-link-execute model, are unable to provide adequate application performance under the demands of the new conditions. Traditional approaches to interprocedural and profile-driven compilation can provide the application performance needed, but require infeasible amounts of compilation time to build the application. This thesis presents LLVM, a design and implementation of a compiler infrastructure which supports a unique multi-stage optimization system. This system is designed to support extensive interprocedural and profile-driven optimizations, while being efficient enough for use in commercial compiler systems. The LLVM virtual instruction set is the glue that holds the system together. It is a low-level representation, but with high-level type information. This provides the benefits of a low-level representation (compact representation, wide variety of available transformations, etc.) as well as providing high-level information to support aggressive interprocedural optimizations at link-and post-link time. In particular, this system is designed to support optimization in the field, both at run-time and during otherwise unused idle time on the machine. This thesis also describes an implementation of this compiler design, the LLVM compiler infrastructure , proving that the design is feasible. The LLVM compiler infrastructure is a maturing and efficient system, which we show is a good host for a variety of research. More information about LLVM can be found on its web site at: iii Acknowledgments This thesis would not be possible without the support of a large number of people who have helped me both in big ways and little. In particular, I would like to thank my advisor, Vikram Adve, for his support, patience, and especially his trust and respect. He has shown me how to communicate ideas more effectively and how to find important and meaningful topics for research. By being demanding, understanding, and allowing me the freedom to explore my interests, he has driven me to succeed. The inspiration for this work certainly stems from one person: Tanya. She has been a continuous source of support, ideas, encouragement, and understanding. 
Despite my many late nights, unimaginable amounts of stress, and a truly odd sense of humor, she has not just tolerated me, but loved me. Another person who made this possible, perhaps without truly understanding his contribution, has been Brian Ensink. Brian has been an invaluable sounding board for ideas, a welcoming ear to occasional frustrations, provider …", "title": "" }, { "docid": "4cb0358724add5f51b598b7dd19c3640", "text": "110 CSEG RECORDER 2006 Special Edition Continued on Page 111 Seismic attributes have come a long way since their intro d u ction in the early 1970s and have become an integral part of seismic interpretation projects. To d a y, they are being used widely for lithological and petrophysical prediction of re s e rvoirs and various methodologies have been developed for their application to broader hydrocarbon exploration and development decision making. Beginning with the digital re c o rding of seismic data in the early 1960s and the ensuing bright spot analysis, the 1970s saw the introduction of complex trace attributes and seismic inversion along with their color displays. This was followed by the development of response attributes, introduction of texture analysis, 2D attributes, horizon and interval attributes and the pervasive use of c o l o r. 3D seismic acquisition dominated the 1990s as the most successful exploration technology of several decades and along with that came the seismic sequence attributes. The c o h e rence technology introduced in the mid 1990s significantly changed the way geophysicists interpreted seismic data. This was followed by the introduction of spectral decomposition in the late 1990s and a host of methods for evaluation of a combination of attributes. These included pattern recognition techniques as well as neural network applications. These developments continued into the new millennium, with enhanced visualization and 3D computation and interpretation of texture and curvature attributes coming to the fore f ront. Of course all this was possible with the power of scientific computing making significant advances during the same period of time. A detailed re c o ns t ruction of these key historical events that lead to the modern seismic attribute analysis may be found in Chopra and Marfurt (2005). The proliferation of seismic attributes in the last two decades has led to attempts to their classification and to bring some order to their chaotic development.", "title": "" }, { "docid": "843ea8a700adf545288175c1062107bb", "text": "Stress is a natural reaction to various stress-inducing factors which can lead to physiological and behavioral changes. If persists for a longer period, stress can cause harmful effects on our body. The body sensors along with the concept of the Internet of Things can provide rich information about one's mental and physical health. The proposed work concentrates on developing an IoT system which can efficiently detect the stress level of a person and provide a feedback which can assist the person to cope with the stressors. The system consists of a smart band module and a chest strap module which can be worn around wrist and chest respectively. The system monitors the parameters such as Electro dermal activity and Heart rate in real time and sends the data to a cloud-based ThingSpeak server serving as an online IoT platform. The computation of the data is performed using a ‘MATLAB Visualization’ application and the stress report is displayed. 
The authorized person can log in, view the report and take actions such as consulting a medical person, perform some meditation or yoga exercises to cope with the condition.", "title": "" }, { "docid": "96bd733f9168bed4e400f315c57a48e8", "text": "New phase transition phenomena have recently been discovered for the stochastic block model, for the special case of two non-overlapping symmetric communities. This gives raise in particular to new algorithmic challenges driven by the thresholds. This paper investigates whether a general phenomenon takes place for multiple communities, without imposing symmetry. In the general stochastic block model SBM(n,p,W), n vertices are split into k communities of relative size {pi}i∈[k], and vertices in community i and j connect independently with probability {Wij}i,j∈[k]. This paper investigates the partial and exact recovery of communities in the general SBM (in the constant and logarithmic degree regimes), and uses the generality of the results to tackle overlapping communities. The contributions of the paper are: (i) an explicit characterization of the recovery threshold in the general SBM in terms of a new f-divergence function D+, which generalizes the Hellinger and Chernoff divergences, and which provides an operational meaning to a divergence function analog to the KL-divergence in the channel coding theorem, (ii) the development of an algorithm that recovers the communities all the way down to the optimal threshold and runs in quasi-linear time, showing that exact recovery has no information-theoretic to computational gap for multiple communities, (iii) the development of an efficient algorithm that detects communities in the constant degree regime with an explicit accuracy bound that can be made arbitrarily close to 1 when a prescribed signal-to-noise ratio [defined in terms of the spectrum of diag(p)W] tends to infinity.", "title": "" }, { "docid": "1f4b3ad078c42404c6aa27d107026b18", "text": "This paper presents circuit design methodologies to enhance the electromagnetic immunity of an output-capacitor-free low-dropout (LDO) regulator. To evaluate the noise performance of an LDO regulator in the small-signal domain, power-supply rejection (PSR) is used. We optimize a bandgap reference circuit for optimum dc PSR, and propose a capacitor cancelation technique circuit for bandwidth compensation, and a low-noise biasing circuit for immunity enhancement in the bias circuit. For large-signal, transient performance enhancement, we suggest using a unity-gain amplifier to minimize the voltage difference of the differential inputs of the error amplifier, and an auxiliary N-channel metal oxide semiconductor (NMOS) pass transistor was used to maintain a stable gate voltage in the pass transistor. The effectiveness of the design methodologies proposed in this paper is verified using circuit simulations using an LDO regulator designed by 0.18-$\\mu$m CMOS process. When sine and pulse signals are applied to the input, the worst dc offset variations were enhanced from 36% to 16% and from 31.7% to 9.7%, respectively, as compared with those of the conventional LDO. We evaluated the noise performance versus the conducted electromagnetic interference generated by the dc–dc converter; the noise reduction level was significantly improved.", "title": "" }, { "docid": "d690cfa0fbb63e53e3d3f7a1c7a6a442", "text": "Ambient intelligence has acquired great importance in recent years and requires the development of new innovative solutions. 
This paper presents a distributed telemonitoring system, aimed at improving healthcare and assistance to dependent people at their homes. The system implements a service-oriented architecture based platform, which allows heterogeneous wireless sensor networks to communicate in a distributed way independent of time and location restrictions. This approach provides the system with a higher ability to recover from errors and a better flexibility to change their behavior at execution time. Preliminary results are presented in this paper.", "title": "" }, { "docid": "1a14570fa1d565aeb78165c72bdf8a4e", "text": "We investigate the ride-sharing assignment problem from an algorithmic resource allocation point of view. Given a number of requests with source and destination locations, and a number of available car locations, the task is to assign cars to requests with two requests sharing one car. We formulate this as a combinatorial optimization problem, and show that it is NP-hard. We then design an approximation algorithm which guarantees to output a solution with at most 2.5 times the optimal cost. Experiments are conducted showing that our algorithm actually has a much better approximation ratio (around 1.2) on synthetically generated data. Introduction The sharing economy is estimated to grow from $14 billion in 2014 to $335 billion by 2025 (Yaraghi and Ravi 2017). As one of the largest components of sharing economy, ride-sharing provides socially efficient transport services that help to save energy and to reduce congestion. Uber has 40 million monthly active riders reported in October 2016 (Kokalitcheva 2016) and Didi Chuxing has more than 400 million users(Tec 2017). A large portion of the revenue of these companies comes from ride sharing with one car catering two passenger requests, which is the topic investigated in this paper. A typical scenario is as follows: There are a large number of requests with pickup and drop-off location information, and a large number of available cars with current location information. One of the tasks is to assign the requests to the cars, with two requests for one car. The assignment needs to be made socially efficient in the sense that the ride sharing does not incur much extra traveling distance for the drivers or and extra waiting time for the passengers. In this paper we investigate this ride-sharing assignment problem from an algorithmic resource allocation point of view. Formally, suppose that there are a set R of requests {(si, ti) ∈ R : i = 1, . . . ,m} where in request i, an agent is at location si and likes to go to location ti. There are also a set D of taxis {dk ∈ R : k = 1, . . . , n}, with taxi k currently at location dk. The task is to assign two agents i and j to one taxi k, so that the total driving distance is as small as possible. The distance measure d(x, y) here can be Copyright c © 2018, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Manhattan distance (i.e., 1-norm), Euclidean distance (i.e., 2-norm), or distance on graphs if a city map is available. Here for any fixed tuple (k, {i, j}), the driver of taxi k has four possible routes, from the combination of the following two choices: he can pick agent i first or agent j first, and he can drop agent i first or drop agent j first. We assume that the driver is experienced enough to take the best among these four choices. Thus we use the total distance of this best route as the driving cost of tuple (k, {i, j}), denoted by cost(k, {i, j}). 
We hope to find an assignment M = {(k, {i, j}) : 1 ≤ i, j ≤ m, 1 ≤ k ≤ n} that assigns the maximum number of requests, and in the meanwhile with the cost(M) = ∑ (k,{i,j})∈M cost(k, {i, j}), summation of the driving cost, as small as possible. Here an assignment is a matching in the graph in the sense that each element in R∪D appears at most once in M . In this paper, we formulate this ride-sharing assignment as a combinatorial optimization problem. We show that the problem is NP-hard, and then present an approximation algorithm which, on any input, runs in time O(n) and outputs a solution M with cost(M) at most 2.5 times the optimal value. Our algorithm does not assume specific distance measure; indeed it works for any distance1. We conducted experiments where inputs are generated from uniform distributions and Gaussian mixture distributions. The approximation ratio on these empirical data is about 1.1-1.2, which is much better than the worst case guarantee 2.5. In addition, the results indicate that the larger n and m are, the better the approximation ratio is. Considering that n and m are very large numbers in practice, the performance of our algorithm may be even more satisfactory for practical scenarios. Related Work Ridesharing has become a key feature to increase urban transportation sustainability and is an active field of research. Several pieces of work have looked at dynamic ridesharing (Caramia et al. 2002; Fabri and Recht 2006; Agatz et al. 2012; Santos and Xavier 2013; Alonso-Mora et al. 2017), and multi-hop ridesharing (Herbawi and Weber 2011; Drews and Luxen 2013; Teubner and Flath 2015). That is, the algorithm only needs that d is nonnegative, symmetric and satisfies the triangle inequality. The Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18)", "title": "" }, { "docid": "448d4704991a2bdc086df8f0d7920ec5", "text": "Global progress in the industrial field, which has led to the definition of the Industry 4.0 concept, also affects other spheres of life. One of them is the education. The subject of the article is to summarize the emerging trends in education in relation to the requirements of Industry 4.0 and present possibilities of their use. One option is using augmented reality as part of a modular learning system. The main idea is to combine the elements of the CPS technology concept with modern IT features, with emphasis on simplicity of solution and hardware ease. The synthesis of these principles can combine in a single image on a conventional device a realistic view at the technological equipment, complemented with interactive virtual model of the equipment, the technical data and real-time process information.", "title": "" }, { "docid": "218b2f7a8e088c1023202bd27164b780", "text": "The explanation of crime has been preoccupied with individuals and communities as units of analysis. Recent work on offender decision making (Cornish and Clarke, 1986), situations (Clarke, 1983, 1992), environments (Brantingham and Brantingham 1981, 1993), routine activities (Cohen and Felson, 1979; Felson, 1994), and the spatial organization of drug dealing in the U.S. suggest a new unit of analysis: places. Crime is concentrated heavily in a Jew \"hot spots\" of crime (Sherman et aL 1989). The concentration of crime among repeat places is more intensive than it is among repeat offenders (Spelman and Eck, 1989). 
The components of this concentration are analogous to the components of the criminal careers of persons: onset, desistance, continuance, specialization, and desistance. The theoretical explanationfor variance in these components is also stronger at the level of places than it is for individuals. These facts suggest a need for rethinking theories of crime, as well as a new approach to theorizing about crime for", "title": "" }, { "docid": "f4ee2fa60eb67b7081085ed222627115", "text": "Recent advances in deep-learning-based applications have attracted a growing attention from the IoT community. These highly capable learning models have shown significant improvements in expected accuracy of various sensory inference tasks. One important and yet overlooked direction remains to provide uncertainty estimates in deep learning outputs. Since robustness and reliability of sensory inference results are critical to IoT systems, uncertainty estimates are indispensable for IoT applications. To address this challenge, we develop ApDeepSense, an effective and efficient deep learning uncertainty estimation method for resource-constrained IoT devices. ApDeepSense leverages an implicit Bayesian approximation that links neural networks to deep Gaussian processes, allowing output uncertainty to be quantified. Our approach is shown to significantly reduce the execution time and energy consumption of uncertainty estimation thanks to a novel layer-wise approximation that replaces the traditional computationally intensive sampling-based uncertainty estimation methods. ApDeepSense is designed for neural net-works trained using dropout; one of the most widely used regularization methods in deep learning. No additional training is needed for uncertainty estimation purposes. We evaluate ApDeepSense using four IoT applications on Intel Edison devices. Results show that ApDeepSense can reduce around 88.9% of the execution time and 90.0% of the energy consumption, while producing more accurate uncertainty estimates compared with state-of-the-art methods.", "title": "" }, { "docid": "3e2e2aace1ddade88f3c8a6b7157af6b", "text": "Verb learning is clearly a function of observation of real-world contingencies; however, it is argued that such observational information is insufficient to account fully for vocabulary acquisition. This paper provides an experimental validation of Landau & Gleitman's (1985) syntactic bootstrapping procedure; namely, that children may use syntactic information to learn new verbs. Pairs of actions were presented simultaneously with a nonsense verb in one of two syntactic structures. The actions were subsequently separated, and the children (MA = 2;1) were asked to select which action was the referent for the verb. The children's choice of referent was found to be a function of the syntactic structure in which the verb had appeared.", "title": "" }, { "docid": "24006b9eb670c84904b53320fbedd32c", "text": "Maturity Models have been introduced, over the last four decades, as guides and references for Information System management in organizations from different sectors of activity. In the healthcare field, Maturity Models have also been used to deal with the enormous complexity and demand of Hospital Information Systems. This article presents a research project that aimed to develop a new comprehensive model of maturity for a health area. 
HISMM (Hospital Information System Maturity Model) was developed to address a complexity of SIH and intends to offer a useful tool for the demanding role of its management. The HISMM has the peculiarity of congregating a set of key maturity Influence Factors and respective characteristics, enabling not only the assessment of the global maturity of a HIS but also the individual maturity of its different dimensions. In this article, we present the methodology for the development of Maturity Models adopted for the creation of HISMM and the underlying reasons for its choice.", "title": "" }, { "docid": "c0d2fcd6daeb433a5729a412828372f8", "text": "Most 3D reconstruction approaches passively optimise over all data, exhaustively matching pairs, rather than actively selecting data to process. This is costly both in terms of time and computer resources, and quickly becomes intractable for large datasets. This work proposes an approach to intelligently filter large amounts of data for 3D reconstructions of unknown scenes using monocular cameras. Our contributions are twofold: First, we present a novel approach to efficiently optimise the Next-Best View (NBV) in terms of accuracy and coverage using partial scene geometry. Second, we extend this to intelligently selecting stereo pairs by jointly optimising the baseline and vergence to find the NBV’s best stereo pair to perform reconstruction. Both contributions are extremely efficient, taking 0.8ms and 0.3ms per pose, respectively. Experimental evaluation shows that the proposed method allows efficient selection of stereo pairs for reconstruction, such that a dense model can be obtained with only a small number of images. Once a complete model has been obtained, the remaining computational budget is used to intelligently refine areas of uncertainty, achieving results comparable to state-of-the-art batch approaches on the Middlebury dataset, using as little as 3.8% of the views.", "title": "" }, { "docid": "1de2d4e5b74461c142e054ffd2e62c2d", "text": "Table : Comparisons of CNN, LSTM and SWEM architectures. Columns correspond to the number of compositional parameters, computational complexity and sequential operations, respectively. v Consider a text sequence represented as X, composed of a sequence of words. Let {v#, v$, ...., v%} denote the respective word embeddings for each token, where L is the sentence/document length; v The compositional function, X → z, aims to combine word embeddings into a fixed-length sentence/document representation z. Typically, LSTM or CNN are employed for this purpose;", "title": "" } ]
scidocsrr
c1e2a84ff4366325837e576dd0549e24
High gain 2.45 GHz 2×2 patch array stacked antenna
[ { "docid": "3bb4d0f44ed5a2c14682026090053834", "text": "A Meander Line Antenna (MLA) for 2.45 GHz is proposed. This research focuses on the optimum value of gain and reflection coefficient. Therefore, the MLA's parametric studies is discussed which involved the number of turn, width of feed (W1), length of feed (LI) and vertical length partial ground (L3). As a result, the studies have significantly achieved MLA's gain and reflection coefficient of 3.248dB and -45dB respectively. The MLA also resembles the monopole antenna behavior of Omni-directional radiation pattern. Measured and simulated results are presented. The proposed antenna has big potential to be implemented for WLAN device such as optical mouse application.", "title": "" } ]
[ { "docid": "322161b4a43b56e4770d239fe4d2c4c0", "text": "Graph pattern matching has become a routine process in emerging applications such as social networks. In practice a data graph is typically large, and is frequently updated with small changes. It is often prohibitively expensive to recompute matches from scratch via batch algorithms when the graph is updated. With this comes the need for incremental algorithms that compute changes to the matches in response to updates, to minimize unnecessary recomputation. This paper investigates incremental algorithms for graph pattern matching defined in terms of graph simulation, bounded simulation and subgraph isomorphism. (1) For simulation, we provide incremental algorithms for unit updates and certain graph patterns. These algorithms are optimal: in linear time in the size of the changes in the input and output, which characterizes the cost that is inherent to the problem itself. For general patterns we show that the incremental matching problem is unbounded, i.e., its cost is not determined by the size of the changes alone. (2) For bounded simulation, we show that the problem is unbounded even for unit updates and path patterns. (3) For subgraph isomorphism, we show that the problem is intractable and unbounded for unit updates and path patterns. (4) For multiple updates, we develop an incremental algorithm for each of simulation, bounded simulation and subgraph isomorphism. We experimentally verify that these incremental algorithms significantly outperform their batch counterparts in response to small changes, using real-life data and synthetic data.", "title": "" }, { "docid": "1561ef2d0c846e8faa765aae2a7ad922", "text": "We propose a novel monocular visual inertial odometry algorithm that combines the advantages of EKF-based approaches with those of direct photometric error minimization methods. The method is based on sparse, very small patches and incorporates the minimization of photometric error directly into the EKF measurement model so that inertial data and vision-based surface measurements are used simultaneously during camera pose estimation. We fuse vision-based and inertial measurements almost at the raw-sensor level, allowing the estimated system state to constrain and guide image-space measurements. Our formulation allows for an efficient implementation that runs in real-time on a standard CPU and has several appealing and unique characteristics such as being robust to fast camera motion, in particular rotation, and not depending on the presence of corner-like features in the scene. We experimentally demonstrate robust and accurate performance compared to ground truth and show that our method works on scenes containing only non-intersecting lines.", "title": "" }, { "docid": "be1ac1b39ed75cb2ae2739ea1a443821", "text": "In this paper, we consider the problems of generating all maximal (bipartite) cliques in a given (bipartite) graph G = (V, E) with n vertices and m edges. We propose two algorithms for enumerating all maximal cliques. One runs with O(M(n)) time delay and in O(n) space and the other runs with O(∆) time delay and in O(n + m) space, where ∆ denotes the maximum degree of G, M(n) denotes the time needed to multiply two n×n matrices, and the latter one requires O(nm) time as a preprocessing. For a given bipartite graph G, we propose three algorithms for enumerating all maximal bipartite cliques. 
The first algorithm runs with O(M(n)) time delay and in O(n) space, which immediately follows from the algorithm for the nonbipartite case. The second one runs with O(∆) time delay and in O(n + m) space, and the last one runs with O(∆) time delay and in O(n + m + N∆) space, where N denotes the number of all maximal bipartite cliques in G and both algorithms require O(nm) time as a preprocessing. Our algorithms improve upon all the existing algorithms, when G is either dense or sparse. Furthermore, computational experiments show that our algorithms for sparse graphs have significantly good performance for graphs which are generated randomly and appear in real-world problems.", "title": "" }, { "docid": "ad48ba2fa5ab113fbdf5d9c148f9596d", "text": "BACKGROUND\nThe Prophylactic hypOthermia to Lessen trAumatic bRain injury-Randomised Controlled Trial (POLAR-RCT) will evaluate whether early and sustained prophylactic hypothermia delivered to patients with severe traumatic brain injury improves patient-centred outcomes.\n\n\nMETHODS\nThe POLAR-RCT is a multicentre, randomised, parallel group, phase III trial of early, prophylactic cooling in critically ill patients with severe traumatic brain injury, conducted in Australia, New Zealand, France, Switzerland, Saudi Arabia and Qatar. A total of 511 patients aged 18-60 years have been enrolled with severe acute traumatic brain injury. The trial intervention of early and sustained prophylactic hypothermia to 33 °C for 72 h will be compared to standard normothermia maintained at a core temperature of 37 °C. The primary outcome is the proportion of favourable neurological outcomes, comprising good recovery or moderate disability, observed at six months following randomisation utilising a midpoint dichotomisation of the Extended Glasgow Outcome Scale (GOSE). Secondary outcomes, also assessed at six months following randomisation, include the probability of an equal or greater GOSE level, mortality, the proportions of patients with haemorrhage or infection, as well as assessment of quality of life and health economic outcomes. The planned sample size will allow 80% power to detect a 30% relative risk increase from 50% to 65% (equivalent to a 15% absolute risk increase) in favourable neurological outcome at a two-sided alpha of 0.05.\n\n\nDISCUSSION\nConsistent with international guidelines, a detailed and prospective analysis plan has been developed for the POLAR-RCT. This plan specifies the statistical models for evaluation of primary and secondary outcomes, as well as defining covariates for adjusted analyses and methods for exploratory analyses. Application of this statistical analysis plan to the forthcoming POLAR-RCT trial will facilitate unbiased analyses of these important clinical data.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov, NCT00987688 (first posted 1 October 2009); Australian New Zealand Clinical Trials Registry, ACTRN12609000764235 . Registered on 3 September 2009.", "title": "" }, { "docid": "4467f4fc7e9f1199ca6b57f7818ca42c", "text": "Banking in several developing countries has transcended from a traditional brick-and mortar model of customers queuing for services in the banks to modern day banking where banks can be reached at any point for their services. This can be attributed to the tremendous growth in mobile penetration in many countries across the globe including Jordan. The current exploratory study is an attempt to identify the underlying factors that affects mobile banking adoption in Jordan. 
Data for this study have been collected using a questionnaire containing 22 questions. Out of 450 questionnaires that have been distributed, 301 are returned (66.0%). In the survey, factors that may affect Jordanian mobile phone users' to adopt mobile banking services were examined. The research findings suggested that all the six factors; self efficacy, trailability, compatibility, complexity, risk and relative advantage were statistically significant in influencing mobile banking adoption.", "title": "" }, { "docid": "3f807cb7e753ebd70558a0ce74b416b7", "text": "In this paper, we study the problem of recovering a tensor with missing data. We propose a new model combining the total variation regularization and low-rank matrix factorization. A block coordinate decent (BCD) algorithm is developed to efficiently solve the proposed optimization model. We theoretically show that under some mild conditions, the algorithm converges to the coordinatewise minimizers. Experimental results are reported to demonstrate the effectiveness of the proposed model and the efficiency of the numerical scheme. © 2015 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "a33e8a616955971014ceea9da1e8fcbe", "text": "Highlights Auditory middle and late latency responses can be recorded reliably from ear-EEG.For sources close to the ear, ear-EEG has the same signal-to-noise-ratio as scalp.Ear-EEG is an excellent match for power spectrum-based analysis. A method for measuring electroencephalograms (EEG) from the outer ear, so-called ear-EEG, has recently been proposed. The method could potentially enable robust recording of EEG in natural environments. The objective of this study was to substantiate the ear-EEG method by using a larger population of subjects and several paradigms. For rigor, we considered simultaneous scalp and ear-EEG recordings with common reference. More precisely, 32 conventional scalp electrodes and 12 ear electrodes allowed a thorough comparison between conventional and ear electrodes, testing several different placements of references. The paradigms probed auditory onset response, mismatch negativity, auditory steady-state response and alpha power attenuation. By comparing event related potential (ERP) waveforms from the mismatch response paradigm, the signal measured from the ear electrodes was found to reflect the same cortical activity as that from nearby scalp electrodes. It was also found that referencing the ear-EEG electrodes to another within-ear electrode affects the time-domain recorded waveform (relative to scalp recordings), but not the timing of individual components. It was furthermore found that auditory steady-state responses and alpha-band modulation were measured reliably with the ear-EEG modality. Finally, our findings showed that the auditory mismatch response was difficult to monitor with the ear-EEG. We conclude that ear-EEG yields similar performance as conventional EEG for spectrogram-based analysis, similar timing of ERP components, and equal signal strength for sources close to the ear. Ear-EEG can reliably measure activity from regions of the cortex which are located close to the ears, especially in paradigms employing frequency-domain analyses.", "title": "" }, { "docid": "ad7f49832562d27534f11b162e28f51b", "text": "Gaze is an important component of social interaction. The function, evolution and neurobiology of gaze processing are therefore of interest to a number of researchers. 
This review discusses the evolutionary role of social gaze in vertebrates (focusing on primates), and a hypothesis that this role has changed substantially for primates compared to other animals. This change may have been driven by morphological changes to the face and eyes of primates, limitations in the facial anatomy of other vertebrates, changes in the ecology of the environment in which primates live, and a necessity to communicate information about the environment, emotional and mental states. The eyes represent different levels of signal value depending on the status, disposition and emotional state of the sender and receiver of such signals. There are regions in the monkey and human brain which contain neurons that respond selectively to faces, bodies and eye gaze. The ability to follow another individual's gaze direction is affected in individuals with autism and other psychopathological disorders, and after particular localized brain lesions. The hypothesis that gaze following is \"hard-wired\" in the brain, and may be localized within a circuit linking the superior temporal sulcus, amygdala and orbitofrontal cortex is discussed.", "title": "" }, { "docid": "cf341e272dcc4773829f09e36a0519b3", "text": "Malicious Web sites are a cornerstone of Internet criminal activities. The dangers of these sites have created a demand for safeguards that protect end-users from visiting them. This article explores how to detect malicious Web sites from the lexical and host-based features of their URLs. We show that this problem lends itself naturally to modern algorithms for online learning. Online algorithms not only process large numbers of URLs more efficiently than batch algorithms, they also adapt more quickly to new features in the continuously evolving distribution of malicious URLs. We develop a real-time system for gathering URL features and pair it with a real-time feed of labeled URLs from a large Web mail provider. From these features and labels, we are able to train an online classifier that detects malicious Web sites with 99% accuracy over a balanced dataset.", "title": "" }, { "docid": "8588a3317d4b594d8e19cb005c3d35c7", "text": "Histograms of Oriented Gradients (HOG) is one of the wellknown features for object recognition. HOG features are calculated by taking orientation histograms of edge intensity in a local region. N.Dalal et al. proposed an object detection algorithm in which HOG features were extracted from all locations of a dense grid on a image region and the combined features are classified by using linear Support Vector Machine (SVM). In this paper, we employ HOG features extracted from all locations of a grid on the image as candidates of the feature vectors. Principal Component Analysis (PCA) is applied to these HOG feature vectors to obtain the score (PCA-HOG) vectors. Then a proper subset of PCA-HOG feature vectors is selected by using Stepwise Forward Selection (SFS) algorithm or Stepwise Backward Selection (SBS) algorithm to improve the generalization performance. The selected PCA-HOG feature vectors are used as an input of linear SVM to classify the given input into pedestrian/non-pedestrian. The improvement of the recognition rates are confirmed through experiments using MIT pedestrian dataset.", "title": "" }, { "docid": "955201c5191774ca14ea38e473bd7d04", "text": "We advocate a relation based approach to Argumentation Mining. Our focus lies on the extraction of argumentative relations instead of the identification of arguments, themselves. 
By classifying pairs of sentences according to the relation that holds between them we are able to identify sentences that may be factual when considered in isolation, but carry argumentative meaning when read in context. We describe scenarios in which this is useful, as well as a corpus of annotated sentence pairs we are developing to provide a testbed for this approach.", "title": "" }, { "docid": "c0c30c3b9539511e9079ec7894ad754f", "text": "Cardiovascular disease remains the world's leading cause of death. Yet, we have known for decades that the vast majority of atherosclerosis and its subsequent morbidity and mortality are influenced predominantly by diet. This paper will describe a health-promoting whole food, plant-based diet; delineate macro- and micro-nutrition, emphasizing specific geriatric concerns; and offer guidance to physicians and other healthcare practitioners to support patients in successfully utilizing nutrition to improve their health.", "title": "" }, { "docid": "f05d7f391d6d805308801d23bc3234f0", "text": "Identifying patterns in large high dimensional data sets is a challenge. As the number of dimensions increases, the patterns in the data sets tend to be more prominent in the subspaces than the original dimensional space. A system to facilitate presentation of such subspace oriented patterns in high dimensional data sets is required to understand the data.\n Heidi is a high dimensional data visualization system that captures and visualizes the closeness of points across various subspaces of the dimensions; thus, helping to understand the data. The core concept behind Heidi is based on prominence of patterns within the nearest neighbor relations between pairs of points across the subspaces.\n Given a d-dimensional data set as input, Heidi system generates a 2-D matrix represented as a color image. This representation gives insight into (i) how the clusters are placed with respect to each other, (ii) characteristics of placement of points within a cluster in all the subspaces and (iii) characteristics of overlapping clusters in various subspaces.\n A sample of results displayed and discussed in this paper illustrate how Heidi Visualization can be interpreted.", "title": "" }, { "docid": "8ca55e6a146406634335ccc1914a09d2", "text": "In this paper we present the results of a simulation study to explore the ability of Bayesian parametric and nonparametric models to provide an adequate fit to count data, of the type that would routinely be analyzed parametrically either through fixed-effects or random-effects Poisson models. The context of the study is a randomized controlled trial with two groups (treatment and control). Our nonparametric approach utilizes several modeling formulations based on Dirichlet process priors. We find that the nonparametric models are able to flexibly adapt to the data, to offer rich posterior inference, and to provide, in a variety of settings, more accurate predictive inference than parametric models.", "title": "" }, { "docid": "3bf5eaa6400ae63000a1d100114fe8fd", "text": "In Fig. 4e of this Article, the labels for ‘Control’ and ‘HFD’ were reversed (‘Control’ should have been labelled blue rather than purple, and ‘HFD’ should have been labelled purple rather than blue). Similarly, in Fig. 4f of this Article, the labels for ‘V’ and ‘GW’ were reversed (‘V’ should have been labelled blue rather than purple, and ‘GW’ should have been labelled purple instead of blue). 
The original figure has been corrected online.", "title": "" }, { "docid": "f309d2f237f4451bea75767f53277143", "text": "Most problems in computational geometry are algebraic. A general approach to address nonrobustness in such problems is Exact Geometric Computation (EGC). There are now general libraries that support EGC for the general programmer (e.g., Core Library, LEDA Real). Many applications require non-algebraic functions as well. In this paper, we describe how to provide non-algebraic functions in the context of other EGC capabilities. We implemented a multiprecision hypergeometric series package which can be used to evaluate common elementary math functions to an arbitrary precision. This can be achieved relatively easily using the Core Library which supports a guaranteed precision level of accuracy. We address several issues of efficiency in such a hypergeometric package: automatic error analysis, argument reduction, preprocessing of hypergeometric parameters, and precomputed constants. Some preliminary experimental results are reported.", "title": "" }, { "docid": "cbad7caa1cc1362e8cd26034617c39f4", "text": "Many state-machine Byzantine Fault Tolerant (BFT) protocols have been introduced so far. Each protocol addressed a different subset of conditions and use-cases. However, if the underlying conditions of a service span different subsets, choosing a single protocol will likely not be a best fit. This yields robustness and performance issues which may be even worse in services that exhibit fluctuating conditions and workloads. In this paper, we reconcile existing state-machine BFT protocols in a single adaptive BFT system, called ADAPT, aiming at covering a larger set of conditions and use-cases, probably the union of individual subsets of these protocols. At anytime, a launched protocol in ADAPT can be aborted and replaced by another protocol according to a potential change (an event) in the underlying system conditions. The launched protocol is chosen according to an \"evaluation process\" that takes into consideration both: protocol characteristics and its performance. This is achieved by applying some mathematical formulas that match the profiles of protocols to given user (e.g., service owner) preferences. ADAPT can assess the profiles of protocols (e.g., throughput) at run-time using Machine Learning prediction mechanisms to get accurate evaluations. We compare ADAPT with well known BFT protocols showing that it outperforms others as system conditions change and under dynamic workloads.", "title": "" }, { "docid": "417ba025ea47d354b8e087d37ddb3655", "text": "User satisfaction in computer games seems to be influenced by game balance, the level of challenge faced by the user. This work presents an evaluation, performed by human players, of dynamic game balancing approaches. The results indicate that adaptive approaches are more effective. This paper also enumerates some issues encountered in evaluating users’ satisfaction, in the context of games, and depicts some learned lessons.", "title": "" }, { "docid": "b14a77c6e663af1445e466a3e90d4e5f", "text": "This paper proposes an approach for applying GANs to NMT. We build a conditional sequence generative adversarial net which comprises of two adversarial sub models, a generator and a discriminator. 
The generator aims to generate sentences which are hard to be discriminated from human-translated sentences ( i.e., the golden target sentences); And the discriminator makes efforts to discriminate the machine-generated sentences from humantranslated ones. The two sub models play a mini-max game and achieve the win-win situation when they reach a Nash Equilibrium. Additionally, the static sentence-level BLEU is utilized as the reinforced objective for the generator, which biases the generation towards high BLEU points. During training, both the dynamic discriminator and the static BLEU objective are employed to evaluate the generated sentences and feedback the evaluations to guide the learning of the generator. Experimental results show that the proposed model consistently outperforms the traditional RNNSearch and the newly emerged state-ofthe-art Transformer on English-German and Chinese-English translation tasks.", "title": "" }, { "docid": "231554e78d509e7bca2dfd4280b411bb", "text": "Layered models provide a compelling approach for estimating image motion and segmenting moving scenes. Previous methods, however, have failed to capture the structure of complex scenes, provide precise object boundaries, effectively estimate the number of layers in a scene, or robustly determine the depth order of the layers. Furthermore, previous methods have focused on optical flow between pairs of frames rather than longer sequences. We show that image sequences with more frames are needed to resolve ambiguities in depth ordering at occlusion boundaries; temporal layer constancy makes this feasible. Our generative model of image sequences is rich but difficult to optimize with traditional gradient descent methods. We propose a novel discrete approximation of the continuous objective in terms of a sequence of depth-ordered MRFs and extend graph-cut optimization methods with new “moves” that make joint layer segmentation and motion estimation feasible. Our optimizer, which mixes discrete and continuous optimization, automatically determines the number of layers and reasons about their depth ordering. We demonstrate the value of layered models, our optimization strategy, and the use of more than two frames on both the Middlebury optical flow benchmark and the MIT layer segmentation benchmark.", "title": "" } ]
scidocsrr
a5aed03c53584ff0d80bae4c3c78edb3
SenSprout: inkjet-printed soil moisture and leaf wetness sensor
[ { "docid": "3c55948ba5466b04c7b3c1005d4f749f", "text": "Energy harvesting is a key technique that can be used to overcome the barriers that prevent the real world deployment of wireless sensor networks (WSNs). In particular, solar energy harvesting has been commonly used to overcome this barrier. However, it should be noted that WSNs operating on solar power suffer form energy shortage during nighttimes. Therefore, to solve this problem, we exploit the use of TV broadcasts airwaves as energy sources to power wireless sensor nodes. We measured the output of a rectenna continuously for 7 days; from the results of this measurement, we showed that Radio Frequency (RF) energy can always be harvested. We developed an RF energy harvesting WSN prototype to show the effectiveness of RF energy harvesting for the usage of a WSN. We also proposed a duty cycle determination method for our system, and verified the validity of this method by implementing our system. This RF energy harvesting method is effective in a long period measurement application that do not require high power consumption.", "title": "" } ]
[ { "docid": "5b6daefbefd44eea4e317e673ad91da3", "text": "A three-dimensional (3-D) thermogram can provide spatial information; however, it is rarely applied because it lacks an accurate method in obtaining the intrinsic and extrinsic parameters of an infrared (IR) camera. Conventional methods cannot be used for such calibration because an IR camera cannot capture visible calibration patterns. Therefore, in the current study, a trinocular vision system composed of two visible cameras and an IR camera is constructed and a calibration board with miniature bulbs is designed. The two visible cameras compose a binocular vision system that obtains 3-D information from the miniature bulbs while the IR camera captures the calibration board to obtain the two dimensional subpixel coordinates of miniature bulbs. The corresponding algorithm is proposed to calibrate the IR camera based on the gathered information. Experimental results show that the proposed calibration can accurately obtain the intrinsic and extrinsic parameters of the IR camera, and meet the requirements of its application.", "title": "" }, { "docid": "7dd86bc341e2637505387a96c16ea9c8", "text": "This paper focuses on the relationship between fine art movements in the 20th C and the pioneers of digital art from 1956 to 1986. The research is part of a project called Digital Art Museum, which is an electronic archive devoted to the history and practice of computer art, and is also active in curating exhibitions of the work. While computer art genres never became mainstream art movements, there are clear areas of common interest, even when these are separated by some decades.", "title": "" }, { "docid": "be1ac1b39ed75cb2ae2739ea1a443821", "text": "In this paper, we consider the problems of generating all maximal (bipartite) cliques in a given (bipartite) graph G = (V, E) with n vertices and m edges. We propose two algorithms for enumerating all maximal cliques. One runs with O(M(n)) time delay and in O(n) space and the other runs with O(∆) time delay and in O(n + m) space, where ∆ denotes the maximum degree of G, M(n) denotes the time needed to multiply two n×n matrices, and the latter one requires O(nm) time as a preprocessing. For a given bipartite graph G, we propose three algorithms for enumerating all maximal bipartite cliques. The first algorithm runs with O(M(n)) time delay and in O(n) space, which immediately follows from the algorithm for the nonbipartite case. The second one runs with O(∆) time delay and in O(n + m) space, and the last one runs with O(∆) time delay and in O(n + m + N∆) space, where N denotes the number of all maximal bipartite cliques in G and both algorithms require O(nm) time as a preprocessing. Our algorithms improve upon all the existing algorithms, when G is either dense or sparse. Furthermore, computational experiments show that our algorithms for sparse graphs have significantly good performance for graphs which are generated randomly and appear in real-world problems.", "title": "" }, { "docid": "0642dd233fb6f25159eb0f7d030a1764", "text": "Integrating games into the computer science curriculum has been gaining acceptance in recent years, particularly when used to improve student engagement in introductory courses. This paper argues that games can also be useful in upper level courses, such as general artificial intelligence and machine learning. 
We provide a case study of using a Mario game in a machine learning class to provide one successful data point where both content-specific and general learning outcomes were successfully achieved.", "title": "" }, { "docid": "2936f8e1f9a6dcf2ba4fdbaee73684e2", "text": "Recently the world of the web has become more social and more real-time. Facebook and Twitter are perhaps the exemplars of a new generation of social, real-time web services and we believe these types of service provide a fertile ground for recommender systems research. In this paper we focus on one of the key features of the social web, namely the creation of relationships between users. Like recent research, we view this as an important recommendation problem -- for a given user, UT which other users might be recommended as followers/followees -- but unlike other researchers we attempt to harness the real-time web as the basis for profiling and recommendation. To this end we evaluate a range of different profiling and recommendation strategies, based on a large dataset of Twitter users and their tweets, to demonstrate the potential for effective and efficient followee recommendation.", "title": "" }, { "docid": "56fa6f96657182ff527e42655bbd0863", "text": "Nootropics or smart drugs are well-known compounds or supplements that enhance the cognitive performance. They work by increasing the mental function such as memory, creativity, motivation, and attention. Recent researches were focused on establishing a new potential nootropic derived from synthetic and natural products. The influence of nootropic in the brain has been studied widely. The nootropic affects the brain performances through number of mechanisms or pathways, for example, dopaminergic pathway. Previous researches have reported the influence of nootropics on treating memory disorders, such as Alzheimer's, Parkinson's, and Huntington's diseases. Those disorders are observed to impair the same pathways of the nootropics. Thus, recent established nootropics are designed sensitively and effectively towards the pathways. Natural nootropics such as Ginkgo biloba have been widely studied to support the beneficial effects of the compounds. Present review is concentrated on the main pathways, namely, dopaminergic and cholinergic system, and the involvement of amyloid precursor protein and secondary messenger in improving the cognitive performance.", "title": "" }, { "docid": "cb929b640f8ee7b550512dd4d0dc8e17", "text": "The existing machine translation systems, whether phrase-based or neural, have relied almost exclusively on word-level modelling with explicit segmentation. In this paper, we ask a fundamental question: can neural machine translation generate a character sequence without any explicit segmentation? To answer this question, we evaluate an attention-based encoder– decoder with a subword-level encoder and a character-level decoder on four language pairs–En-Cs, En-De, En-Ru and En-Fi– using the parallel corpora from WMT’15. Our experiments show that the models with a character-level decoder outperform the ones with a subword-level decoder on all of the four language pairs. 
Furthermore, the ensembles of neural models with a character-level decoder outperform the state-of-the-art non-neural machine translation systems on En-Cs, En-De and En-Fi and perform comparably on En-Ru.", "title": "" }, { "docid": "d3c7900e22ab8d4dd52fa12f47fbba09", "text": "In this paper, an obstacle-surmounting-enabled lower limb exoskeleton with novel linkage joints that perfectly mimicked human motions was proposed. Currently, most lower exoskeletons that use linear actuators have a direct connection between the wearer and the controlled part. Compared to the existing joints, the novel linkage joint not only fitted better into compact chasis, but also provided greater torque when the joint was at a large bend angle. As a result, it extended the angle range of joint peak torque output. With any given power, torque was prioritized over rotational speed, because instead of rotational speed, sufficiency of torque is the premise for most joint actions. With insufficient torque, the exoskeleton will be a burden instead of enhancement to its wearer. With optimized distribution of torque among the joints, the novel linkage method may contribute to easier exoskeleton movements.", "title": "" }, { "docid": "5a077d1d4d6c212b7f817cc115bf31bd", "text": "Focus group interviews are widely used in health research to explore phenomena and are accepted as a legitimate qualitative methodology. They are used to draw out interaction data from discussions among participants; researchers running these groups need to be skilled in interviewing and in managing groups, group dynamics and group discussions. This article follows Doody et al's (2013) article on the theory of focus group research; it addresses the preparation for focus groups relating to the research environment, interview process, duration, participation of group members and the role of the moderator. The article aims to assist researchers to prepare and plan for focus groups and to develop an understanding of them, so information from the groups can be used for academic studies or as part of a research proposal.", "title": "" }, { "docid": "2b53e3494d58b2208f95d5bb67589677", "text": "In his paper ‘Logic and conversation’ Grice (1989: 37) introduced a distinction between generalized and particularized conversational implicatures. His notion of a generalized conversational implicature (GCI) has been developed in two competing directions, by neo-Griceans such as Horn (1989) and Levinson (1983, 1987b, 1995, 2000) on the one hand, and relevance theorists such as Sperber & Wilson (1986) and Carston (1988, 1993, 1995, 1997, 1998a,b) on the other. Levinson defends the claim that GCIs are inferred on the basis of a set of default heuristics that are triggered by the presence of certain sorts of lexical items. These default inferences will be drawn unless something unusual in the context blocks them. Carston reconceives GCIs as contents that a speaker directly communicates, rather than as contents that are merely conversationally implicated. GCIs are treated as pragmatic developments of semantically underspecified logical forms. They are not the products of default inferences, since what is communicated depends heavily on the specific context, and not merely on the presence or absence of certain lexical items. We introduce two processing models, the Default Model and the Underspecified Model, that are inspired by these rival theoretical views. This paper describes an eye monitoring experiment that is intended to test the predictions of these two models. 
Our primary concern is to make a case for the claim that it is fruitful to apply an eye tracking methodology to an area of pragmatic research that has not previously been explored from a processing perspective.", "title": "" }, { "docid": "9bea0e85c3de06ef440c255700b041fd", "text": "Preterm birth and infants’ admission to neonatal intensive care units (NICU) are associated with significant emotional and psychological stresses on mothers that interfere with normal mother-infant relationship. Maternal selfefficacy in parenting ability may predict long-term outcome of mother-infant relationship as well as neurodevelopmental and behavioral development of preterm infants. The Perceived Maternal Parenting Self-Efficacy (PMP S-E) tool was developed to measure self-efficacy in mothers of premature infants in the United Kingdom. The present study determined if maternal and neonatal characteristics could predict PMP S-E scores of mothers who were administered to in a mid-west community medical center NICU. Mothers whose infants were born less than 37 weeks gestational age and admitted to a level III neonatal intensive care unit participated. Participants completed the PMP S-E and demographic survey prior to discharge. A logistic regression analysis was conducted from PMP SE scores involving 103 dyads using maternal education, race, breast feeding, maternal age, infant’s gestational age, Apgar 5-minute score, birth weight, mode of delivery and time from birth to completion of PMP S-E questionnaire. Time to completion of survey and gestational age were the significant predictors of PMP S-E scores. The finding of this study concerning the utilization of the PMP S-E in a United States mid-west tertiary neonatal center suggest that interpretation of the score requires careful consideration of these two variables.", "title": "" }, { "docid": "7ec6540b44b23a0380dcb848239ccac4", "text": "There is plenty of theoretical and empirical evidence that depth of neural networks is a crucial ingredient for their success. However, network training becomes more difficult with increasing depth and training of very deep networks remains an open problem. In this extended abstract, we introduce a new architecture designed to ease gradient-based training of very deep networks. We refer to networks with this architecture as highway networks, since they allow unimpeded information flow across several layers on information highways. The architecture is characterized by the use of gating units which learn to regulate the flow of information through a network. Highway networks with hundreds of layers can be trained directly using stochastic gradient descent and with a variety of activation functions, opening up the possibility of studying extremely deep and efficient architectures. Note: A full paper extending this study is available at http://arxiv.org/abs/1507.06228, with additional references, experiments and analysis.", "title": "" }, { "docid": "26f2e3918eb624ce346673d10b5d2eb7", "text": "We consider generation and comprehension of natural language referring expression for objects in an image. Unlike generic image captioning which lacks natural standard evaluation criteria, quality of a referring expression may be measured by the receivers ability to correctly infer which object is being described. Following this intuition, we propose two approaches to utilize models trained for comprehension task to generate better expressions. 
First, we use a comprehension module trained on human-generated expressions, as a critic of referring expression generator. The comprehension module serves as a differentiable proxy of human evaluation, providing training signal to the generation module. Second, we use the comprehension model in a generate-and-rerank pipeline, which chooses from candidate expressions generated by a model according to their performance on the comprehension task. We show that both approaches lead to improved referring expression generation on multiple benchmark datasets.", "title": "" }, { "docid": "8481bf05a0afc1de516d951474fb9d92", "text": "We propose an approach to Multitask Learning (MTL) to make deep learning models faster and lighter for applications in which multiple tasks need to be solved simultaneously, which is particularly useful in embedded, real-time systems. We develop a multitask model for both Object Detection and Semantic Segmentation and analyze the challenges that appear during its training. Our multitask network is 1.6x faster, lighter and uses less memory than deploying the single-task models in parallel. We conclude that MTL has the potential to give superior performance in exchange of a more complex training process that introduces challenges not present in single-task models.", "title": "" }, { "docid": "36b6eb29650479d45b8b0479d6fc0371", "text": "Cognizant of the research gap in the theorization of mobile learning, this paper conceptually explores how the theories and methodology of self-regulated learning (SRL), an active area in contemporary educational psychology, are inherently suited to address the issues originating from the defining characteristics of mobile learning: enabling student-centred, personal, and ubiquitous learning. These characteristics provide some of the conditions for learners to learn anywhere and anytime, and thus, entail learners to be motivated and to be able to self-regulate their own learning. We propose an analytic SRL model of mobile learning as a conceptual framework for understanding mobile learning, in which the notion of self-regulation as agency is at the core. The rationale behind this model is built on our recognition of the challenges in the current conceptualization of the mechanisms and processes of mobile learning, and the inherent relationship between mobile learning and SRL. We draw on work in a 3-year research project in developing and implementing a mobile learning environment in elementary science classes in Singapore to illustrate the application of SRL theories and methodology to understand and analyse mobile learning.", "title": "" }, { "docid": "90cfe22d4e436e9caa61a2ac198cb7f7", "text": "Deep Neural Networks (DNNs) are fast becoming ubiquitous for their ability to attain good accuracy in various machine learning tasks. A DNN’s architecture (i.e., its hyper-parameters) broadly determines the DNN’s accuracy and performance, and is often confidential. Attacking a DNN in the cloud to obtain its architecture can potentially provide major commercial value. Further, attaining a DNN’s architecture facilitates other, existing DNN attacks. This paper presents Cache Telepathy: a fast and accurate mechanism to steal a DNN’s architecture using the cache side channel. Our attack is based on the insight that DNN inference relies heavily on tiled GEMM (Generalized Matrix Multiply), and that DNN architecture parameters determine the number of GEMM calls and the dimensions of the matrices used in the GEMM functions. 
Such information can be leaked through the cache side channel. This paper uses Prime+Probe and Flush+Reload to attack VGG and ResNet DNNs running OpenBLAS and Intel MKL libraries. Our attack is effective in helping obtain the architectures by very substantially reducing the search space of target DNN architectures. For example, for VGG using OpenBLAS, it reduces the search space from more than 1035 architectures to just 16.", "title": "" }, { "docid": "7bea83a1ed940aa68bc67b5d046cf015", "text": "Natural languages are full of collocations, recurrent combinations of words that co-occur more often than expected by chance and that correspond to arbitrary word usages. Recent work in lexicography indicates that collocations are pervasive in English; apparently, they are common in all types of writing, including both technical and nontechnical genres. Several approaches have been proposed to retrieve various types of collocations from the analysis of large samples of textual data. These techniques automatically produce large numbers of collocations along with statistical figures intended to reflect the relevance of the associations. However, noue of these techniques provides functional information along with the collocation. Also, the results produced often contained improper word associations reflecting some spurious aspect of the training corpus that did not stand for true collocations. In this paper, we describe a set of techniques based on statistical methods for retrieving and identifying collocations from large textual corpora. These techniques produce a wide range of collocations and are based on some original filtering methods that allow the production of richer and higher-precision output. These techniques have been implemented and resulted in a lexicographic tool, Xtract. The techniques are described and some results are presented on a 10 million-word corpus of stock market news reports. A lexicographic evaluation of Xtract as a collocation retrieval tool has been made, and the estimated precision of Xtract is 80%.", "title": "" }, { "docid": "4d9c77845346d310d5b262e75d9cedba", "text": "Distributed database technology is expected to have a significant impact on data processing in the upcoming years. Today’s business environment has an increasing need for distributed database and Client/server applications as the desire for consistent, scalable, reliable and accessible information is steadily growing. Distributed processing is an effective way to improve reliability and performance of a database system. Distribution of data is a collection of fragmentation, allocation and replication processes. Previous research works provided fragmentation solution based on empirical data about the type and frequency of the queries submitted to a centralized system. These solutions are not suitable at the initial stage of a database design for a distributed system. The purpose of this work is to present an introduction to Distributed Databases which are becoming very popular now days with the description of distributed database environment, fragmentation and horizontal fragmentation technique. Horizontal fragmentation has an important impact in improving the applications performance that is strongly affected by distributed databases design phase. In this report, we have presented a fragmentation technique that can be applied at the initial stage as well as in later stages of a distributed database system for partitioning the relations. Allocation of fragments is done simultaneously in the algorithm. 
Result shows that proposed technique can solve initial fragmentation problem of relational databases for distributed systems properly.", "title": "" }, { "docid": "ed47a1a6c193b6c3699805f5be641555", "text": "Wind power generation differs from conventional thermal generation due to the stochastic nature of wind. Thus wind power forecasting plays a key role in dealing with the challenges of balancing supply and demand in any electricity system, given the uncertainty associated with the wind farm power output. Accurate wind power forecasting reduces the need for additional balancing energy and reserve power to integrate wind power. Wind power forecasting tools enable better dispatch, scheduling and unit commitment of thermal generators, hydro plant and energy storage plant and more competitive market trading as wind power ramps up and down on the grid. This paper presents an in-depth review of the current methods and advances in wind power forecasting and prediction. Firstly, numerical wind prediction methods from global to local scales, ensemble forecasting, upscaling and downscaling processes are discussed. Next the statistical and machine learning approach methods are detailed. Then the techniques used for benchmarking and uncertainty analysis of forecasts are overviewed, and the performance of various approaches over different forecast time horizons is examined. Finally, current research activities, challenges and potential future developments are appraised. 2011 Elsevier Ltd. All rights reserved.", "title": "" } ]
scidocsrr
9f3f6a7f77273a5f2de21be1d5f5ae3d
Smart Grid Cybersecurity: Standards and Technical Countermeasures
[ { "docid": "8d21369604ad890704d535785c8e3171", "text": "With the integration of advanced computing and communication technologies, smart grid is considered as the next-generation power system, which promises self healing, resilience, sustainability, and efficiency to the energy critical infrastructure. The smart grid innovation brings enormous challenges and initiatives across both industry and academia, in which the security issue emerges to be a critical concern. In this paper, we present a survey of recent security advances in smart grid, by a data driven approach. Compared with existing related works, our survey is centered around the security vulnerabilities and solutions within the entire lifecycle of smart grid data, which are systematically decomposed into four sequential stages: 1) data generation; 2) data acquisition; 3) data storage; and 4) data processing. Moreover, we further review the security analytics in smart grid, which employs data analytics to ensure smart grid security. Finally, an effort to shed light on potential future research concludes this paper.", "title": "" } ]
[ { "docid": "081e474c622f122832490a54657e5051", "text": "To defend a network from intrusion is a generic problem of all time. It is important to develop a defense mechanism to secure the network from anomalous activities. This paper presents a comprehensive survey of methods and systems introduced by researchers in the past two decades to protect network resources from intrusion. A detailed pros and cons analysis of these methods and systems is also reported in this paper. Further, this paper also provides a list of issues and research challenges in this evolving field of research. We believe that, this knowledge will help to create a defense system.", "title": "" }, { "docid": "8b6d5e7526e58ce66cf897d17b094a91", "text": "Regression testing is an expensive maintenance process used to revalidate modified software. Regression test selection (RTS) techniques try to lower the cost of regression testing by selecting and running a subset of the existing test cases. Many such techniques have been proposed and initial studies show that they can produce savings. We believe, however, that issues such as the frequency with which testing is done have a strong effect on the behavior of these techniques. Therefore, we conducted an experiment to assess the effects of test application frequency on the costs and benefits of regression test selection techniques. Our results expose essential tradeoffs that should be considered when using these techniques over a series of software releases.", "title": "" }, { "docid": "5491dd183e386ada396b237a41d907aa", "text": "The technique of scale multiplication is analyzed in the framework of Canny edge detection. A scale multiplication function is defined as the product of the responses of the detection filter at two scales. Edge maps are constructed as the local maxima by thresholding the scale multiplication results. The detection and localization criteria of the scale multiplication are derived. At a small loss in the detection criterion, the localization criterion can be much improved by scale multiplication. The product of the two criteria for scale multiplication is greater than that for a single scale, which leads to better edge detection performance. Experimental results are presented.", "title": "" }, { "docid": "046f2b6ec65903d092f8576cd210d7ee", "text": "Aim\nThe principal study objective was to investigate the pharmacokinetic characteristics and determine the absolute bioavailability and tolerability of a new sublingual (SL) buprenorphine wafer.\n\n\nMethods\nThe study was of open label, two-way randomized crossover design in 14 fasted healthy male and female volunteers. Each participant, under naltrexone block, received either a single intravenous dose of 300 mcg of buprenorphine as a constant infusion over five minutes or a sublingual dose of 800 mcg of buprenorphine in two treatment periods separated by a seven-day washout period. Blood sampling for plasma drug assay was taken on 16 occasions throughout a 48-hour period (predose and at 10, 20, 30, and 45 minutes, 1, 1.5, 2, 2.5, 3, 4, 6, 8, 12, 24 and 48 hours postdose). The pharmacokinetic parameters were determined by noncompartmental analyses of the buprenorphine plasma concentration-time profiles. Local tolerability was assessed using modified Likert scales.\n\n\nResults\nThe absolute bioavailability of SL buprenorphine was 45.4% (95% confidence interval = 37.8-54.3%). The median times to peak plasma concentration were 10 minutes and 60 minutes after IV and SL administration, respectively. 
The peak plasma concentration was 2.65 ng/mL and 0.74 ng/mL after IV and SL administration, respectively. The half-lives were 9.1 hours and 11.2 hours after IV and SL administration, respectively. The wafer had very good local tolerability.\n\n\nConclusions\nThis novel sublingual buprenorphine wafer has high bioavailability and reduced Tmax compared with other SL tablet formulations of buprenorphine. The wafer displayed very good local tolerability. The results suggest that this novel buprenorphine wafer may provide enhanced clinical utility in the management of both acute and chronic pain.\n\n\nBackground\nBuprenorphine is approved for use in pain management and opioid addiction. Sublingual administration of buprenorphine is a simple and noninvasive route of administration and has been available for many years. Improved sublingual formulations may lead to increased utilization of this useful drug for acute and chronic pain management.", "title": "" }, { "docid": "d32bdf27607455fb3416a4e3e3492f01", "text": "Photo-editing software restricts the control of objects in a photograph to the 2D image plane. We present a method that enables users to perform the full range of 3D manipulations, including scaling, rotation, translation, and nonrigid deformations, to an object in a photograph. As 3D manipulations often reveal parts of the object that are hidden in the original photograph, our approach uses publicly available 3D models to guide the completion of the geometry and appearance of the revealed areas of the object. The completion process leverages the structure and symmetry in the stock 3D model to factor out the effects of illumination, and to complete the appearance of the object. We demonstrate our system by producing object manipulations that would be impossible in traditional 2D photo-editing programs, such as turning a car over, making a paper-crane flap its wings, or manipulating airplanes in a historical photograph to change its story.", "title": "" }, { "docid": "bf8000b2119a5107041abf09762668ab", "text": "With the popularity of social media, people are more and more interested in mining opinions from it. Learning from social media not only has value for research, but also good for business use. RepLab 2012 had Profiling task and Monitoring task to understand the company related tweets. Profiling task aims to determine the Ambiguity and Polarity for tweets. In order to determine this Ambiguity and Polarity for the tweets in RepLab 2012 Profiling task, we built Google Adwords Filter for Ambiguity and several approaches like SentiWordNet, Happiness Score and Machine Learning for Polarity. We achieved good performance in the training set, and the performance in test set is also acceptable.", "title": "" }, { "docid": "8f6682ddcc435c95ae3ef35ebb84de7f", "text": "A series of 59 patients was treated and operated on for pain felt over the area of the ischial tuberosity and radiating down the back of the thigh. This condition was labeled as the \"hamstring syndrome.\" Pain was typically incurred by assuming a sitting position, stretching the affected posterior thigh, and running fast. The patients usually had a history of recurrent hamstring \"tears.\" Their symptoms were caused by the tight, tendinous structures of the lateral insertion area of the hamstring muscles to the ischial tuberosity. 
Upon division of these structures, complete relief was obtained in 52 of the 59 patients.", "title": "" }, { "docid": "9bd08edae8ab7b20aab40e24f6bdf968", "text": "Personalized Web browsing and search hope to provide Web information that matches a user’s personal interests and thus provide more effective and efficient information access. A key feature in developing successful personalized Web applications is to build user profiles that accurately represent a user’ s interests. The main goal of this research is to investigate techniques that implicitly build ontology-based user profiles. We build the profiles without user interaction, automatically monitoring the user’s browsing habits. After building the initial profile from visited Web pages, we investigate techniques to improve the accuracy of the user profile. In particular, we focus on how quickly we can achieve profile stability, how to identify the most important concepts, the effect of depth in the concept-hierarchy on the importance of a concept, and how many levels from the hierarchy should be used to represent the user. Our major findings are that ranking the concepts in the profiles by number of documents assigned to them rather than by accumulated weights provides better profile accuracy. We are also able to identify stable concepts in the profile, thus allowing us to detect long-term user interests. We found that the accuracy of concept detection decreases as we descend them in the concept hierarchy, however this loss of accuracy must be balanced against the detailed view of the user available only through the inclusion of lower-level concepts.", "title": "" }, { "docid": "90d5aca626d61806c2af3cc551b28c90", "text": "This paper presents two novel approaches to increase performance bounds of image steganography under the criteria of minimizing distortion. First, in order to efficiently use the images’ capacities, we propose using parallel images in the embedding stage. The result is then used to prove sub-optimality of the message distribution technique used by all cost based algorithms including HUGO, S-UNIWARD, and HILL. Second, a new distribution approach is presented to further improve the security of these algorithms. Experiments show that this distribution method avoids embedding in smooth regions and thus achieves a better performance, measured by state-of-the-art steganalysis, when compared with the current used distribution.", "title": "" }, { "docid": "cdf2235bea299131929700406792452c", "text": "Real-time detection of traffic signs, the task of pinpointing a traffic sign's location in natural images, is a challenging computer vision task of high industrial relevance. Various algorithms have been proposed, and advanced driver assistance systems supporting detection and recognition of traffic signs have reached the market. Despite the many competing approaches, there is no clear consensus on what the state-of-the-art in this field is. This can be accounted to the lack of comprehensive, unbiased comparisons of those methods. We aim at closing this gap by the “German Traffic Sign Detection Benchmark” presented as a competition at IJCNN 2013 (International Joint Conference on Neural Networks). We introduce a real-world benchmark data set for traffic sign detection together with carefully chosen evaluation metrics, baseline results, and a web-interface for comparing approaches. 
In our evaluation, we separate sign detection from classification, but still measure the performance on relevant categories of signs to allow for benchmarking specialized solutions. The considered baseline algorithms represent some of the most popular detection approaches such as the Viola-Jones detector based on Haar features and a linear classifier relying on HOG descriptors. Further, a recently proposed problem-specific algorithm exploiting shape and color in a model-based Houghlike voting scheme is evaluated. Finally, we present the best-performing algorithms of the IJCNN competition.", "title": "" }, { "docid": "95be4f5132cde3c637c5ee217b5c8405", "text": "In recent years, information communication and computation technologies are deeply converging, and various wireless access technologies have been successful in deployment. It can be predicted that the upcoming fifthgeneration mobile communication technology (5G) can no longer be defined by a single business model or a typical technical characteristic. 5G is a multi-service and multitechnology integrated network, meeting the future needs of a wide range of big data and the rapid development of numerous businesses, and enhancing the user experience by providing smart and customized services. In this paper, we propose a cloud-based wireless network architecture with four components, i.e., mobile cloud, cloud-based radio access network (Cloud RAN), reconfigurable network and big data centre, which is capable of providing a virtualized, reconfigurable, smart wireless network.", "title": "" }, { "docid": "db26d71ec62388e5367eb0f2bb45ad40", "text": "The linear programming (LP) is one of the most popular necessary optimization tool used for data analytics as well as in various scientific fields. However, the current state-of-art algorithms suffer from scalability issues when processing Big Data. For example, the commercial optimization software IBM CPLEX cannot handle an LP with more than hundreds of thousands variables or constraints. Existing algorithms are fundamentally hard to scale because they are inevitably too complex to parallelize. To address the issue, we study the possibility of using the Belief Propagation (BP) algorithm as an LP solver. BP has shown remarkable performances on various machine learning tasks and it naturally lends itself to fast parallel implementations. Despite this, very little work has been done in this area. In particular, while it is generally believed that BP implicitly solves an optimization problem, it is not well understood under what conditions the solution to a BP converges to that of a corresponding LP formulation. Our efforts consist of two main parts. First, we perform a theoretic study and establish the conditions in which BP can solve LP [1,2]. Although there has been several works studying the relation between BP and LP for certain instances, our work provides a generic condition unifying all prior works for generic LP. Second, utilizing our theoretical results, we develop a practical BP-based parallel algorithms for solving generic LPs, and it shows 71x speed up while sacrificing only 0.1% accuracy compared to the state-of-art exact algorithm [3, 4]. As a result of the study, the PIs have published two conference papers [1,3] and two follow-up journal papers [3,4] are under submission. We refer the readers to our published work [1,3] for details. Introduction: The main goal of our research is to develop a distributed and parallel algorithm for large-scale linear optimization (or programming). 
Considering the popularity and importance of linear optimization in various fields, the proposed method has great potential to be applied to various big data analytics. Our approach is based on the Belief Propagation (BP) algorithm, which has shown remarkable performance on various machine learning tasks and naturally lends itself to fast parallel implementations. Our key contributions are summarized below: 1) We establish key theoretic foundations in the area of Belief Propagation. In particular, we show that BP converges to the solution of LP if some sufficient conditions are satisfied. Our conditions not only cover various prior studies including maximum weight matching, min-cost network flow, shortest path, etc., but also discover new applications such as vertex cover and traveling salesman. 2) While the theoretic study provides understanding of the nature of BP, it falls short in practice due to slow convergence speed, oscillation and wrong convergence. To make BP-based algorithms more practical, we design a BP-based framework which uses BP as a ‘weight transformer’ to resolve the convergence issue of BP. We refer the readers to our published work [1, 3] for details. The rest of the report contains a summary of our work that appeared in UAI (Uncertainty in Artificial Intelligence) and the IEEE Conference on Big Data [1,3], and follow-up work [2,4] under submission to major journals. Experiment: We first establish theoretical conditions under which Belief Propagation (BP) can solve Linear Programming (LP), and second provide a practical distributed/parallel BP-based framework solving generic optimizations. We demonstrate the wide applicability of our approach via popular combinatorial optimizations including maximum weight matching, shortest path, traveling salesman, cycle packing and vertex cover. Results and Discussion: Our contribution consists of two parts: Study 1 [1,2] looks at the theoretical conditions under which BP converges to the solution of LP. Our theoretical results unify almost all prior results about BP for combinatorial optimization. Furthermore, our conditions provide a guideline for designing distributed algorithms for combinatorial optimization problems. Study 2 [3,4] focuses on building an optimal framework based on the theory of Study 1 for boosting the practical performance of BP. Our framework is generic; thus, it can be easily extended to various optimization problems. We also compare the empirical performance of our framework to other heuristics and state-of-the-art algorithms for several combinatorial optimization problems. ------------------------------------------------------- Study 1 ------------------------------------------------------- We first introduce the background for our contributions. A joint distribution of n (binary) variables x = [x_i] ∈ {0,1}^n is called a graphical model (GM) if it factorizes as Pr[x] ∝ ∏_{α ∈ F} ψ_α(x_α) for x ∈ {0,1}^n, where the ψ_α are some non-negative functions, so-called factors; F is a collection of subsets (each α ∈ F is a subset of {1,⋯,n} with |α| ≥ 2); and x_α is the projection of x onto the dimensions included in α. An assignment x* is called a maximum-a-posteriori (MAP) assignment if x* maximizes the probability. The following figure depicts the graphical relation between factors α ∈ F and variables x_i.
Figure 1: Factor graph for the graphical model with factors α1 = {1,3}, α2 = {1,2,4}, α3 = {2,3,4}. Now we introduce the algorithm, (max-product) BP, for approximating the MAP assignment in a graphical model. BP is an iterative procedure; at each iteration t, there are four messages between each variable x_i and every associated factor α ∈ F_i, where F_i := {α ∈ F : i ∈ α}. The messages are updated at each iteration; given the messages, BP marginal beliefs are computed, and BP outputs the approximated MAP assignment x^BP = [x_i^BP] based on these beliefs. Now, we are ready to introduce the main result of Study 1. Consider the following GM: for x = [x_i] ∈ {0,1}^n and w = [w_i] ∈ R^n, the factor function ψ_α for α ∈ F is defined as the indicator of a set of linear constraints specified by some matrices and vectors. Consider the Linear Programming (LP) corresponding to the above GM. One can easily observe that the MAP assignment for the GM corresponds to the (optimal) solution of the above LP if the LP has an integral solution x* ∈ {0,1}^n. The following theorem is our main result of Study 1, which provides sufficient conditions so that BP can indeed find the LP solution. Theorem 1 can be applied to several combinatorial optimization problems including matching, network flow, shortest path, vertex cover, etc. See [1,2] for the detailed proof of Theorem 1 and its applications to various combinatorial optimizations including maximum weight matching, min-cost network flow, shortest path, vertex cover and traveling salesman. ------------------------------------------------------- Study 2 ------------------------------------------------------- Study 2 mainly focuses on providing a distributed generic BP-based combinatorial optimization solver which has high accuracy and low computational complexity. In summary, the key contributions of Study 2 are as follows: 1) Practical BP-based algorithm design: To the best of our knowledge, this paper is the first to propose a generic concept for designing BP-based algorithms that solve large-scale combinatorial optimization problems. 2) Parallel implementation: We also demonstrate that the algorithm is easily parallelizable. For the maximum weight matching problem, this translates to a 71x speedup while sacrificing only 0.1% accuracy compared to the state-of-the-art exact algorithm. 3) Extensive empirical evaluation: We evaluate our algorithms on three different combinatorial optimization problems on diverse synthetic and real-world datasets. Our evaluation shows that the framework achieves higher accuracy compared to other known heuristics. Designing a BP-based algorithm for some problem is easy in general. However, (a) it might diverge or converge very slowly, (b) even if it converges quickly, the BP decision might not be correct, and (c) even worse, BP might produce an infeasible solution, i.e., it does not satisfy the constraints of the problem.
Namely, the first and second phases are respectively designed for ‘BP weight transforming’ and ‘post-processing’. Note that our evaluation mainly uses the maximum weight matching problem. The formal description of the maximum weight matching (MWM) problem is as follows: Given a graph � = (�,�) and edge weights � = [��] ∈ �|�|, it finds a set of edges such that each vertex is connected to at most one edge in the set and the sum of edge weights in the set is maximized. The problem is formulated as the following IP (Integer Programming): where δδ(�) is the set of edges incident to vertex � ∈ �. In the following paragraphs, we describe the two phases in more detail in reverse order. We first describe the post-processing phase. As we mentioned, one of the main issue of a BP-based algorithm is that the decision on BP beliefs might give an infeasible solution. To resolve the issue, we use post-processing by utilizing existing heuristics to the given problem that find a feasible solution. Applying post-processing ensures that the solution is at least feasible. In addition, our key idea is to replace the original weights by the logarithm of BP beliefs, i.e. function of (3). After th", "title": "" }, { "docid": "ce8f000fa9a9ec51b8b2b63e98cec5fb", "text": "The Berlin Brain-Computer Interface (BBCI) project develops a noninvasive BCI system whose key features are 1) the use of well-established motor competences as control paradigms, 2) high-dimensional features from 128-channel electroencephalogram (EEG), and 3) advanced machine learning techniques. As reported earlier, our experiments demonstrate that very high information transfer rates can be achieved using the readiness potential (RP) when predicting the laterality of upcoming left- versus right-hand movements in healthy subjects. A more recent study showed that the RP similarly accompanies phantom movements in arm amputees, but the signal strength decreases with longer loss of the limb. In a complementary approach, oscillatory features are used to discriminate imagined movements (left hand versus right hand versus foot). In a recent feedback study with six healthy subjects with no or very little experience with BCI control, three subjects achieved an information transfer rate above 35 bits per minute (bpm), and further two subjects above 24 and 15 bpm, while one subject could not achieve any BCI control. These results are encouraging for an EEG-based BCI system in untrained subjects that is independent of peripheral nervous system activity and does not rely on evoked potentials even when compared to results with very well-trained subjects operating other BCI systems.", "title": "" }, { "docid": "36b4097c3c394352dc2b7ac25ff4948f", "text": "An important task of opinion mining is to extract people’s opinions on features of an entity. For example, the sentence, “I love the GPS function of Motorola Droid” expresses a positive opinion on the “GPS function” of the Motorola phone. “GPS function” is the feature. This paper focuses on mining features. Double propagation is a state-of-the-art technique for solving the problem. It works well for medium-size corpora. However, for large and small corpora, it can result in low precision and low recall. To deal with these two problems, two improvements based on part-whole and “no” patterns are introduced to increase the recall. Then feature ranking is applied to the extracted feature candidates to improve the precision of the top-ranked candidates. 
We rank feature candidates by feature importance which is determined by two factors: feature relevance and feature frequency. The problem is formulated as a bipartite graph and the well-known web page ranking algorithm HITS is used to find important features and rank them high. Experiments on diverse real-life datasets show promising results.", "title": "" }, { "docid": "268e434cedbf5439612b2197be73a521", "text": "We have recently developed a chaotic gas turbine whose rotational motion might simulate turbulent Rayleigh-Bénard convection. The nondimensionalized equations of motion of our turbine are expressed as a star network of N Lorenz subsystems, referred to as augmented Lorenz equations. Here, we propose an application of the augmented Lorenz equations to chaotic cryptography, as a type of symmetric secret-key cryptographic method, wherein message encryption is performed by superimposing the chaotic signal generated from the equations on a plaintext in much the same way as in one-time pad cryptography. The ciphertext is decrypted by unmasking the chaotic signal precisely reproduced with a secret key consisting of 2N-1 (e.g., N=101) real numbers that specify the augmented Lorenz equations. The transmitter and receiver are assumed to be connected via both a quantum communication channel on which the secret key is distributed using a quantum key distribution protocol and a classical data communication channel on which the ciphertext is transmitted. We discuss the security and feasibility of our cryptographic method.", "title": "" }, { "docid": "62ff5888ad0c8065097603da8ff79cd6", "text": "Modern Internet systems often combine different applications (e.g., DNS, web, and database), span different administrative domains, and function in the context of network mechanisms like tunnels, VPNs, NATs, and overlays. Diagnosing these complex systems is a daunting challenge. Although many diagnostic tools exist, they are typically designed for a specific layer (e.g., traceroute) or application, and there is currently no tool for reconstructing a comprehensive view of service behavior. In this paper we propose X-Trace, a tracing framework that provides such a comprehensive view for systems that adopt it. We have implemented X-Trace in several protocols and software systems, and we discuss how it works in three deployed scenarios: DNS resolution, a three-tiered photo-hosting website, and a service accessed through an overlay network.", "title": "" }, { "docid": "3a4a875dc1cc491d8a7ce373043b3937", "text": "In many outlier detection tasks, only training data belonging to one class, i.e., the positive class, is available. The task is then to predict a new data point as belonging either to the positive class or to the negative class, in which case the data point is considered an outlier. For this task, we propose a novel corrupted Generative Adversarial Network (CorGAN). In the adversarial process of training CorGAN, the Generator generates outlier samples for the negative class, and the Discriminator is trained to distinguish the positive training data from the generated negative data. The proposed framework is evaluated using an image dataset and a real-world network intrusion dataset. Our outlier-detection method achieves state-of-the-art performance on both tasks. 
Keywords—Outlier detection, generative adversary networks, semi-supervised learning.", "title": "" }, { "docid": "20b7da7c9f630f12b0ef86d92ed7aa0f", "text": "In this paper, a Rectangular Dielectric Resonator Antenna (RDRA) with a modified feeding line is designed and investigated at 28GHz. The modified feed line is designed to excite the DR with relative permittivity of 10 which contributes to a wide bandwidth operation. The proposed single RDRA has been fabricated and mounted on a RT/Duroid 5880 (εr = 2.2 and tanδ = 0.0009) substrate. The optimized single element has been applied to array structure to improve the gain and achieve the required gain performance. The radiation pattern, impedance bandwidth and gain are simulated and measured accordingly. The number of elements and element spacing are studied for an optimum performance. The proposed antenna obtains a reflection coefficient response from 27.0GHz to 29.1GHz which cover the desired frequency band. This makes the proposed antenna achieve 2.1GHz impedance bandwidth and gain of 12.1 dB. Thus, it has potential for millimeter wave and 5G applications.", "title": "" }, { "docid": "b01436481aa77ebe7538e760132c5f3c", "text": "We propose two algorithms based on Bregman iteration and operator splitting technique for nonlocal TV regularization problems. The convergence of the algorithms is analyzed and applications to deconvolution and sparse reconstruction are presented.", "title": "" }, { "docid": "34f83c7dde28c720f82581804accfa71", "text": "The main threats to human health from heavy metals are associated with exposure to lead, cadmium, mercury and arsenic. These metals have been extensively studied and their effects on human health regularly reviewed by international bodies such as the WHO. Heavy metals have been used by humans for thousands of years. Although several adverse health effects of heavy metals have been known for a long time, exposure to heavy metals continues, and is even increasing in some parts of the world, in particular in less developed countries, though emissions have declined in most developed countries over the last 100 years. Cadmium compounds are currently mainly used in re-chargeable nickel-cadmium batteries. Cadmium emissions have increased dramatically during the 20th century, one reason being that cadmium-containing products are rarely re-cycled, but often dumped together with household waste. Cigarette smoking is a major source of cadmium exposure. In non-smokers, food is the most important source of cadmium exposure. Recent data indicate that adverse health effects of cadmium exposure may occur at lower exposure levels than previously anticipated, primarily in the form of kidney damage but possibly also bone effects and fractures. Many individuals in Europe already exceed these exposure levels and the margin is very narrow for large groups. Therefore, measures should be taken to reduce cadmium exposure in the general population in order to minimize the risk of adverse health effects. The general population is primarily exposed to mercury via food, fish being a major source of methyl mercury exposure, and dental amalgam. The general population does not face a significant health risk from methyl mercury, although certain groups with high fish consumption may attain blood levels associated with a low risk of neurological damage to adults. 
Since there is a risk to the fetus in particular, pregnant women should avoid a high intake of certain fish, such as shark, swordfish and tuna; fish (such as pike, walleye and bass) taken from polluted fresh waters should especially be avoided. There has been a debate on the safety of dental amalgams and claims have been made that mercury from amalgam may cause a variety of diseases. However, there are no studies so far that have been able to show any associations between amalgam fillings and ill health. The general population is exposed to lead from air and food in roughly equal proportions. During the last century, lead emissions to ambient air have caused considerable pollution, mainly due to lead emissions from petrol. Children are particularly susceptible to lead exposure due to high gastrointestinal uptake and the permeable blood-brain barrier. Blood levels in children should be reduced below the levels so far considered acceptable, recent data indicating that there may be neurotoxic effects of lead at lower levels of exposure than previously anticipated. Although lead in petrol has dramatically decreased over the last decades, thereby reducing environmental exposure, phasing out any remaining uses of lead additives in motor fuels should be encouraged. The use of lead-based paints should be abandoned, and lead should not be used in food containers. In particular, the public should be aware of glazed food containers, which may leach lead into food. Exposure to arsenic is mainly via intake of food and drinking water, food being the most important source in most populations. Long-term exposure to arsenic in drinking-water is mainly related to increased risks of skin cancer, but also some other cancers, as well as other skin lesions such as hyperkeratosis and pigmentation changes. Occupational exposure to arsenic, primarily by inhalation, is causally associated with lung cancer. Clear exposure-response relationships and high risks have been observed.", "title": "" } ]
scidocsrr
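The first abstract in the record above describes a belief-propagation (BP) approach to maximum weight matching in which possibly infeasible BP decisions are repaired by a post-processing heuristic applied to the logarithm of the BP beliefs. A minimal sketch of that kind of post-processing is given below; it is not that paper's implementation, and the greedy rule, the edge-list format, and the use of generic per-edge scores in place of log-beliefs are assumptions made only for illustration.

```python
# Hypothetical sketch: greedy post-processing that turns per-edge scores
# (e.g., log BP beliefs) into a feasible matching. Not the paper's code.

def greedy_matching(edges):
    """edges: list of (u, v, score). Returns a feasible set of edges
    (each vertex used at most once), preferring high-score edges."""
    matched = set()        # vertices already covered by a chosen edge
    solution = []
    for u, v, score in sorted(edges, key=lambda e: e[2], reverse=True):
        if u not in matched and v not in matched:
            solution.append((u, v, score))
            matched.update((u, v))
    return solution

if __name__ == "__main__":
    # toy graph; the scores stand in for log BP beliefs
    edges = [("a", "b", 2.0), ("b", "c", 3.5), ("c", "d", 1.0), ("a", "d", 2.5)]
    print(greedy_matching(edges))   # [('b', 'c', 3.5), ('a', 'd', 2.5)]
```

Any heuristic that returns a feasible matching could be substituted here; greedy selection is shown only because it is the simplest rule that guarantees each vertex is matched at most once.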
34f46ace4af41969e4e324ca76d8e028
Gut brain axis: diet microbiota interactions and implications for modulation of anxiety and depression.
[ { "docid": "d348d178b17d63ae49cfe6fd4e052758", "text": "BACKGROUND & AIMS\nChanges in gut microbiota have been reported to alter signaling mechanisms, emotional behavior, and visceral nociceptive reflexes in rodents. However, alteration of the intestinal microbiota with antibiotics or probiotics has not been shown to produce these changes in humans. We investigated whether consumption of a fermented milk product with probiotic (FMPP) for 4 weeks by healthy women altered brain intrinsic connectivity or responses to emotional attention tasks.\n\n\nMETHODS\nHealthy women with no gastrointestinal or psychiatric symptoms were randomly assigned to groups given FMPP (n = 12), a nonfermented milk product (n = 11, controls), or no intervention (n = 13) twice daily for 4 weeks. The FMPP contained Bifidobacterium animalis subsp Lactis, Streptococcus thermophiles, Lactobacillus bulgaricus, and Lactococcus lactis subsp Lactis. Participants underwent functional magnetic resonance imaging before and after the intervention to measure brain response to an emotional faces attention task and resting brain activity. Multivariate and region of interest analyses were performed.\n\n\nRESULTS\nFMPP intake was associated with reduced task-related response of a distributed functional network (49% cross-block covariance; P = .004) containing affective, viscerosensory, and somatosensory cortices. Alterations in intrinsic activity of resting brain indicated that ingestion of FMPP was associated with changes in midbrain connectivity, which could explain the observed differences in activity during the task.\n\n\nCONCLUSIONS\nFour-week intake of an FMPP by healthy women affected activity of brain regions that control central processing of emotion and sensation.", "title": "" }, { "docid": "bb008d90a8e5ea4262afc0cf784ccbb8", "text": "*Correspondence to: Michaël Messaoudi; Email: mmessaoudi@etap-lab.com In a recent clinical study, we demonstrated in the general population that Lactobacillus helveticus R0052 and Bifidobacterium longum R0175 (PF) taken in combination for 30 days decreased the global scores of hospital anxiety and depression scale (HADs), and the global severity index of the Hopkins symptoms checklist (HSCL90), due to the decrease of the sub-scores of somatization, depression and angerhostility spheres. Therefore, oral intake of PF showed beneficial effects on anxiety and depression related behaviors in human volunteers. From there, it is interesting to focus on the role of this probiotic formulation in the subjects with the lowest urinary free cortisol levels at baseline. This addendum presents a secondary analysis of the effects of PF in a subpopulation of 25 subjects with urinary free cortisol (UFC) levels less than 50 ng/ml at baseline, on psychological distress based on the percentage of change of the perceived stress scale (PSs), the HADs and the HSCL-90 scores between baseline and follow-up. 
The data show that PF improves the same scores as in the general population (the HADs global score, the global severity index of the HSCL-90 and three of its sub-scores, i.e., somatization, depression and anger-hostility), as well as the PSs score and three other subscores of the HSCL-90, i.e., “obsessive compulsive,” “anxiety” and “paranoidideation.” Moreover, in the HSCL-90, Beneficial psychological effects of a probiotic formulation (Lactobacillus helveticus R0052 and Bifidobacterium longum R0175) in healthy human volunteers", "title": "" }, { "docid": "92d271da0c5dff6e130e55168c64d2b0", "text": "New therapeutic targets for noncognitive reductions in energy intake, absorption, or storage are crucial given the worldwide epidemic of obesity. The gut microbial community (microbiota) is essential for processing dietary polysaccharides. We found that conventionalization of adult germ-free (GF) C57BL/6 mice with a normal microbiota harvested from the distal intestine (cecum) of conventionally raised animals produces a 60% increase in body fat content and insulin resistance within 14 days despite reduced food intake. Studies of GF and conventionalized mice revealed that the microbiota promotes absorption of monosaccharides from the gut lumen, with resulting induction of de novo hepatic lipogenesis. Fasting-induced adipocyte factor (Fiaf), a member of the angiopoietin-like family of proteins, is selectively suppressed in the intestinal epithelium of normal mice by conventionalization. Analysis of GF and conventionalized, normal and Fiaf knockout mice established that Fiaf is a circulating lipoprotein lipase inhibitor and that its suppression is essential for the microbiota-induced deposition of triglycerides in adipocytes. Studies of Rag1-/- animals indicate that these host responses do not require mature lymphocytes. Our findings suggest that the gut microbiota is an important environmental factor that affects energy harvest from the diet and energy storage in the host. Data deposition: The sequences reported in this paper have been deposited in the GenBank database (accession nos. AY 667702--AY 668946).", "title": "" } ]
[ { "docid": "de8f5656f17151c43e2454aa7b8f929f", "text": "No wonder you activities are, reading will be always needed. It is not only to fulfil the duties that you need to finish in deadline time. Reading will encourage your mind and thoughts. Of course, reading will greatly develop your experiences about everything. Reading concrete mathematics a foundation for computer science is also a way as one of the collective books that gives many advantages. The advantages are not only for you, but for the other peoples with those meaningful benefits.", "title": "" }, { "docid": "e465b9a38e7649f541ab9e419103b362", "text": "Spoken language based intelligent assistants (IAs) have been developed for a number of domains but their functionality has mostly been confined to the scope of a given app. One reason is that it’s is difficult for IAs to infer a user’s intent without access to relevant context and unless explicitly implemented, context is not available across app boundaries. We describe context-aware multi-app dialog systems that can learn to 1) identify meaningful user intents; 2) produce natural language representation for the semantics of such intents; and 3) predict user intent as they engage in multi-app tasks. As part of our work we collected data from the smartphones of 14 users engaged in real-life multi-app tasks. We found that it is reasonable to group tasks into high-level intentions. Based on the dialog content, IA can generate useful phrases to describe the intention. We also found that, with readily available contexts, IAs can effectively predict user’s intents during conversation, with accuracy at 58.9%.", "title": "" }, { "docid": "73ece9a0404ecb0cf59c7c5a1f9586d7", "text": "BACKGROUND\nAlthough there is abundant evidence to recommend a physically active lifestyle, adult physical activity (PA) levels have declined over the past two decades. In order to understand why this happens, numerous studies have been conducted to uncover the reasons for people's participation in PA. Often, the measures used were not broad enough to reflect all the reasons for participation in PA. The Physical Activity and Leisure Motivation Scale (PALMS) was created to be a comprehensive tool measuring motives for participating in PA. This 40-item scale related to participation in sport and PA is designed for adolescents and adults. Five items constitute each of the eight sub-scales (mastery, enjoyment, psychological condition, physical condition, appearance, other's expectations, affiliation, competition/ego) reflecting motives for participation in PA that can be categorized as features of intrinsic and extrinsic motivation based on self-determination theory. The aim of the current study was to validate the PALMS in the cultural context of Malaysia, including to assess how well the PALMS captures the same information as the Recreational Exercise Motivation Measure (REMM).\n\n\nMETHOD\nTo do so, 502 Malaysian volunteer participants, aged 18 to 67 years (mean ± SD; 31.55 ± 11.87 years), from a variety of PA categories, including individual sports, team sports, martial arts and exercise, completed the study.\n\n\nRESULTS\nThe hypothesized 8-factor model demonstrated a good fit with the data (CMIN/DF = 2.820, NFI = 0.90, CFI = 0.91, RMSEA = 0.06). Cronbach's alpha coefficient (α = 0.79) indicated good internal consistency for the overall measure. Internal consistency for the PALMS subscales was sound, ranging from 0.78 to 0.82. 
The correlations between each PALMS sub-scale and the corresponding sub-scale on the validated REMM (the 73-item questionnaire from which the PALMS was developed) were also high and varied from 0.79 to 0.95. Also, test-retest reliability for the questionnaire sub-scales was between 0.78 and 0.94 over a 4-week period.\n\n\nCONCLUSIONS\nIn this sample, the PALMS demonstrated acceptable factor structure, internal consistency, test-retest reliability, and criterion validity. It was applicable to diverse physical activity contexts.", "title": "" }, { "docid": "6af7bb1d2a7d8d44321a5b162c9781a2", "text": "In this paper, we propose a deep metric learning (DML) approach for robust visual tracking under the particle filter framework. Unlike most existing appearance-based visual trackers, which use hand-crafted similarity metrics, our DML tracker learns a nonlinear distance metric to classify the target object and background regions using a feed-forward neural network architecture. Since there are usually large variations in visual objects caused by varying deformations, illuminations, occlusions, motions, rotations, scales, and cluttered backgrounds, conventional linear similarity metrics cannot work well in such scenarios. To address this, our proposed DML tracker first learns a set of hierarchical nonlinear transformations in the feed-forward neural network to project both the template and particles into the same feature space where the intra-class variations of positive training pairs are minimized and the interclass variations of negative training pairs are maximized simultaneously. Then, the candidate that is most similar to the template in the learned deep network is identified as the true target. Experiments on the benchmark data set including 51 challenging videos show that our DML tracker achieves a very competitive performance with the state-of-the-art trackers.", "title": "" }, { "docid": "49c1924821c326f803cefff58ca7ab67", "text": "Dynamic binary analysis is a prevalent and indispensable technique in program analysis. While several dynamic binary analysis tools and frameworks have been proposed, all suffer from one or more of: prohibitive performance degradation, a semantic gap between the analysis code and the program being analyzed, architecture/OS specificity, being user-mode only, and lacking APIs. We present DECAF, a virtual machine based, multi-target, whole-system dynamic binary analysis framework built on top of QEMU. DECAF provides Just-In-Time Virtual Machine Introspection and a plugin architecture with a simple-to-use event-driven programming interface. DECAF implements a new instruction-level taint tracking engine at bit granularity, which exercises fine control over the QEMU Tiny Code Generator (TCG) intermediate representation to accomplish on-the-fly optimizations while ensuring that the taint propagation is sound and highly precise. We perform a formal analysis of DECAF's taint propagation rules to verify that most instructions introduce neither false positives nor false negatives. We also present three platform-neutral plugins—Instruction Tracer, Keylogger Detector, and API Tracer, to demonstrate the ease of use and effectiveness of DECAF in writing cross-platform and system-wide analysis tools. 
Implementation of DECAF consists of 9,550 lines of C++ code and 10,270 lines of C code and we evaluate DECAF using CPU2006 SPEC benchmarks and show average overhead of 605 percent for system wide tainting and 12 percent for VMI.", "title": "" }, { "docid": "4f3f3873e8eb89f0665fbeb456fbf477", "text": "STUDY DESIGN\nControlled laboratory study.\n\n\nOBJECTIVES\nTo clarify whether differences in surface stability influence trunk muscle activity.\n\n\nBACKGROUND\nLumbar stabilization exercises on unstable surfaces are performed widely. One perceived advantage in performing stabilization exercises on unstable surfaces is the potential for increased muscular demand. However, there is little evidence in the literature to help establish whether this assumption is correct.\n\n\nMETHODS\nNine healthy male subjects performed lumbar stabilization exercises. Pairs of intramuscular fine-wire or surface electrodes were used to record the electromyographic signal amplitude of the rectus abdominis, the external obliques, the transversus abdominis, the erector spinae, and lumbar multifidus. Five exercises were performed on the floor and on an unstable surface: elbow-toe, hand-knee, curl-up, side bridge, and back bridge. The EMG data were normalized as the percentage of the maximum voluntary contraction, and data between doing each exercise on the stable versus unstable surface were compared using a Wilcoxon signed-rank test.\n\n\nRESULTS\nWith the elbow-toe exercise, the activity level for all muscles was enhanced when performed on the unstable surface. When performing the hand-knee and side bridge exercises, activity level of the more global muscles was enhanced when performed on an unstable surface. Performing the curl-up exercise on an unstable surface, increased the activity of the external obliques but reduced transversus abdominis activation.\n\n\nCONCLUSION\nThis study indicates that lumbar stabilization exercises on an unstable surface enhanced the activities of trunk muscles, except for the back bridge exercise.", "title": "" }, { "docid": "87614469fe3251a547fe5795dd255230", "text": "Automatic detecting and counting vehicles in unsupervised video on highways is a very challenging problem in computer vision with important practical applications such as to monitor activities at traffic intersections for detecting congestions, and then predict the traffic flow which assists in regulating traffic. Manually reviewing the large amount of data they generate is often impractical. The background subtraction and image segmentation based on morphological transformation for tracking and counting vehicles on highways is proposed. This algorithm uses erosion followed by dilation on various frames. Proposed algorithm segments the image by preserving important edges which improves the adaptive background mixture model and makes the system learn faster and more accurately, as well as adapt effectively to changing environments.", "title": "" }, { "docid": "728a06d89a57261cf0560ec3513f2ae6", "text": "This paper reports on our review of published research relating to how teams work together to execute Big Data projects. Our findings suggest that there is no agreed upon standard for executing these projects but that there is a growing research focus in this area and that an improved process methodology would be useful. 
In addition, our synthesis also provides useful suggestions to help practitioners execute their projects, specifically our identified list of 33 important success factors for executing Big Data efforts, which are grouped by our six identified characteristics of a mature Big Data organization.", "title": "" }, { "docid": "ab2c4d5317d2e10450513283c21ca6d3", "text": "We present DEC0DE, a system for recovering information from phones with unknown storage formats, a critical problem for forensic triage. Because phones have myriad custom hardware and software, we examine only the stored data. Via flexible descriptions of typical data structures, and using a classic dynamic programming algorithm, we are able to identify call logs and address book entries in phones across varied models and manufacturers. We designed DEC0DE by examining the formats of one set of phone models, and we evaluate its performance on other models. Overall, we are able to obtain high performance for these unexamined models: an average recall of 97% and precision of 80% for call logs; and average recall of 93% and precision of 52% for address books. Moreover, at the expense of recall dropping to 14%, we can increase precision of address book recovery to 94% by culling results that don’t match between call logs and address book entries on the same phone.", "title": "" }, { "docid": "49b0ba019f6f968804608aeacec2a959", "text": "In this paper, we introduce a novel problem of audio-visual event localization in unconstrained videos. We define an audio-visual event as an event that is both visible and audible in a video segment. We collect an Audio-Visual Event (AVE) dataset to systemically investigate three temporal localization tasks: supervised and weakly-supervised audio-visual event localization, and cross-modality localization. We develop an audio-guided visual attention mechanism to explore audio-visual correlations, propose a dual multimodal residual network (DMRN) to fuse information over the two modalities, and introduce an audio-visual distance learning network to handle the cross-modality localization. Our experiments support the following findings: joint modeling of auditory and visual modalities outperforms independent modeling, the learned attention can capture semantics of sounding objects, temporal alignment is important for audio-visual fusion, the proposed DMRN is effective in fusing audio-visual features, and strong correlations between the two modalities enable cross-modality localization.", "title": "" }, { "docid": "a4f5fcd7aab7d1d48f462f680336c905", "text": "The authors experienced a case with ocular ischemia with hypotony following injection of a dermal filler for augmentation rhinoplasty. Immediately after injection, the patient demonstrated a permanent visual loss with typical fundus features of central retinal artery occlusion. Multiple crusted ulcerative patches around the nose and left periorbit developed, and the left eye became severely inflamed, ophthalmoplegic, and hypotonic. Signs of anterior and posterior segment ischemia were observed including severe cornea edema, iris atrophy, and chorioretinal swelling. The retrograde arterial embolization of hyaluronic acid gel from vascular branches of nasal tip to central retinal artery and long posterior ciliary artery was highly suspicious. 
After 6 months of follow up, skin lesions and eyeball movement became normalized, but progressive exudative and tractional retinal detachment was causing phthisis bulbi.", "title": "" }, { "docid": "d7e61562c913fa9fa265fd8ef5288cb5", "text": "For our project, we consider the task of classifying the gender of an author of a blog, novel, tweet, post or comment. Previous attempts have considered traditional NLP models such as bag of words and n-grams to capture gender differences in authorship, and apply it to a specific media (e.g. formal writing, books, tweets, or blogs). Our project takes a novel approach by applying deep learning models developed by Lai et al to directly learn the gender of blog authors. We further refine their models and present a new deep learning model, the Windowed Recurrent Convolutional Neural Network (WRCNN), for gender classification. Our approaches are tested and trained on several datasets: a blog dataset used by Mukherjee et al, and two datasets representing 19th and 20th century authors, respectively. We report an accuracy of 86% on the blog dataset with our WRCNN model, comparable with state-of-the-art implementations.", "title": "" }, { "docid": "0344917c6b44b85946313957a329bc9c", "text": "Recently, Haas and Hellerstein proposed the hash ripple join algorithm in the context of online aggregation. Although the algorithm rapidly gives a good estimate for many join-aggregate problem instances, the convergence can be slow if the number of tuples that satisfy the join predicate is small or if there are many groups in the output. Furthermore, if memory overflows (for example, because the user allows the algorithm to run to completion for an exact answer), the algorithm degenerates to block ripple join and performance suffers. In this paper, we build on the work of Haas and Hellerstein and propose a new algorithm that (a) combines parallelism with sampling to speed convergence, and (b) maintains good performance in the presence of memory overflow. Results from a prototype implementation in a parallel DBMS show that its rate of convergence scales with the number of processors, and that when allowed to run to completion, even in the presence of memory overflow, it is competitive with the traditional parallel hybrid hash join algorithm.", "title": "" }, { "docid": "e9f9a7c506221bacf966808f54c4f056", "text": "Reconfigurable antennas, with the ability to radiate more than one pattern at different frequencies and polarizations, are necessary in modern telecommunication systems. The requirements for increased functionality (e.g., direction finding, beam steering, radar, control, and command) within a confined volume place a greater burden on today's transmitting and receiving systems. Reconfigurable antennas are a solution to this problem. This paper discusses the different reconfigurable components that can be used in an antenna to modify its structure and function. These reconfiguration techniques are either based on the integration of radio-frequency microelectromechanical systems (RF-MEMS), PIN diodes, varactors, photoconductive elements, or on the physical alteration of the antenna radiating structure, or on the use of smart materials such as ferrites and liquid crystals. Various activation mechanisms that can be used in each different reconfigurable implementation to achieve optimum performance are presented and discussed. 
Several examples of reconfigurable antennas for both terrestrial and space applications are highlighted, such as cognitive radio, multiple-input-multiple-output (MIMO) systems, and satellite communication.", "title": "" }, { "docid": "348488fc6dd8cea52bd7b5808209c4c0", "text": "Information Technology (IT) within Secretariat General of The Indonesian House of Representatives has important role to support the Member of Parliaments (MPs) duties and functions and therefore needs to be well managed to become enabler in achieving organization goals. In this paper, IT governance at Secretariat General of The Indonesian House of Representatives is evaluated using COBIT 5 framework to get their current capabilities level which then followed by recommendations to improve their level. The result of evaluation shows that IT governance process of Secretariat General of The Indonesian House of Representatives is 1.1 (Performed Process), which means that IT processes have been implemented and achieved their purpose. Recommendations for process improvement are derived based on three criteria (Stakeholder's support, IT human resources, and Achievement target time) resulting three processes in COBIT 5 that need to be prioritized: APO13 (Manage Security), BAI01 (Manage Programmes and Projects), and EDM01 (Ensure Governance Framework Setting and Maintenance).", "title": "" }, { "docid": "ed8fef21796713aba1a6375a840c8ba3", "text": "PURPOSE\nThe novel self-paced maximal-oxygen-uptake (VO2max) test (SPV) may be a more suitable alternative to traditional maximal tests for elite athletes due to the ability to self-regulate pace. This study aimed to examine whether the SPV can be administered on a motorized treadmill.\n\n\nMETHODS\nFourteen highly trained male distance runners performed a standard graded exercise test (GXT), an incline-based SPV (SPVincline), and a speed-based SPV (SPVspeed). The GXT included a plateau-verification stage. Both SPV protocols included 5×2-min stages (and a plateau-verification stage) and allowed for self-pacing based on fixed increments of rating of perceived exertion: 11, 13, 15, 17, and 20. The participants varied their speed and incline on the treadmill by moving between different marked zones in which the tester would then adjust the intensity.\n\n\nRESULTS\nThere was no significant difference (P=.319, ES=0.21) in the VO2max achieved in the SPVspeed (67.6±3.6 mL·kg(-1)·min(-1), 95%CI=65.6-69.7 mL·kg(-1)·min(-1)) compared with that achieved in the GXT (68.6±6.0 mL·kg(-1)·min(-1), 95%CI=65.1-72.1 mL·kg(-1)·min(-1)). Participants achieved a significantly higher VO2max in the SPVincline (70.6±4.3 mL·kg(-1)·min(-1), 95%CI=68.1-73.0 mL·kg(-1)·min(-1)) than in either the GXT (P=.027, ES=0.39) or SPVspeed (P=.001, ES=0.76).\n\n\nCONCLUSIONS\nThe SPVspeed protocol produces VO2max values similar to those obtained in the GXT and may represent a more appropriate and athlete-friendly test that is more oriented toward the variable speed found in competitive sport.", "title": "" }, { "docid": "f7dd0d86e674e41903fac0badb3686b9", "text": "Context. Software defect prediction aims to reduce the large costs involved with faults in a software system. A wide range of traditional software metrics have been evaluated as potential defect indicators. These traditional metrics are derived from the source code or from the software development process. Studies have shown that no metric clearly out performs another and identifying defect-prone code using traditional metrics has reached a performance ceiling. 
Less traditional metrics have been studied, with these metrics being derived from the natural language of the source code. These newer, less traditional and finer grained metrics have shown promise within defect prediction. Aims. The aim of this dissertation is to study the relationship between short Java constructs and the faultiness of source code. To study this relationship this dissertation introduces the concept of a Java sequence and Java code snippet. Sequences are created by using the Java abstract syntax tree. The ordering of the nodes within the abstract syntax tree creates the sequences, while small subsequences of this sequence are the code snippets. The dissertation tries to find a relationship between the code snippets and faulty and non-faulty code. This dissertation also looks at the evolution of the code snippets as a system matures, to discover whether code snippets significantly associated with faulty code change over time. Methods. To achieve the aims of the dissertation, two main techniques have been developed; finding defective code and extracting Java sequences and code snippets. Finding defective code has been split into two areas finding the defect fix and defect insertion points. To find the defect fix points an implementation of the bug-linking algorithm has been developed, called S + e . Two algorithms were developed to extract the sequences and the code snippets. The code snippets are analysed using the binomial test to find which ones are significantly associated with faulty and non-faulty code. These techniques have been performed on five different Java datasets; ArgoUML, AspectJ and three releases of Eclipse.JDT.core Results. There are significant associations between some code snippets and faulty code. Frequently occurring fault-prone code snippets include those associated with identifiers, method calls and variables. There are some code snippets significantly associated with faults that are always in faulty code. There are 201 code snippets that are snippets significantly associated with faults across all five of the systems. The technique is unable to find any significant associations between code snippets and non-faulty code. The relationship between code snippets and faults seems to change as the system evolves with more snippets becoming fault-prone as Eclipse.JDT.core evolved over the three releases analysed. Conclusions. This dissertation has introduced the concept of code snippets into software engineering and defect prediction. The use of code snippets offers a promising approach to identifying potentially defective code. Unlike previous approaches, code snippets are based on a comprehensive analysis of low level code features and potentially allow the full set of code defects to be identified. Initial research into the relationship between code snippets and faults has shown that some code constructs or features are significantly related to software faults. The significant associations between code snippets and faults has provided additional empirical evidence to some already researched bad constructs within defect prediction. 
The code snippets have shown that some constructs significantly associated with faults are located in all five systems, and although this set is small finding any defect indicators that transfer successfully from one system to another is rare.", "title": "" }, { "docid": "861b170e5da6941e2cf55d8b7d9799b6", "text": "Scaling wireless charging to power levels suitable for heavy duty passenger vehicles and mass transit bus requires indepth assessment of wireless power transfer (WPT) architectures, component sizing and stress, package size, electrical insulation requirements, parasitic loss elements, and cost minimization. It is demonstrated through an architecture comparison that the voltage rating of the power inverter semiconductors will be higher for inductor-capacitor-capacitor (LCC) than for a more conventional Series-Parallel (S-P) tuning. Higher voltage at the source inverter dc bus facilitates better utilization of the semiconductors, hence lower cost. Electrical and thermal stress factors of the passive components are explored, in particular the compensating capacitors and coupling coils. Experimental results are presented for a prototype, precommercial, 10 kW wireless charger designed for heavy duty (HD) vehicle application. Results are in good agreement with theory and validate a design that minimizes component stress.", "title": "" }, { "docid": "8bae8e7937f4c9a492a7030c62d7d9f4", "text": "Although there is considerable interest in the advance bookings model as a forecasting method in the hotel industry, there has been little research analyzing the use of an advance booking curve in forecasting hotel reservations. The mainstream of advance booking models reviewed in the literature uses only the bookings-on-hand data on a certain day and ignores the previous booking data. This empirical study analyzes the entire booking data set for one year provided by the Hotel ICON in Hong Kong, and identifies the trends and patterns in the data. The analysis demonstrates the use of an advance booking curve in forecasting hotel reservations at property level.", "title": "" }, { "docid": "8005d1bd2065a14097cf5da85b941fc1", "text": "The American Psychological Association's (APA's) stance on the psychological maturity of adolescents has been criticized as inconsistent. In its Supreme Court amicus brief in Roper v. Simmons (2005), which abolished the juvenile death penalty, APA described adolescents as developmentally immature. In its amicus brief in Hodgson v. Minnesota (1990), however, which upheld adolescents' right to seek an abortion without parental involvement, APA argued that adolescents are as mature as adults. The authors present evidence that adolescents demonstrate adult levels of cognitive capability earlier than they evince emotional and social maturity. On the basis of this research, the authors argue that it is entirely reasonable to assert that adolescents possess the necessary skills to make an informed choice about terminating a pregnancy but are nevertheless less mature than adults in ways that mitigate criminal responsibility. The notion that a single line can be drawn between adolescence and adulthood for different purposes under the law is at odds with developmental science. Drawing age boundaries on the basis of developmental research cannot be done sensibly without a careful and nuanced consideration of the particular demands placed on the individual for \"adult-like\" maturity in different domains of functioning.", "title": "" } ]
scidocsrr
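One of the abstracts in the record above (the dissertation on Java code snippets and defect prediction) tests whether short code constructs are significantly associated with faulty code using the binomial test. A small sketch of such a test is shown below; the counts and the background fault rate are invented for illustration, and the dissertation's exact test setup may differ.

```python
# Hypothetical sketch of the kind of test described in the defect-prediction
# dissertation above: is a code snippet seen in faulty code more often than
# a background fault rate would predict? One-sided binomial tail, stdlib only.
from math import comb

def binomial_p_value(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# toy numbers (assumptions, not taken from the dissertation):
occurrences = 40        # times the snippet appears overall
in_faulty = 25          # of those, how many occurrences are in faulty code
base_fault_rate = 0.3   # fraction of code that is faulty overall

p = binomial_p_value(in_faulty, occurrences, base_fault_rate)
print(f"one-sided p-value: {p:.4g}")   # small p => snippet looks fault-prone
```

A small p-value suggests the snippet appears in faulty code more often than the assumed base rate would explain by chance; with many candidate snippets, a multiple-testing correction would normally also be applied.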
0d8bb5e4e9f9c79d2ac85ba47e2e990c
Image Segmentation using Fuzzy C Means Clustering: A survey
[ { "docid": "2c8e7bfcd41924d0fe8f66166d366751", "text": "-Many image segmentation techniques are available in the literature. Some of these techniques use only the gray level histogram, some use spatial details while others use fuzzy set theoretic approaches. Most of these techniques are not suitable for noisy environments. Some works have been done using the Markov Random Field (MRF) model which is robust to noise, but is computationally involved. Neural network architectures which help to get the output in real time because of their parallel processing ability, have also been used for segmentation and they work fine even when the noise level is very high. The literature on color image segmentation is not that rich as it is for gray tone images. This paper critically reviews and summarizes some of these techniques. Attempts have been made to cover both fuzzy and non-fuzzy techniques including color image segmentation and neural network based approaches. Adequate attention is paid to segmentation of range images and magnetic resonance images. It also addresses the issue of quantitative evaluation of segmentation results. Image segmentation Fuzzy sets Markov Random Field Thresholding Edge detection Clustering Relaxation", "title": "" } ]
[ { "docid": "9c0d65ee42ccfaa291b576568bad59e0", "text": "BACKGROUND\nThe WHO International Classification of Diseases, 11th version (ICD-11), has proposed two related diagnoses following exposure to traumatic events; Posttraumatic Stress Disorder (PTSD) and Complex PTSD (CPTSD). We set out to explore whether the newly developed ICD-11 Trauma Questionnaire (ICD-TQ) can distinguish between classes of individuals according to the PTSD and CPTSD symptom profiles as per ICD-11 proposals based on latent class analysis. We also hypothesized that the CPTSD class would report more frequent and a greater number of different types of childhood trauma as well as higher levels of functional impairment. Methods Participants in this study were a sample of individuals who were referred for psychological therapy to a National Health Service (NHS) trauma centre in Scotland (N=193). Participants completed the ICD-TQ as well as measures of life events and functioning.\n\n\nRESULTS\nOverall, results indicate that using the newly developed ICD-TQ, two subgroups of treatment-seeking individuals could be empirically distinguished based on different patterns of symptom endorsement; a small group high in PTSD symptoms only and a larger group high in CPTSD symptoms. In addition, CPTSD was more strongly associated with more frequent and a greater accumulation of different types of childhood traumatic experiences and poorer functional impairment.\n\n\nLIMITATIONS\nSample predominantly consisted of people who had experienced childhood psychological trauma or been multiply traumatised in childhood and adulthood.\n\n\nCONCLUSIONS\nCPTSD is highly prevalent in treatment seeking populations who have been multiply traumatised in childhood and adulthood and appropriate interventions should now be developed to aid recovery from this debilitating condition.", "title": "" }, { "docid": "e50b074abe37cc8caec8e3922347e0d9", "text": "Subjectivity and sentiment analysis (SSA) has recently gained considerable attention, but most of the resources and systems built so far are tailored to English and other Indo-European languages. The need for designing systems for other languages is increasing, especially as blogging and micro-blogging websites become popular throughout the world. This paper surveys different techniques for SSA for Arabic. After a brief synopsis about Arabic, we describe the main existing techniques and test corpora for Arabic SSA that have been introduced in the literature.", "title": "" }, { "docid": "6afad353d7dec9fce0e5e4531fd08cf3", "text": "This paper describes some new developments in the application of power electronics to automotive power generation and control. A new load-matching technique is introduced that uses a simple switched-mode rectifier to achieve dramatic increases in peak and average power output from a conventional Lundell alternator, along with substantial improvements in efficiency. Experimental results demonstrate these capability improvements. Additional performance and functionality improvements of particular value for high-voltage (e.g., 42 V) alternators are also demonstrated. Tight load-dump transient suppression can be achieved using this new architecture. It is also shown that the alternator system can be used to implement jump charging (the charging of the high-voltage system battery from a low-voltage source). Dual-output extensions of the technique (e.g., 42/14 V) are also introduced. 
The new technology preserves the simplicity and low cost of conventional alternator designs, and can be implemented within the existing manufacturing infrastructure.", "title": "" }, { "docid": "b09cacfb35cd02f6a5345c206347c6ae", "text": "Facebook, as one of the most popular social networking sites among college students, provides a platform for people to manage others' impressions of them. People tend to present themselves in a favorable way on their Facebook profile. This research examines the impact of using Facebook on people's perceptions of others' lives. It is argued that those with deeper involvement with Facebook will have different perceptions of others than those less involved due to two reasons. First, Facebook users tend to base judgment on examples easily recalled (the availability heuristic). Second, Facebook users tend to attribute the positive content presented on Facebook to others' personality, rather than situational factors (correspondence bias), especially for those they do not know personally. Questionnaires, including items measuring years of using Facebook, time spent on Facebook each week, number of people listed as their Facebook \"friends,\" and perceptions about others' lives, were completed by 425 undergraduate students taking classes across various academic disciplines at a state university in Utah. Surveys were collected during regular class period, except for two online classes where surveys were submitted online. The multivariate analysis indicated that those who have used Facebook longer agreed more that others were happier, and agreed less that life is fair, and those spending more time on Facebook each week agreed more that others were happier and had better lives. Furthermore, those that included more people whom they did not personally know as their Facebook \"friends\" agreed more that others had better lives.", "title": "" }, { "docid": "23ffdf5e7797e7f01c6d57f1e5546026", "text": "Classroom experiments that evaluate the effectiveness of educational technologies do not typically examine the effects of classroom contextual variables (e.g., out-of-software help-giving and external distractions). Yet these variables may influence students' instructional outcomes. In this paper, we introduce the Spatial Classroom Log Explorer (SPACLE): a prototype tool that facilitates the rapid discovery of relationships between within-software and out-of-software events. Unlike previous tools for retrospective analysis, SPACLE replays moment-by-moment analytics about student and teacher behaviors in their original spatial context. We present a data analysis workflow using SPACLE and demonstrate how this workflow can support causal discovery. We share the results of our initial replay analyses using SPACLE, which highlight the importance of considering spatial factors in the classroom when analyzing ITS log data. We also present the results of an investigation into the effects of student-teacher interactions on student learning in K-12 blended classrooms, using our workflow, which combines replay analysis with SPACLE and causal modeling. Our findings suggest that students' awareness of being monitored by their teachers may promote learning, and that \"gaming the system\" behaviors may extend outside of educational software use.", "title": "" }, { "docid": "71819107f543aa2b20b070e322cf1bbb", "text": "Despite the recent success of end-to-end learned representations, hand-crafted optical flow features are still widely used in video analysis tasks. 
To fill this gap, we propose TVNet, a novel end-to-end trainable neural network, to learn optical-flow-like features from data. TVNet subsumes a specific optical flow solver, the TV-L1 method, and is initialized by unfolding its optimization iterations as neural layers. TVNet can therefore be used directly without any extra learning. Moreover, it can be naturally concatenated with other task-specific networks to formulate an end-to-end architecture, thus making our method more efficient than current multi-stage approaches by avoiding the need to pre-compute and store features on disk. Finally, the parameters of the TVNet can be further fine-tuned by end-to-end training. This enables TVNet to learn richer and task-specific patterns beyond exact optical flow. Extensive experiments on two action recognition benchmarks verify the effectiveness of the proposed approach. Our TVNet achieves better accuracies than all compared methods, while being competitive with the fastest counterpart in terms of features extraction time.", "title": "" }, { "docid": "857e9430ebc5cf6aad2737a0ce10941e", "text": "Despite a long tradition of effectiveness in laboratory tests, normative messages have had mixed success in changing behavior in field contexts, with some studies showing boomerang effects. To test a theoretical account of this inconsistency, we conducted a field experiment in which normative messages were used to promote household energy conservation. As predicted, a descriptive normative message detailing average neighborhood usage produced either desirable energy savings or the undesirable boomerang effect, depending on whether households were already consuming at a low or high rate. Also as predicted, adding an injunctive message (conveying social approval or disapproval) eliminated the boomerang effect. The results offer an explanation for the mixed success of persuasive appeals based on social norms and suggest how such appeals should be properly crafted.", "title": "" }, { "docid": "166ea8466f5debc7c09880ba17c819e1", "text": "Lymphoepithelioma-like carcinoma (LELCA) of the urinary bladder is a rare variant of bladder cancer characterized by a malignant epithelial component densely infiltrated by lymphoid cells. It is characterized by indistinct cytoplasmic borders and a syncytial growth pattern. These neoplasms deserve recognition and attention, chiefly because they may be responsive to chemotherapy. We report on the clinicopathologic features of 13 cases of LELCA recorded since 1981. The chief complaint in all 13 patients was hematuria. Their ages ranged from 58 years to 82 years. All tumors were muscle invasive. A significant lymphocytic reaction was present in all of these tumors. There were three pure LELCA and six predominant LELCA with a concurrent transitional cell carcinoma (TCC). The remainder four cases had a focal LELCA component admixed with TCC. Immunohistochemistry showed LELCA to be reactive against epithelial membrane antigen and several cytokeratins (CKs; AE1/AE3, AE1, AE3, CK7, and CK8). CK20 and CD44v6 stained focally. The lymphocytic component was composed of a mixture of T and B cells intermingled with some dendritic cells and histiocytes. Latent membrane protein 1 (LMP1) immunostaining and in situ hybridization for Epstein-Barr virus were negative in all 13 cases. DNA ploidy of these tumors gave DNA histograms with diploid peaks (n=7) or non-diploid peaks (aneuploid or tetraploid; n=6). 
All patients with pure and 66% with predominant LELCA were alive, while all patients having focal LELCA died of disease. Our data suggest that pure and predominant LELCA of the bladder appear to be morphologically and clinically different from other bladder (undifferentiated and poorly differentiated conventional TCC) carcinomas and should be recognized as separate clinicopathological variants of TCC with heavy lymphocytic reaction relevant in patient management.", "title": "" }, { "docid": "6d50ff00babb00d36a30fdc769091b7e", "text": "The purpose of Advanced Driver Assistance Systems (ADAS) is that driver error will be reduced or even eliminated, and efficiency in traffic and transport is enhanced. The benefits of ADAS implementations are potentially considerable because of a significant decrease in human suffering, economical cost and pollution. However, there are also potential problems to be expected, since the task of driving a ordinary motor vehicle is changing in nature, in the direction of supervising a (partly) automated moving vehicle.", "title": "" }, { "docid": "bb295b25353ecdf85a104ee5a928c313", "text": "There is growing conviction that the future of computing depends on our ability to exploit big data on theWeb to enhance intelligent systems. This includes encyclopedic knowledge for factual details, common sense for human-like reasoning and natural language generation for smarter communication. With recent chatbots conceivably at the verge of passing the Turing Test, there are calls for more common sense oriented alternatives, e.g., the Winograd Schema Challenge. The Aristo QA system demonstrates the lack of common sense in current systems in answering fourth-grade science exam questions. On the language generation front, despite the progress in deep learning, current models are easily confused by subtle distinctions that may require linguistic common sense, e.g.quick food vs. fast food. These issues bear on tasks such as machine translation and should be addressed using common sense acquired from text. Mining common sense from massive amounts of data and applying it in intelligent systems, in several respects, appears to be the next frontier in computing. Our brief overview of the state of Commonsense Knowledge (CSK) in Machine Intelligence provides insights into CSK acquisition, CSK in natural language, applications of CSK and discussion of open issues. This paper provides a report of a tutorial at a recent conference with a brief survey of topics.", "title": "" }, { "docid": "066eef8e511fac1f842c699f8efccd6b", "text": "In this paper, we propose a new model that is capable of recognizing overlapping mentions. We introduce a novel notion of mention separators that can be effectively used to capture how mentions overlap with one another. On top of a novel multigraph representation that we introduce, we show that efficient and exact inference can still be performed. We present some theoretical analysis on the differences between our model and a recently proposed model for recognizing overlapping mentions, and discuss the possible implications of the differences. 
Through extensive empirical analysis on standard datasets, we demonstrate the effectiveness of our approach.", "title": "" }, { "docid": "c207f2c0dfc1ecee332df70ec5810459", "text": "Hierarchical organization-the recursive composition of sub-modules-is ubiquitous in biological networks, including neural, metabolic, ecological, and genetic regulatory networks, and in human-made systems, such as large organizations and the Internet. To date, most research on hierarchy in networks has been limited to quantifying this property. However, an open, important question in evolutionary biology is why hierarchical organization evolves in the first place. It has recently been shown that modularity evolves because of the presence of a cost for network connections. Here we investigate whether such connection costs also tend to cause a hierarchical organization of such modules. In computational simulations, we find that networks without a connection cost do not evolve to be hierarchical, even when the task has a hierarchical structure. However, with a connection cost, networks evolve to be both modular and hierarchical, and these networks exhibit higher overall performance and evolvability (i.e. faster adaptation to new environments). Additional analyses confirm that hierarchy independently improves adaptability after controlling for modularity. Overall, our results suggest that the same force-the cost of connections-promotes the evolution of both hierarchy and modularity, and that these properties are important drivers of network performance and adaptability. In addition to shedding light on the emergence of hierarchy across the many domains in which it appears, these findings will also accelerate future research into evolving more complex, intelligent computational brains in the fields of artificial intelligence and robotics.", "title": "" }, { "docid": "37ed4c0703266525a7d62ca98dd65e0f", "text": "Social cognition in humans is distinguished by psychological processes that allow us to make inferences about what is going on inside other people-their intentions, feelings, and thoughts. Some of these processes likely account for aspects of human social behavior that are unique, such as our culture and civilization. Most schemes divide social information processing into those processes that are relatively automatic and driven by the stimuli, versus those that are more deliberative and controlled, and sensitive to context and strategy. These distinctions are reflected in the neural structures that underlie social cognition, where there is a recent wealth of data primarily from functional neuroimaging. Here I provide a broad survey of the key abilities, processes, and ways in which to relate these to data from cognitive neuroscience.", "title": "" }, { "docid": "98729fc6a6b95222e6a6a12aa9a7ded7", "text": "What good is self-control? We incorporated a new measure of individual differences in self-control into two large investigations of a broad spectrum of behaviors. The new scale showed good internal consistency and retest reliability. Higher scores on self-control correlated with a higher grade point average, better adjustment (fewer reports of psychopathology, higher self-esteem), less binge eating and alcohol abuse, better relationships and interpersonal skills, secure attachment, and more optimal emotional responses. Tests for curvilinearity failed to indicate any drawbacks of so-called overcontrol, and the positive effects remained after controlling for social desirability. 
Low self-control is thus a significant risk factor for a broad range of personal and interpersonal problems.", "title": "" }, { "docid": "5c64b25ae243ad010ee15e39e5d824e3", "text": "This paper examines the work and interactions between camera operators and a vision mixer during an ice hockey match, and presents an interaction analysis using video data. We analyze video-mediated indexical gestures in the collaborative production of live sport on television between distributed team members. The findings demonstrate how video forms the topic, resource and product of collabora-tion: whilst it shapes the nature of the work (editing), it is simultaneously also the primary resource for supporting mutual orientation and negotiating shot transitions between remote participants (co-ordination), as well as its end prod-uct (broadcast). Our analysis of current professional activi-ties is used to develop implications for the design of future services for live collaborative video production.", "title": "" }, { "docid": "ec85dafd4c0f04d3e573941b397b3f10", "text": "The future of communication resides in Internet of Things, which is certainly the most sought after technology today. The applications of IoT are diverse, and range from ordinary voice recognition to critical space programmes. Recently, a lot of efforts have been made to design operating systems for IoT devices because neither traditional Windows/Unix, nor the existing Real Time Operating Systems are able to meet the demands of heterogeneous IoT applications. This paper presents a survey of operating systems that have been designed so far for IoT devices and also outlines a generic framework that brings out the essential features desired in an OS tailored for IoT devices.", "title": "" }, { "docid": "5ee5f4450ecc89b684e90e7b846f8365", "text": "This study scrutinizes the predictive relationship between three referral channels, search engine, social medial, and third-party advertising, and online consumer search and purchase. The results derived from vector autoregressive models suggest that the three channels have differential predictive relationship with sale measures. The predictive power of the three channels is also considerably different in referring customers among competing online shopping websites. In the short run, referrals from all three channels have a significantly positive predictive relationship with the focal online store’s sales amount and volume, but having no significant relationship with conversion. Only referrals from search engines to the rival website have a significantly negative predictive relationship with the focal website’s sales and volume. In the long run, referrals from all three channels have a significant positive predictive relationship with the focal website’s sales, conversion and sales volume. In contrast, referrals from all three channels to the competing online stores have a significant negative predictive relationship with the focal website’s sales, conversion and sales volume. Our results also show that search engine referrals explains the most of the variance in sales, while social media referrals explains the most of the variance in conversion and third party ads referrals explains the most of the variance in sales volume. 
This study offers new insights for IT and marketing practitioners in respect to better and deeper understanding on marketing attribution and how different channels perform in order to optimize the media mix and overall performance.", "title": "" }, { "docid": "1615e93f027c6f6f400ce1cc7a1bb8aa", "text": "In the recent years, we have witnessed the rapid adoption of social media platforms, such as Twitter, Facebook and YouTube, and their use as part of the everyday life of billions of people worldwide. Given the habit of people to use these platforms to share thoughts, daily activities and experiences it is not surprising that the amount of user generated content has reached unprecedented levels, with a substantial part of that content being related to real-world events, i.e. actions or occurrences taking place at a certain time and location. Figure 1 illustrates three main categories of events along with characteristic photos from Flickr for each of them: a) news-related events, e.g. demonstrations, riots, public speeches, natural disasters, terrorist attacks, b) entertainment events, e.g. sports, music, live shows, exhibitions, festivals, and c) personal events, e.g. wedding, birthday, graduation ceremonies, vacations, and going out. Depending on the event, different types of multimedia and social media platform are more popular. For instance, news-related events are extensively published in the form of text updates, images and videos on Twitter and YouTube, entertainment and social events are often captured in the form of images and videos and shared on Flickr and YouTube, while personal events are mostly represented by images that are shared on Facebook and Instagram. Given the key role of events in our life, the task of annotating and organizing social media content around them is of crucial importance for ensuring real-time and future access to multimedia content about an event of interest. However, the vast amount of noisy and non-informative social media posts, in conjunction with their large scale, makes that task very challenging. For instance, in the case of popular events that are covered live on Twitter, there are often millions of posts referring to a single event, as in the case of the World Cup Final 2014 between Brazil and Germany, which produced approximately 32.1 million tweets with a rate of 618,725 tweets per minute. Processing, aggregating and selecting the most informative, entertaining and representative tweets among such a large dataset is a very challenging multimedia retrieval problem. In other", "title": "" }, { "docid": "7203aedbdb4c3b42c34dafdefe082b63", "text": "We discuss silver ink as a low cost option for manufacturing RFID tags at ultra high frequencies (UHF). An analysis of two different RFID tag antennas, made from silver ink and from copper, is presented at UHF. The influence of each material on tag performance is discussed along with simulation results and measurement data which are in good agreement. It is observed that RFID tag performance depends both on material and on the shape of the antenna. For some classes of antennas, silver ink with higher conductivity performs as well as copper, which makes it an attractive low cost alternative material to copper for RFID tag antennas.", "title": "" }, { "docid": "e35194cb3fdd3edee6eac35c45b2da83", "text": "The availability of high-resolution Digital Surface Models of coastal environments is of increasing interest for scientists involved in the study of the coastal system processes. 
Among the range of terrestrial and aerial methods available to produce such a dataset, this study tests the utility of the Structure from Motion (SfM) approach to low-altitude aerial imageries collected by Unmanned Aerial Vehicle (UAV). The SfM image-based approach was selected whilst searching for a rapid, inexpensive, and highly automated method, able to produce 3D information from unstructured aerial images. In particular, it was used to generate a dense point cloud and successively a high-resolution Digital Surface Models (DSM) of a beach dune system in Marina di Ravenna (Italy). The quality of the elevation dataset produced by the UAV-SfM was initially evaluated by comparison with point cloud generated by a Terrestrial Laser Scanning (TLS) surveys. Such a comparison served to highlight an average difference in the vertical values of 0.05 m (RMS = 0.19 m). However, although the points cloud comparison is the best approach to investigate the absolute or relative correspondence between UAV and TLS methods, the assessment of geomorphic features is usually based on multi-temporal surfaces analysis, where an interpolation process is required. DSMs were therefore generated from UAV and TLS points clouds and vertical absolute accuracies assessed by comparison with a Global Navigation Satellite System (GNSS) survey. The vertical comparison of UAV and TLS DSMs with respect to GNSS measurements pointed out an average distance at cm-level (RMS = 0.011 m). The successive point by point direct comparison between UAV and TLS elevations show a very small average distance, 0.015 m, with RMS = 0.220 m. Larger values are encountered in areas where sudden changes in topography are present. The UAV-based approach was demonstrated to be a straightforward one and accuracy of the vertical dataset was comparable with results obtained by TLS technology.", "title": "" } ]
scidocsrr
84ccd2ad9d82da02eecfcea23401f585
Learning of Coordination Policies for Robotic Swarms
[ { "docid": "1847cce79f842a7d01f1f65721c1f007", "text": "Many tasks in AI require the collaboration of multiple agents. Typically, the communication protocol between agents is manually specified and not altered during training. In this paper we explore a simple neural model, called CommNN, that uses continuous communication for fully cooperative tasks. The model consists of multiple agents and the communication between them is learned alongside their policy. We apply this model to a diverse set of tasks, demonstrating the ability of the agents to learn to communicate amongst themselves, yielding improved performance over non-communicative agents and baselines. In some cases, it is possible to interpret the language devised by the agents, revealing simple but effective strategies for solving the task at hand.", "title": "" } ]
[ { "docid": "97a13a2a11db1b67230ab1047a43e1d6", "text": "Road detection from the perspective of moving vehicles is a challenging issue in autonomous driving. Recently, many deep learning methods spring up for this task, because they can extract high-level local features to find road regions from raw RGB data, such as convolutional neural networks and fully convolutional networks (FCNs). However, how to detect the boundary of road accurately is still an intractable problem. In this paper, we propose siamesed FCNs (named “s-FCN-loc”), which is able to consider RGB-channel images, semantic contours, and location priors simultaneously to segment the road region elaborately. To be specific, the s-FCN-loc has two streams to process the original RGB images and contour maps, respectively. At the same time, the location prior is directly appended to the siamesed FCN to promote the final detection performance. Our contributions are threefold: 1) An s-FCN-loc is proposed that learns more discriminative features of road boundaries than the original FCN to detect more accurate road regions. 2) Location prior is viewed as a type of feature map and directly appended to the final feature map in s-FCN-loc to promote the detection performance effectively, which is easier than other traditional methods, namely, different priors for different inputs (image patches). 3) The convergent speed of training s-FCN-loc model is 30% faster than the original FCN because of the guidance of highly structured contours. The proposed approach is evaluated on the KITTI road detection benchmark and one-class road detection data set, and achieves a competitive result with the state of the arts.", "title": "" }, { "docid": "46a4e4dbcb9b6656414420a908b51cc5", "text": "We review Bacry and Lévy-Leblond’s work on possible kinematics as applied to 2-dimensional spacetimes, as well as the nine types of 2-dimensional Cayley–Klein geometries, illustrating how the Cayley–Klein geometries give homogeneous spacetimes for all but one of the kinematical groups. We then construct a two-parameter family of Clifford algebras that give a unified framework for representing both the Lie algebras as well as the kinematical groups, showing that these groups are true rotation groups. In addition we give conformal models for these spacetimes.", "title": "" }, { "docid": "2b3335d6fb1469c4848a201115a78e2c", "text": "Laser grooving is used for the singulation of advanced CMOS wafers since it is believed that it exerts lower mechanical stress than traditional blade dicing. The very local heating of wafers, however, might result in high thermal stress around the heat affected zone. In this work we present a model to predict the temperature distribution, material removal, and the resulting stress, in a sandwiched structure of metals and dielectric materials that are commonly found in the back-end of line of semiconductor wafers. Simulation results on realistic three dimensional back-end structures reveal that the presence of metals clearly affects both the ablation depth, and the stress in the material. Experiments showed a similar observation for the ablation depth. The shape of the crater, however, was found to be more uniform than predicted by simulations, which is probably due to the redistribution of molten metal.", "title": "" }, { "docid": "e561ff9b3f836c0d005db1ffdacd6f56", "text": "A new era of Information Warfare has arrived. 
Various actors, including state-sponsored ones, are weaponizing information on Online Social Networks to run false information campaigns with targeted manipulation of public opinion on specific topics. These false information campaigns can have dire consequences to the public: mutating their opinions and actions, especially with respect to critical world events like major elections. Evidently, the problem of false information on the Web is a crucial one, and needs increased public awareness, as well as immediate attention from law enforcement agencies, public institutions, and in particular, the research community. In this paper, we make a step in this direction by providing a typology of the Web’s false information ecosystem, comprising various types of false information, actors, and their motives. We report a comprehensive overview of existing research on the false information ecosystem by identifying several lines of work: 1) how the public perceives false information; 2) understanding the propagation of false information; 3) detecting and containing false information on the Web; and 4) false information on the political stage. In this work, we pay particular attention to political false information as: 1) it can have dire consequences to the community (e.g., when election results are mutated) and 2) previous work show that this type of false information propagates faster and further when compared to other types of false information. Finally, for each of these lines of work, we report several future research directions that can help us better understand and mitigate the emerging problem of false information dissemination on the Web.", "title": "" }, { "docid": "b759613b1eedd29d32fbbc118767b515", "text": "Deep learning has been shown successful in a number of domains, ranging from acoustics, images to natural language processing. However, applying deep learning to the ubiquitous graph data is non-trivial because of the unique characteristics of graphs. Recently, a significant amount of research efforts have been devoted to this area, greatly advancing graph analyzing techniques. In this survey, we comprehensively review different kinds of deep learning methods applied to graphs. We divide existing methods into three main categories: semi-supervised methods including Graph Neural Networks and Graph Convolutional Networks, unsupervised methods including Graph Autoencoders, and recent advancements including Graph Recurrent Neural Networks and Graph Reinforcement Learning. We then provide a comprehensive overview of these methods in a systematic manner following their history of developments. We also analyze the differences of these methods and how to composite different architectures. Finally, we briefly outline their applications and discuss potential future directions.", "title": "" }, { "docid": "473d8cbcd597c961819c5be6ab2e658e", "text": "Mobile terrestrial laser scanners (MTLS), based on light detection and ranging sensors, are used worldwide in agricultural applications. MTLS are applied to characterize the geometry and the structure of plants and crops for technical and scientific purposes. Although MTLS exhibit outstanding performance, their high cost is still a drawback for most agricultural applications. This paper presents a low-cost alternative to MTLS based on the combination of a Kinect v2 depth sensor and a real time kinematic global navigation satellite system (GNSS) with extended color information capability. 
The theoretical foundations of this system are exposed along with some experimental results illustrating their performance and limitations. This study is focused on open-field agricultural applications, although most conclusions can also be extrapolated to similar outdoor uses. The developed Kinect-based MTLS system allows to select different acquisition frequencies and fields of view (FOV), from one to 512 vertical slices. The authors conclude that the better performance is obtained when a FOV of a single slice is used, but at the price of a very low measuring speed. With that particular configuration, plants, crops, and objects are reproduced accurately. Future efforts will be directed to increase the scanning efficiency by improving both the hardware and software components and to make it feasible using both partial and full FOV.", "title": "" }, { "docid": "ade9860157680b2ca6820042f0cda302", "text": "This chapter has two main objectives: to review influential ideas and findings in the literature and to outline the organization and content of the volume. The first part of the chapter lays a conceptual and empirical foundation for other chapters in the volume. Specifically, the chapter defines and distinguishes the key concepts of prejudice, stereotypes, and discrimination, highlighting how bias can occur at individual, institutional, and cultural levels. We also review different theoretical perspectives on these phenomena, including individual differences, social cognition, functional relations between groups, and identity concerns. We offer a broad overview of the field, charting how this area has developed over previous decades and identify emerging trends and future directions. The second part of the chapter focuses specifically on the coverage of the area in the present volume. It explains the organization of the book and presents a brief synopsis of the chapters in the volume. Throughout psychology’s history, researchers have evinced strong interest in understanding prejudice, stereotyping, and discrimination (Brewer & Brown, 1998; Dovidio, 2001; Duckitt, 1992; Fiske, 1998), as well as the phenomenon of intergroup bias more generally (Hewstone, Rubin, & Willis, 2002). Intergroup bias generally refers to the systematic tendency to evaluate one’s own membership group (the ingroup) or its members more favorably than a non-membership group (the outgroup) or its members. These topics have a long history in the disciplines of anthropology and sociology (e.g., Sumner, 1906). However, social psychologists, building on the solid foundations of Gordon Allport’s (1954) masterly volume, The Nature of Prejudice, have developed a systematic and more nuanced analysis of bias and its associated phenomena. Interest in prejudice, stereotyping, and discrimination is currently shared by allied disciplines such as sociology and political science, and emerging disciplines such as neuroscience. The practical implications of this large body of research are widely recognized in the law (Baldus, Woodworth, & Pulaski, 1990; Vidmar, 2003), medicine (Institute of Medicine, 2003), business (e.g., Brief, Dietz, Cohen, et al., 2000), the media, and education (e.g., Ben-Ari & Rich, 1997; Hagendoorn &", "title": "" }, { "docid": "dc8f5af4c7681fa2065a11c26cf05e2b", "text": "Bitcoin is the first e-cash system to see widespread adoption. While Bitcoin offers the potential for new types of financial interaction, it has significant limitations regarding privacy. 
Specifically, because the Bitcoin transaction log is completely public, users' privacy is protected only through the use of pseudonyms. In this paper we propose Zerocoin, a cryptographic extension to Bitcoin that augments the protocol to allow for fully anonymous currency transactions. Our system uses standard cryptographic assumptions and does not introduce new trusted parties or otherwise change the security model of Bitcoin. We detail Zerocoin's cryptographic construction, its integration into Bitcoin, and examine its performance both in terms of computation and impact on the Bitcoin protocol.", "title": "" }, { "docid": "c4caa735537ccd82c83a330fa85e142d", "text": "We propose a unified product embedded representation that is optimized for the task of retrieval-based product recommendation. To this end, we introduce a new way to fuse modality-specific product embeddings into a joint product embedding, in order to leverage both product content information, such as textual descriptions and images, and product collaborative filtering signal. By introducing the fusion step at the very end of our architecture, we are able to train each modality separately, allowing us to keep a modular architecture that is preferable in real-world recommendation deployments. We analyze our performance on normal and hard recommendation setups such as cold-start and cross-category recommendations and achieve good performance on a large product shopping dataset.", "title": "" }, { "docid": "8b3a58dc4f3aceae7723c17895775a1a", "text": "While the technology acceptance model (TAM), introduced in 1986, continues to be the most widely applied theoretical model in the IS field, few previous efforts examined its accomplishments and limitations. This study traces TAM’s history, investigates its findings, and cautiously predicts its future trajectory. One hundred and one articles published by leading IS journals and conferences in the past eighteen years are examined and summarized. An open-ended survey of thirty-two leading IS researchers assisted in critically examining TAM and specifying future directions.", "title": "" }, { "docid": "4107e9288ea64d039211acf48a091577", "text": "The trisomy 18 syndrome can result from a full, mosaic, or partial trisomy 18. The main clinical findings of full trisomy 18 consist of prenatal and postnatal growth deficiency, characteristic facial features, clenched hands with overriding fingers and nail hypoplasia, short sternum, short hallux, major malformations, especially of the heart, and profound intellectual disability in the surviving older children. The phenotype of partial trisomy 18 is extremely variable. The aim of this article is to systematically review the scientific literature on patients with partial trisomy 18 in order to identify regions of chromosome 18 that may be responsible for the specific clinical features of the trisomy 18 syndrome. We confirmed that trisomy of the short arm of chromosome 18 does not seem to cause the major features. However, we found candidate regions on the long arm of chromosome 18 for some of the characteristic clinical features, and thus a phenotypic map is proposed. 
Our findings confirm the hypothesis that single critical regions/candidate genes are likely to be responsible for specific characteristics of the syndrome, while a single critical region for the whole Edwards syndrome phenotype is unlikely to exist.", "title": "" }, { "docid": "a7ac6803295b7359f5c8c0fcdd26e0e7", "text": "The Internet of Things (IoT), the idea of getting real-world objects connected with each other, will change the way users organize, obtain and consume information radically. Internet of Things (IoT) enables various applications (crop growth monitoring and selection, irrigation decision support, etc.) in Digital Agriculture domain. The Wireless Sensors Network (WSN) is widely used to build decision support systems. These systems overcomes many problems in the real-world. One of the most interesting fields having an increasing need of decision support systems is Precision Agriculture (PA). Through sensor networks, agriculture can be connected to the IoT, which allows us to create connections among agronomists, farmers and crops regardless of their geographical differences. With the help of this approach which provides real-time information about the lands and crops that will help farmers make right decisions. The major advantage is implementation of WSN in Precision Agriculture (PA) will optimize the usage of water fertilizers while maximizing the yield of the crops and also will help in analyzing the weather conditions of the field.", "title": "" }, { "docid": "1d0241833add973cc7cf6117735b7a1a", "text": "This paper describes the conception and the construction of a low cost spin coating machine incorporating inexpensive electronic components and open-source technology based on Arduino platform. We present and discuss the details of the electrical, mechanical and control parts. This system will coat thin film in a micro level thickness and the microcontroller ATM 328 circuit controls and adjusts the spinning speed. We prepare thin films with good uniformity for various thicknesses by this spin coating system. The thickness and uniformity of deposited films were verified by determining electronic absorption spectra. We show that thin film thickness depends on the spin speed in the range of 2000–3500 rpm. We compare the results obtained on TiO2 layers deposited by our developed system to those grown by using a standard commercial spin coating systems.", "title": "" }, { "docid": "d6c95e47caf4e01fa5934b861a962f6e", "text": "Whereas theoretical work suggests that deep architectures might be more efficient at representing highly-varying functions, training deep architectures was unsuccessful until the recent advent of algorithms based on unsupervised pretraining. Even though these new algorithms have enabled training deep models, many questions remain as to the nature of this difficult learning problem. Answering these questions is important if learning in deep architectures is to be further improved. We attempt to shed some light on these questions through extensive simulations. The experiments confirm and clarify the advantage of unsupervised pre-training. They demonstrate the robustness of the training procedure with respect to the random initialization, the positive effect of pre-training in terms of optimization and its role as a regularizer. 
We empirically show the influence of pre-training with respect to architecture depth, model capacity, and number of training examples.", "title": "" }, { "docid": "08c97484fe3784e2f1fd42606b915f83", "text": "In the present study we manipulated the importance of performing two event-based prospective memory tasks. In Experiment 1, the event-based task was assumed to rely on relatively automatic processes, whereas in Experiment 2 the event-based task was assumed to rely on a more demanding monitoring process. In contrast to the first experiment, the second experiment showed that importance had a positive effect on prospective memory performance. In addition, the occurrence of an importance effect on prospective memory performance seemed to be mainly due to the features of the prospective memory task itself, and not to the characteristics of the ongoing tasks that only influenced the size of the importance effect. The results suggest that importance instructions may improve prospective memory if the prospective task requires the strategic allocation of attentional monitoring resources.", "title": "" }, { "docid": "da33a718aa9dbf6e9feaff5e63765639", "text": " This paper introduces a new frequency-domain approach to describe the relationships (direction of information flow) between multivariate time series based on the decomposition of multivariate partial coherences computed from multivariate autoregressive models. We discuss its application and compare its performance to other approaches to the problem of determining neural structure relations from the simultaneous measurement of neural electrophysiological signals. The new concept is shown to reflect a frequency-domain representation of the concept of Granger causality.", "title": "" }, { "docid": "9a4e9c73465d1026c2f5c91ec17eaf74", "text": "Devising an expressive question taxonomy is a central problem in question generation. Through examination of a corpus of human-human taskoriented tutoring, we have found that existing question taxonomies do not capture all of the tutorial questions present in this form of tutoring. We propose a hierarchical question classification scheme for tutorial questions in which the top level corresponds to the tutor’s goal and the second level corresponds to the question type. The application of this hierarchical classification scheme to a corpus of keyboard-to-keyboard tutoring of introductory computer science yielded high inter-rater reliability, suggesting that such a scheme is appropriate for classifying tutor questions in design-oriented tutoring. We discuss numerous open issues that are highlighted by the current analysis.", "title": "" }, { "docid": "db2e7cc9ea3d58e0c625684248e2ef80", "text": "PURPOSE\nTo review applications of Ajzen's theory of planned behavior in the domain of health and to verify the efficiency of the theory to explain and predict health-related behaviors.\n\n\nMETHODS\nMost material has been drawn from Current Contents (Social and Behavioral Sciences and Clinical Medicine) from 1985 to date, together with all peer-reviewed articles cited in the publications thus identified.\n\n\nFINDINGS\nThe results indicated that the theory performs very well for the explanation of intention; an averaged R2 of .41 was observed. Attitude toward the action and perceived behavioral control were most often the significant variables responsible for this explained variation in intention. The prediction of behavior yielded an averaged R2 of .34. 
Intention remained the most important predictor, but in half of the studies reviewed perceived behavioral control significantly added to the prediction.\n\n\nCONCLUSIONS\nThe efficiency of the model seems to be quite good for explaining intention, perceived behavioral control being as important as attitude across health-related behavior categories. The efficiency of the theory, however, varies between health-related behavior categories.", "title": "" }, { "docid": "4630ade03760cb8ec1da11b16703b3f1", "text": "Dengue infection is a major cause of morbidity and mortality in Malaysia. To date, much research on dengue infection conducted in Malaysia have been published. One hundred and sixty six articles related to dengue in Malaysia were found from a search through a database dedicated to indexing all original data relevant to medicine published between the years 2000-2013. Ninety articles with clinical relevance and future research implications were selected and reviewed. These papers showed evidence of an exponential increase in the disease epidemic and a varying pattern of prevalent dengue serotypes at different times. The early febrile phase of dengue infection consist of an undifferentiated fever. Clinical suspicion and ability to identify patients at risk of severe dengue infection is important. Treatment of dengue infection involves judicious use of volume expander and supportive care. Potential future research areas are discussed to narrow our current knowledge gaps on dengue infection.", "title": "" }, { "docid": "fdbdac5f319cd46aeb73be06ed64cbb9", "text": "Recently deep neural networks (DNNs) have been used to learn speaker features. However, the quality of the learned features is not sufficiently good, so a complex back-end model, either neural or probabilistic, has to be used to address the residual uncertainty when applied to speaker verification. This paper presents a convolutional time-delay deep neural network structure (CT-DNN) for speaker feature learning. Our experimental results on the Fisher database demonstrated that this CT-DNN can produce high-quality speaker features: even with a single feature (0.3 seconds including the context), the EER can be as low as 7.68%. This effectively confirmed that the speaker trait is largely a deterministic short-time property rather than a longtime distributional pattern, and therefore can be extracted from just dozens of frames.", "title": "" } ]
scidocsrr
e40df438e69e0665fae60b6b5e0f60cb
Guided HTM: Hierarchical Topic Model with Dirichlet Forest Priors
[ { "docid": "c698f7d6b487cc7c87d7ff215d7f12b2", "text": "This paper reports a controlled study with statistical significance tests on five text categorization methods: the Support Vector Machines (SVM), a k-Nearest Neighbor (kNN) classifier, a neural network (NNet) approach, the Linear Least-squares Fit (LLSF) mapping and a Naive Bayes (NB) classifier. We focus on the robustness of these methods in dealing with a skewed category distribution, and their performance as a function of the training-set category frequency. Our results show that SVM, kNN and LLSF significantly outperform NNet and NB when the number of positive training instances per category are small (less than ten), and that all the methods perform comparably when the categories are sufficiently common (over 300 instances).", "title": "" }, { "docid": "96c10ca887c0210615d16655f62665e0", "text": "The two key challenges in hierarchical classification are to leverage the hierarchical dependencies between the class-labels for improving performance, and, at the same time maintaining scalability across large hierarchies. In this paper we propose a regularization framework for large-scale hierarchical classification that addresses both the problems. Specifically, we incorporate the hierarchical dependencies between the class-labels into the regularization structure of the parameters thereby encouraging classes nearby in the hierarchy to share similar model parameters. Furthermore, we extend our approach to scenarios where the dependencies between the class-labels are encoded in the form of a graph rather than a hierarchy. To enable large-scale training, we develop a parallel-iterative optimization scheme that can handle datasets with hundreds of thousands of classes and millions of instances and learning terabytes of parameters. Our experiments showed a consistent improvement over other competing approaches and achieved state-of-the-art results on benchmark datasets.", "title": "" } ]
[ { "docid": "30874858a0395085bbae6bab78696d97", "text": "In recent years, open architecture motion controllers, including those for CNC machines and robots, have received much interest and support among the global control and automation community. This paper presents work done in extending a well-known and supported open-source control software called LinuxCNC for the control of a Delta robot, a translational parallel mechanism. Key features in the development process are covered and discussed and the final customized system based on LinuxCNC described.", "title": "" }, { "docid": "4c3e4da0a2423a184911dfed7f4e7234", "text": "Pseudo-relevance feedback (PRF) has been proven to be an effective query expansion strategy to improve retrieval performance. Several PRF methods have so far been proposed for many retrieval models. Recent theoretical studies of PRF methods show that most of the PRF methods do not satisfy all necessary constraints. Among all, the log-logistic model has been shown to be an effective method that satisfies most of the PRF constraints. In this paper, we first introduce two new PRF constraints. We further analyze the log-logistic feedback model and show that it does not satisfy these two constraints as well as the previously proposed \"relevance effect\" constraint. We then modify the log-logistic formulation to satisfy all these constraints. Experiments on three TREC newswire and web collections demonstrate that the proposed modification significantly outperforms the original log-logistic model, in all collections.", "title": "" }, { "docid": "7d646fdb10b1ef9d332b6bb80bc40920", "text": "Online financial textual information contains a large amount of investor sentiment, i.e. subjective assessment and discussion with respect to financial instruments. An effective solution to automate the sentiment analysis of such large amounts of online financial texts would be extremely beneficial. This paper presents a natural language processing (NLP) based pre-processing approach both for noise removal from raw online financial texts and for organizing such texts into an enhanced format that is more usable for feature extraction. The proposed approach integrates six NLP processing steps, including a developed syntactic and semantic combined negation handling algorithm, to reduce noise in the online informal text. Three-class sentiment classification is also introduced in each system implementation. Experimental results show that the proposed pre-processing approach outperforms other pre-processing methods. The combined negation handling algorithm is also evaluated against three standard negation handling approaches.", "title": "" }, { "docid": "926e91c6db2cdb01da0d4795a7ce059f", "text": "BACKGROUND\nSeveral behaviors, besides psychoactive substance ingestion, produce short-term reward that may engender persistent behavior, despite knowledge of adverse consequences, i.e., diminished control over the behavior. These disorders have historically been conceptualized in several ways. One view posits these disorders as lying along an impulsive-compulsive spectrum, with some classified as impulse control disorders. 
An alternate, but not mutually exclusive, conceptualization considers the disorders as non-substance or \"behavioral\" addictions.\n\n\nOBJECTIVES\nInform the discussion on the relationship between psychoactive substance and behavioral addictions.\n\n\nMETHODS\nWe review data illustrating similarities and differences between impulse control disorders or behavioral addictions and substance addictions. This topic is particularly relevant to the optimal classification of these disorders in the forthcoming fifth edition of the American Psychiatric Association Diagnostic and Statistical Manual of Mental Disorders (DSM-V).\n\n\nRESULTS\nGrowing evidence suggests that behavioral addictions resemble substance addictions in many domains, including natural history, phenomenology, tolerance, comorbidity, overlapping genetic contribution, neurobiological mechanisms, and response to treatment, supporting the DSM-V Task Force proposed new category of Addiction and Related Disorders encompassing both substance use disorders and non-substance addictions. Current data suggest that this combined category may be appropriate for pathological gambling and a few other better studied behavioral addictions, e.g., Internet addiction. There is currently insufficient data to justify any classification of other proposed behavioral addictions.\n\n\nCONCLUSIONS AND SCIENTIFIC SIGNIFICANCE\nProper categorization of behavioral addictions or impulse control disorders has substantial implications for the development of improved prevention and treatment strategies.", "title": "" }, { "docid": "ce901f6509da9ab13d66056319c15bd8", "text": "In this survey we overview graph-based clustering and its applications in computational linguistics. We summarize graph-based clustering as a five-part story: hypothesis, modeling, measure, algorithm and evaluation. We then survey three typical NLP problems in which graph-based clustering approaches have been successfully applied. Finally, we comment on the strengths and weaknesses of graph-based clustering and envision that graph-based clustering is a promising solution for some emerging NLP problems.", "title": "" }, { "docid": "01683120a2199b55d8f4aaca27098a47", "text": "As the microblogging service (such as Weibo) is becoming popular, spam becomes a serious problem of affecting the credibility and readability of Online Social Networks. Most existing studies took use of a set of features to identify spam, but without the consideration of the overlap and dependency among different features. In this study, we investigate the problem of spam detection by analyzing real spam dataset collections of Weibo and propose a novel hybrid model of spammer detection, called SDHM, which utilizing significant features, i.e. user behavior information, online social network attributes and text content characteristics, in an organic way. Experiments on real Weibo dataset demonstrate the power of the proposed hybrid model and the promising performance.", "title": "" }, { "docid": "b53ca6bf9197c32fc52cc8bf80ee92f7", "text": "Program code stored on the Ethereum blockchain is considered immutable, but this does not imply that its control flow cannot be modified. This bears the risk of loopholes whenever parties encode binding agreements in smart contracts. 
In order to quantify the issue, we define a heuristic indicator of control flow immutability, evaluate it based on a call graph of all smart contracts deployed on Ethereum, and find that two out of five smart contracts require trust in at least one third party. Besides, the analysis reveals that significant parts of the Ethereum blockchain are interspersed with debris from past attacks against the platform. We leverage the call graph to develop a method for data cleanup, which allows for less biased statistics of Ethereum use in practice.", "title": "" }, { "docid": "99d76fafe2a238a061e67e4c5e5bea52", "text": "F/OSS software has been described by many as a puzzle. In the past five years, it has stimulated the curiosity of scholars in a variety of fields, including economics, law, psychology, anthropology and computer science, so that the number of contributions on the subject has increased exponentially. The purpose of this paper is to provide a sufficiently comprehensive account of these contributions in order to draw some general conclusions on the state of our understanding of the phenomenon and identify directions for future research. The exercise suggests that what is puzzling about F/OSS is not so much the fact that people freely contribute to a good they make available to all, but rather the complexity of its institutional structure and its ability to organizationally evolve over time. JEL Classification: K11, L22, L23, L86, O31, O34.", "title": "" }, { "docid": "468dca8012f6bc16bd3a5388dadd07b0", "text": "Cloud computing is an emerging concept combining many fields of computing. The foundation of cloud computing is the delivery of services, software and processing capacity over the Internet, reducing cost, increasing storage, automating systems, decoupling of service delivery from underlying technology, and providing flexibility and mobility of information. However, the actual realization of these benefits is far from being achieved for mobile applications and open many new research questions. In order to better understand how to facilitate the building of mobile cloud-based applications, we have surveyed existing work in mobile computing through the prism of cloud computing principles. We give a definition of mobile cloud computing and provide an overview of the results from this review, in particular, models of mobile cloud applications. We also highlight research challenges in the area of mobile cloud computing. We conclude with recommendations for how this better understanding of mobile cloud computing can help building more powerful mobile applications.", "title": "" }, { "docid": "131a866cba7a8b2e4f66f2496a80cb41", "text": "The Python language is highly dynamic, most notably due to late binding. As a consequence, programs using Python typically run an order of magnitude slower than their C counterpart. It is also a high level language whose semantic can be made more static without much change from a user point of view in the case of mathematical applications. 
In that case, the language provides several vectorization opportunities that are studied in this paper, and evaluated in the context of Pythran, an ahead-of-time compiler that turns Python module into C++ meta-programs.", "title": "" }, { "docid": "d3a97a5015e27e0b2a043dc03d20228b", "text": "The exponential growth of cyber-physical systems (CPS), especially in safety-critical applications, has imposed several security threats (like manipulation of communication channels, hardware components, and associated software) due to complex cybernetics and the interaction among (independent) CPS domains. These security threats have led to the development of different static as well as adaptive detection and protection techniques on different layers of the CPS stack, e.g., cross-layer and intra-layer connectivity. This paper first presents a brief overview of various security threats at different CPS layers, their respective threat models and associated research challenges to develop robust security measures. Moreover, this paper provides a brief yet comprehensive survey of the state-of-the-art static and adaptive techniques for detection and prevention, and their inherent limitations, i.e., incapability to capture the dormant or uncertainty-based runtime security attacks. To address these challenges, this paper also discusses the intelligent security measures (using machine learning-based techniques) against several characterized attacks on different layers of the CPS stack. Furthermore, we identify the associated challenges and open research problems in developing intelligent security measures for CPS. Towards the end, we provide an overview of our project on security for smart CPS along with important analyses.", "title": "" }, { "docid": "a0d1d59fc987d90e500b3963ac11b2ad", "text": "The purpose of this paper is to present the applicability of THOMAS, an architecture specially designed to model agent-based virtual organizations, in the development of a multiagent system for managing and planning routes for clients in a mall. In order to build virtual organizations, THOMAS offers mechanisms to take into account their structure, behaviour, dynamic, norms and environment. Moreover, one of the primary characteristics of the THOMAS architecture is the use of agents with reasoning and planning capabilities. These agents can perform a dynamic reorganization when they detect changes in the environment. The proposed architecture is composed of a set of related modules that are appropriate for developing systems in highly volatile environments similar to the one presented in this study. This paper presents THOMAS as well as the results obtained after having applied the system to a case study. © 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "9b2066a48425cee0d2e31a48e13e5456", "text": "© 2013 Emerenciano et al., licensee InTech. This is an open access chapter distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Biofloc Technology (BFT): A Review for Aquaculture Application and Animal Food Industry", "title": "" }, { "docid": "06856cf61207a99146782e9e6e0911ef", "text": "Customer ratings are valuable sources to understand their satisfaction and are critical for designing better customer experiences and recommendations. 
The majority of customers, however, do not respond to rating surveys, which makes the result less representative. To understand overall satisfaction, this paper aims to investigate how likely customers without responses had satisfactory experiences compared to those respondents. To infer customer satisfaction of such unlabeled sessions, we propose models using recurrent neural networks (RNNs) that learn continuous representations of unstructured text conversation. By analyzing online chat logs of over 170,000 sessions from Samsung’s customer service department, we make a novel finding that while labeled sessions contributed by a small fraction of customers received overwhelmingly positive reviews, the majority of unlabeled sessions would have received lower ratings by customers. The data analytics presented in this paper not only have practical implications for helping detect dissatisfied customers on live chat services but also make theoretical contributions on discovering the level of biases in online rating platforms. ACM Reference Format: Kunwoo Park, Meeyoung Cha, and Eunhee Rhim. 2018. Positivity Bias in Customer Satisfaction Ratings. InWWW ’18 Companion: The 2018 Web Conference Companion, April 23–27, 2018, Lyon, France. ACM, New York, NY, USA, 8 pages. https://doi.org/10.1145/3184558.3186579", "title": "" }, { "docid": "3e5e9eecab5937dc1ec7ab835b045445", "text": "Kombucha is a beverage of probable Manchurian origins obtained from fermented tea by a microbial consortium composed of several bacteria and yeasts. This mixed consortium forms a powerful symbiosis capable of inhibiting the growth of potentially contaminating bacteria. The fermentation process also leads to the formation of a polymeric cellulose pellicle due to the activity of certain strains of Acetobacter sp. The tea fermentation process by the microbial consortium was able to show an increase in certain biological activities which have been already studied; however, little information is available on the characterization of its active components and their evolution during fermentation. Studies have also reported that the use of infusions from other plants may be a promising alternative.\n\n\nPRACTICAL APPLICATION\nKombucha is a traditional fermented tea whose consumption has increased in the recent years due to its multiple functional properties such as anti-inflammatory potential and antioxidant activity. The microbiological composition of this beverage is quite complex and still more research is needed in order to fully understand its behavior. This study comprises the chemical and microbiological composition of the tea and the main factors that may affect its production.", "title": "" }, { "docid": "2210176bcb0f139e3f7f7716447f3920", "text": "Automatic metadata generation provides scalability and usability for digital libraries and their collections. Machine learning methods offer robust and adaptable automatic metadata extraction. We describe a Support Vector Machine classification-based method for metadata extraction from header part of research papers and show that it outperforms other machine learning methods on the same task. The method first classifies each line of the header into one or more of 15 classes. An iterative convergence procedure is then used to improve the line classification by using the predicted class labels of its neighbor lines in the previous round. Further metadata extraction is done by seeking the best chunk boundaries of each line. 
We found that discovery and use of the structural patterns of the data and domain based word clustering can improve the metadata extraction performance. An appropriate feature normalization also greatly improves the classification performance. Our metadata extraction method was originally designed to improve the metadata extraction quality of the digital libraries Citeseer [17] and EbizSearch[24]. We believe it can be generalized to other digital libraries.", "title": "" }, { "docid": "b8e921733ef4ab77abcb48b0a1f04dbb", "text": "This paper demonstrates the efficiency of kinematic redundancy used to increase the useable workspace of planar parallel mechanisms. As examples, we propose kinematically redundant schemes of the well known planar 3RRR and 3RPR mechanisms denoted as 3(P)RRR and 3(P)RPR. In both cases, a prismatic actuator is added allowing a usually fixed base joint to move linearly. Hence, reconfigurations can be performed selectively in order to avoid singularities and to affect the mechanisms' performance directly. Using an interval-based method the useable workspace, i.e. the singularity-free workspace guaranteeing a desired performance, is obtained. Due to the interval analysis any uncertainties can be implemented within the algorithm leading to practical and realistic results. It is shown that due to the additional prismatic actuator the useable workspace increases significantly. Several analysis examples clarify the efficiency of the proposed kinematically redundant mechanisms.", "title": "" }, { "docid": "4465a375859cfe6ed4c242d6896a1042", "text": "Despite tremendous variation in the appearance of visual objects, primates can recognize a multitude of objects, each in a fraction of a second, with no apparent effort. However, the brain mechanisms that enable this fundamental ability are not understood. Drawing on ideas from neurophysiology and computation, we present a graphical perspective on the key computational challenges of object recognition, and argue that the format of neuronal population representation and a property that we term 'object tangling' are central. We use this perspective to show that the primate ventral visual processing stream achieves a particularly effective solution in which single-neuron invariance is not the goal. Finally, we speculate on the key neuronal mechanisms that could enable this solution, which, if understood, would have far-reaching implications for cognitive neuroscience.", "title": "" }, { "docid": "f3115abc9b159be833560ee5276c06b7", "text": "This paper describes a strategy on learning from time series data and on using learned model for forecasting. Time series forecasting, which analyzes and predicts a variable changing over time, has received much attention due to its use for forecasting stock prices, but it can also be used for pattern recognition and data mining. Our method for learning from time series data consists of detecting patterns within the data, describing the detected patterns, clustering the patterns, and creating a model to describe the data. It uses a change-point detection method to partition a time series into segments, each of the segments is then described by an autoregressive model. Then, it partitions all the segments into clusters, each of the clusters is considered as a state for a Markov model. It then creates the transitions between states in the Markov model based on the transitions between segments as the time series progressing. 
Our method for using the learned model for forecasting consists of identifying current state, forecasting trends, and adapting to changes. It uses a moving window to monitor real-time data and creates an autoregressive model for the recently observed data, which is then matched to a state of the learned Markov model. Following the transitions of the model, it forecasts future trends. It also continues to monitor real-time data and makes corrections if necessary for adapting to changes. We implemented and successfully tested the methods for an application of load balancing on a parallel computing system.", "title": "" }, { "docid": "32f72bb01626c69aaf7c3464f938c2d4", "text": "The ability of algorithms to evolve or learn (compositional) communication protocols has traditionally been studied in the language evolution literature through the use of emergent communication tasks. Here we scale up this research by using contemporary deep learning methods and by training reinforcement-learning neural network agents on referential communication games. We extend previous work, in which agents were trained in symbolic environments, by developing agents which are able to learn from raw pixel data, a more challenging and realistic input representation. We find that the degree of structure found in the input data affects the nature of the emerged protocols, and thereby corroborate the hypothesis that structured compositional language is most likely to emerge when agents perceive the world as being structured.", "title": "" } ]
scidocsrr
90885f9853c39111993466d3d1402a4c
Neural Programming Language
[ { "docid": "7b232b0ac1a4e7249b33bd54ddeba2b3", "text": "We present an analysis of how the generalization performance (expected test set error) relates to the expected training set error for nonlinear learning systems, such as multilayer perceptrons and radial basis functions. The principal result is the following relationship (computed to second order) between the expected test set and training set errors: ⟨E_test⟩ ≈ ⟨E_train⟩ + 2σ²_eff Peff(λ)/n (1). Here, n is the size of the training sample ξ, σ²_eff is the effective noise variance in the response variable(s), λ is a regularization or weight decay parameter, and Peff(λ) is the effective number of parameters in the nonlinear model. The expectations ⟨·⟩ of training set and test set errors are taken over possible training sets ξ and training and test sets ξ' respectively. The effective number of parameters Peff(λ) usually differs from the true number of model parameters P for nonlinear or regularized models; this theoretical conclusion is supported by Monte Carlo experiments. In addition to the surprising result that Peff(λ) ≠ p, we propose an estimate of (1) called the generalized prediction error (GPE) which generalizes well established estimates of prediction risk such as Akaike's FPE and AIC, Mallows Cp, and Barron's PSE to the nonlinear setting. GPE and Peff(λ) were previously introduced in Moody (1991).", "title": "" }, { "docid": "430026742eb346d5a20e3e2ba34d0544", "text": "High-order neural networks have been shown to have impressive computational, storage, and learning capabilities. This performance is because the order or structure of a high-order neural network can be tailored to the order or structure of a problem. Thus, a neural network designed for a particular class of problems becomes specialized but also very efficient in solving those problems. Furthermore, a priori knowledge, such as geometric invariances, can be encoded in high-order networks. Because this knowledge does not have to be learned, these networks are very efficient in solving problems that utilize this knowledge.", "title": "" } ]
[ { "docid": "97c3860dfb00517f744fd9504c4e7f9f", "text": "The plastic film surface treatment load is considered as a nonlinear capacitive load, which is rather difficult for designing of an inverter. The series resonant inverter (SRI) connected to the load via transformer has been found effective for it's driving. In this paper, a surface treatment based on a pulse density modulation (PDM) and pulse frequency modulation (PFM) hybrid control scheme is described. The PDM scheme is used to regulate the output power of the inverter and the PFM scheme is used to compensate for temperature and other environmental influences on the discharge. Experimental results show that the PDM and PFM hybrid control series-resonant inverter (SRI) makes the corona discharge treatment simple and compact, thus leading to higher efficiency.", "title": "" }, { "docid": "282ace724b3c9a2e8b051499ba5e4bfe", "text": "Fog computing, being an extension to cloud computing has addressed some issues found in cloud computing by providing additional features, such as location awareness, low latency, mobility support, and so on. Its unique features have also opened a way toward security challenges, which need to be focused for making it bug-free for the users. This paper is basically focusing on overcoming the security issues encountered during the data outsourcing from fog client to fog node. We have added Shibboleth also known as security and cross domain access control protocol between fog client and fog node for improved and secure communication between the fog client and fog node. Furthermore to prove whether Shibboleth meets the security requirement needed to provide the secure outsourcing. We have also formally verified the protocol against basic security properties using high level Petri net.", "title": "" }, { "docid": "e2d25382acd23c9431ccd3905d8bf13a", "text": "Temporal segmentation of human motion into plausible motion primitives is central to understanding and building computational models of human motion. Several issues contribute to the challenge of discovering motion primitives: the exponential nature of all possible movement combinations, the variability in the temporal scale of human actions, and the complexity of representing articulated motion. We pose the problem of learning motion primitives as one of temporal clustering, and derive an unsupervised hierarchical bottom-up framework called hierarchical aligned cluster analysis (HACA). HACA finds a partition of a given multidimensional time series into m disjoint segments such that each segment belongs to one of k clusters. HACA combines kernel k-means with the generalized dynamic time alignment kernel to cluster time series data. Moreover, it provides a natural framework to find a low-dimensional embedding for time series. HACA is efficiently optimized with a coordinate descent strategy and dynamic programming. Experimental results on motion capture and video data demonstrate the effectiveness of HACA for segmenting complex motions and as a visualization tool. We also compare the performance of HACA to state-of-the-art algorithms for temporal clustering on data of a honey bee dance. The HACA code is available online.", "title": "" }, { "docid": "8f3c0a8098ae76755b0e2f1dc9cfc8ea", "text": "This paper presents a new approach to structural topology optimization. We represent the structural boundary by a level set model that is embedded in a scalar function of a higher dimension. 
Such level set models are flexible in handling complex topological changes and are concise in describing the boundary shape of the structure. Furthermore, a well-founded mathematical procedure leads to a numerical algorithm that describes a structural optimization as a sequence of motions of the implicit boundaries converging to an optimum solution and satisfying specified constraints. The result is a 3D topology optimization technique that demonstrates outstanding flexibility of handling topological changes, fidelity of boundary representation and degree of automation. We have implemented the algorithm with the use of several robust and efficient numerical techniques of level set methods. The benefit and the advantages of the proposed method are illustrated with several 2D examples that are widely used in the recent literature of topology optimization, especially in the homogenization based methods.", "title": "" }, { "docid": "2d30da3bf7d89e8e515c7896153c2dea", "text": "The Flexible Display Center (FDC) at Arizona State University (ASU) was founded in 2004 as a partnership between academia, industry, and government to collaborate on the development of a new generation of innovative displays and electronic circuits that are flexible, lightweight, low power, and rugged [1]. Due to the increasing need for flexible and lightweight electronic systems, FDC aims to develop materials and structural platforms that allow flexible backplane electronics to be integrated with display components that are economical for mass-production [2]. Currently, FDC is focusing on the incorporation of antenna structures, which can function cooperatively with the other flexible circuit elements. Design of flexible antennas, as a part of flexible electronic circuits, may have a very wide spectrum of applications in military and civilian wireless communication, which can allow people to wear antenna structures instead of carrying them. Hence, flexible and fluidic antennas have a great potential [3]. In this paper, the design, fabrication, simulation and measurements of a bow-tie antenna with a flexible substrate are discussed. The antenna is modeled and simulated with Ansoft HFSS, and the simulations are compared with measurements performed in the Electromagnetic Anechoic Chamber (EMAC) at ASU.", "title": "" }, { "docid": "f72ffa55e939b4c28498075916f937cc", "text": "Compressed sensing is now established as an effective method for dimension reduction when the underlying signals are sparse or compressible with respect to some suitable basis or frame. One important, yet under-addressed problem regarding the compressive acquisition of analog signals is how to perform quantization. This is directly related to the important issues of how “compressed” compressed sensing is (in terms of the total number of bits one ends up using after acquiring the signal) and ultimately whether compressed sensing can be used to obtain compressed representations of suitable signals. In this paper, we propose a concrete and practicable method for performing “analog-to-information conversion”. Following a compressive signal acquisition stage, the proposed method consists of a quantization stage, based on $ \Sigma \Delta $ (sigma-delta) quantization, and a subsequent encoding (compression) stage that fits within the framework of compressed sensing seamlessly.
We prove that, using this method, we can convert analog compressive samples to compressed digital bitstreams and decode using tractable algorithms based on convex optimization. We prove that the proposed analog-to-information converter (AIC) provides a nearly optimal encoding of sparse and compressible signals. Finally, we present numerical experiments illustrating the effectiveness of the proposed AIC.", "title": "" }, { "docid": "2acb0196e14d70717836bf0b37fcb191", "text": "Dictionaries are very useful objects for data analysis, as they enable a compact representation of large sets of objects through the combination of atoms. Dictionary-based techniques have also particularly benefited from the recent advances in machine learning, which has allowed for data-driven algorithms to take advantage of the redundancy in the input dataset and discover relations between objects without human supervision or hard-coded rules. Despite the success of dictionary-based techniques on a wide range of tasks in geometric modeling and geometry processing, the literature is missing a principled state-of-the-art review of the current knowledge in this field. To fill this gap, we provide in this survey an overview of data-driven dictionary-based methods in geometric modeling. We structure our discussion by application domain: surface reconstruction, compression, and synthesis. Contrary to previous surveys, we place special emphasis on dictionary-based methods suitable for 3D data synthesis, with applications in geometric modeling and design. Our ultimate goal is to highlight the fact that these techniques can be used to combine the data-driven paradigm with design intent to synthesize new plausible objects with minimal human intervention. This is the main motivation to restrict the scope of the present survey to techniques handling point clouds and meshes, making use of dictionaries whose definition depends on the input data, and enabling shape reconstruction or synthesis through the combination of atoms.", "title": "" }, { "docid": "7fd48dcff3d5d0e4bfccc3be67db8c00", "text": "Criollo cacao (Theobroma cacao ssp. cacao) was cultivated by the Mayas over 1500 years ago. It has been suggested that Criollo cacao originated in Central America and that it evolved independently from the cacao populations in the Amazon basin. Cacao populations from the Amazon basin are included in the second morphogeographic group: Forastero, and assigned to T. cacao ssp. sphaerocarpum. To gain further insight into the origin and genetic basis of Criollo cacao from Central America, RFLP and microsatellite analyses were performed on a sample that avoided mixing pure Criollo individuals with individuals classified as Criollo but which might have been introgressed with Forastero genes. We distinguished these two types of individuals as Ancient and Modern Criollo. In contrast to previous studies, Ancient Criollo individuals, formerly classified as ‘wild’, were found to form a closely related group together with Ancient Criollo individuals from South America. The Ancient Criollo trees were also closer to Colombian-Ecuadorian Forastero individuals than these Colombian-Ecuadorian trees were to other South American Forastero individuals. RFLP and microsatellite analyses revealed a high level of homozygosity and significantly low genetic diversity within the Ancient Criollo group.
The results suggest that the Ancient Criollo individuals represent the original Criollo group. The results also imply that this group does not represent a separate subspecies and that it probably originated from a few individuals in South America that may have been spread by man within Central America.", "title": "" }, { "docid": "061e91fba7571b8e601b54e1cfc1d71e", "text": "The training of medical image analysis systems using machine learning approaches follows a common script: collect and annotate a large dataset, train the classifier on the training set, and test it on a hold-out test set. This process bears no direct resemblance to radiologist training, which is based on solving a series of tasks of increasing difficulty, where each task involves the use of significantly smaller datasets than those used in machine learning. In this paper, we propose a novel training approach inspired by how radiologists are trained. In particular, we explore the use of meta-training that models a classifier based on a series of tasks. Tasks are selected using teacher-student curriculum learning, where each task consists of simple classification problems containing small training sets. We hypothesize that our proposed meta-training approach can be used to pre-train medical image analysis models. This hypothesis is tested on the automatic breast screening classification from DCE-MRI trained with weakly labeled datasets. The classification performance achieved by our approach is shown to be the best in the field for that application, compared to state-of-the-art baseline approaches: DenseNet, multiple instance learning and multi-task learning.", "title": "" }, { "docid": "9233195d4f25e21a4de1a849d8f47932", "text": "For the first time, a DRAM device composed of 6F² open-bit-line memory cells with 80nm feature size is developed. Adopting the 6F² scheme instead of the customary 8F² scheme made it possible to reduce chip size by up to nearly 20%. However, converting the cell scheme to 6F² accompanies some difficulties such as a decrease of the cell capacitance and a more compact core layout. To overcome these strict obstacles, which originally stem from the conversion of the cell scheme to 6F², a TIT structure with AHO (AfO/AlO/AfO) is adopted for higher cell capacitance, and a bar-type contact is adopted for adjusting to the compact core layout. Moreover, to lower the cell Vth to a level suitable for low-power operation, the novel concept S-RCAT (sphere-shaped recess-channel-array transistor) is introduced. It is an improved scheme of the RCAT used in the 8F² scheme. By adopting S-RCAT, Vth can be lowered, and SW and DIBL are improved. Additionally, the data retention time characteristic can be improved.", "title": "" }, { "docid": "869889e8be00663e994631b17061479b", "text": "In this study we approach the problem of distinguishing general profanity from hate speech in social media, something which has not been widely considered. Using a new dataset annotated specifically for this task, we employ supervised classification along with a set of features that includes n-grams, skip-grams and clustering-based word representations. We apply approaches based on single classifiers as well as more advanced ensemble classifiers and stacked generalization, achieving the best result of 80% accuracy for this 3-class classification task.
Analysis of the results reveals that discriminating hate speech and profanity is not a simple task, which may require features that capture a deeper understanding of the text not always possible with surface n-grams. The variability of gold labels in the annotated data, due to differences in the subjective adjudications of the annotators, is also an issue. Other directions for future work are discussed.", "title": "" }, { "docid": "e3853e259c3ae6739dcae3143e2074a8", "text": "A new reference collection of patent documents for training and testing automated categorization systems is established and described in detail. This collection is tailored for automating the attribution of international patent classification codes to patent applications and is made publicly available for future research work. We report the results of applying a variety of machine learning algorithms to the automated categorization of English-language patent documents. This procedure involves a complex hierarchical taxonomy, within which we classify documents into 114 classes and 451 subclasses. Several measures of categorization success are described and evaluated. We investigate how best to resolve the training problems related to the attribution of multiple classification codes to each patent document.", "title": "" }, { "docid": "a6d0c3a9ca6c2c4561b868baa998dace", "text": "Diprosopus or duplication of the lower lip and mandible is a very rare congenital anomaly. We report this unusual case occurring in a girl who presented to our hospital at the age of 4 months. Surgery and problems related to this anomaly are discussed.", "title": "" }, { "docid": "c9c9af3680df50d4dd72c73c90a41893", "text": "BACKGROUND\nVideo games provide extensive player involvement for large numbers of children and adults, and thereby provide a channel for delivering health behavior change experiences and messages in an engaging and entertaining format.\n\n\nMETHOD\nTwenty-seven articles were identified on 25 video games that promoted health-related behavior change through December 2006.\n\n\nRESULTS\nMost of the articles demonstrated positive health-related changes from playing the video games. Variability in what was reported about the games and measures employed precluded systematically relating characteristics of the games to outcomes. Many of these games merged the immersive, attention-maintaining properties of stories and fantasy, the engaging properties of interactivity, and behavior-change technology (e.g., tailored messages, goal setting). Stories in video games allow for modeling, vicarious identifying experiences, and learning a story's \"moral,\" among other change possibilities.\n\n\nCONCLUSIONS\nResearch is needed on the optimal use of game-based stories, fantasy, interactivity, and behavior change technology in promoting health-related behavior change.", "title": "" }, { "docid": "6494669dc199660c50e22d4eb62646fb", "text": "Recent advances in the instrumentation technology of sensory substitution have presented new opportunities to develop systems for compensation of sensory loss. In sensory substitution (e.g. of sight or vestibular function), information from an artificial receptor is coupled to the brain via a human-machine interface. The brain is able to use this information in place of that usually transmitted from an intact sense organ. Both auditory and tactile systems show promise for practical sensory substitution interface sites. 
This research provides experimental tools for examining brain plasticity and has implications for perceptual and cognition studies more generally.", "title": "" }, { "docid": "13a8cd624d30c0bb022eed43c69af565", "text": "This paper presents a design procedure of an ultra wideband three section slot coupled hybrid coupler employing a parametric analysis of different design parameters. The coupler configuration is composed of a modified hexagonal shape at the top and bottom conductor plane along with a hexagonal slot at the common ground plane. The coupler performance for different design parameters is studied through full wave simulations. A final design providing a return loss and isolation better than 20dB, an amplitude imbalance between output ports of less than 0.9dB and a phase imbalance of ±1.9° across the 3.1-10.6 GHz band is confirmed.", "title": "" }, { "docid": "1778e5f82da9e90cbddfa498d68e461e", "text": "Today’s business environment is characterized by fast and unexpected changes, many of which are driven by technological advancement. In such environment, the ability to respond effectively and adapt to the new requirements is not only desirable but essential to survive. Comprehensive and quick understanding of intricacies of market changes facilitates firm’s faster and better response. Two concepts contribute to the success of this scenario; organizational agility and business intelligence (BI). As of today, despite BI’s capabilities to foster organizational agility and consequently improve organizational performance, a clear link between BI and organizational agility has not been established. In this paper we argue that BI solutions have the potential to be facilitators for achieving agility. We aim at showing how BI capabilities can help achieve agility at operational, portfolio, and strategic levels.", "title": "" }, { "docid": "553e476ad6a0081aed01775f995f4d16", "text": "This document describes the findings of the Second Workshop on Neural Machine Translation and Generation, held in concert with the annual conference of the Association for Computational Linguistics (ACL 2018). First, we summarize the research trends of papers presented in the proceedings, and note that there is particular interest in linguistic structure, domain adaptation, data augmentation, handling inadequate resources, and analysis of models. Second, we describe the results of the workshop’s shared task on efficient neural machine translation (NMT), where participants were tasked with creating NMT systems that are both accurate and efficient.", "title": "" }, { "docid": "570e48e839bd2250473d4332adf2b53f", "text": "Autologous stem cell transplant can be a curative therapy to restore normal hematopoiesis after myeloablative treatments in patients with malignancies. Aim: To evaluate the effect of rehabilitation program for caregivers about patients’ post autologous bone marrow transplantation Research Design: A quasi-experimental design was used. Setting: The study was conducted in Sheikh Zayed Specialized Hospital at Oncology Outpatient Clinic of Bone Marrow Transplantation Unit. Sample: A purposive sample comprised; a total number of 60 patients, their age ranged from 21 to 50 years, free from any other chronic disease and the caregivers are living with the patients in the same home. Tools: Two tools were used for data collection. 
First tool: An interviewing autologous bone marrow transplantation questionnaire for the patients and their caregivers was divided into five parts, including: socio-demographic data, knowledge of caregivers regarding autologous bone marrow transplant and side effects of chemotherapy, family caregivers’ practices according to their providing care related to post bone marrow transplantation, signs and symptoms, activities of daily living for patients and home environmental sanitation for the patients. Second tool: deals with physical examination assessment of the patients from head to toe. Results: 61.7% of patients were aged 30 to <40 years, and 68.3% were female. Regarding the type of relationship with the patients, 48.3% were the mother, 58.3% of patients who underwent autologous bone marrow transplantation had a sanitary environment and there were highly statistically significant differences between caregivers’ knowledge and practices pre/post program. Conclusion: There were highly statistically significant differences between family caregivers' total knowledge, their practices, as well as their total caregivers’ knowledge, practices and patients’ independency level pre/post rehabilitation program. Recommendations: Counseling should be provided for family caregivers of patients who underwent autologous bone marrow transplantation, and rehabilitation programs for the patients and their caregivers should be performed properly during the rehabilitation period at cancer hospitals such as 57357 Hospital and The National Cancer Institute in Cairo.", "title": "" }, { "docid": "1564a94998151d52785dd0429b4ee77d", "text": "Location management refers to the problem of updating and searching the current location of mobile nodes in a wireless network. To make it efficient, the sum of update costs of the location database must be minimized. Previous work relying on fixed location databases is unable to fully exploit the knowledge of user mobility patterns in the system so as to achieve this minimization. The study presents an intelligent location management approach which builds on interactions between intelligent information systems and knowledge-base technologies, so we can dynamically change the user patterns and reduce the transitions between the VLR and HLR. The study provides algorithms with the ability to handle location registration and call delivery.", "title": "" } ]
scidocsrr
60cd53823c660a62dc62a36d1925ffab
Healthcare Insurance Fraud Detection Leveraging Big Data Analytics
[ { "docid": "a0f8af71421d484cbebb550a0bf59a6d", "text": "researchers and practitioners doing work in these three related areas. Risk management, fraud detection, and intrusion detection all involve monitoring the behavior of populations of users (or their accounts) to estimate, plan for, avoid, or detect risk. In his paper, Til Schuermann (Oliver, Wyman, and Company) categorizes risk into market risk, credit risk, and operating risk (or fraud). Similarly, Barry Glasgow (Metropolitan Life Insurance Co.) discusses inherent risk versus fraud. This workshop focused primarily on what might loosely be termed “improper behavior,” which includes fraud, intrusion, delinquency, and account defaulting. However, Glasgow does discuss the estimation of “inherent risk,” which is the bread and butter of insurance firms. Problems of predicting, preventing, and detecting improper behavior share characteristics that complicate the application of existing AI and machine-learning technologies. In particular, these problems often have or require more than one of the following that complicate the technical problem of automatically learning predictive models: large volumes of (historical) data, highly skewed distributions (“improper behavior” occurs far less frequently than “proper behavior”), changing distributions (behaviors change over time), widely varying error costs (in certain contexts, false positive errors are far more costly than false negatives), costs that change over time, adaptation of undesirable behavior to detection techniques, changing patterns of legitimate behavior, the trad■ The 1997 AAAI Workshop on AI Approaches to Fraud Detection and Risk Management brought together over 50 researchers and practitioners to discuss problems of fraud detection, computer intrusion detection, and risk scoring. This article presents highlights, including discussions of problematic issues that are common to these application domains, and proposed solutions that apply a variety of AI techniques.", "title": "" } ]
[ { "docid": "ac56668cdaad25e9df31f71bc6d64995", "text": "Hand-crafted illustrations are often more effective than photographs for conveying the shape and important features of an object, but they require expertise and time to produce. We describe an image compositing system and user interface that allow an artist to quickly and easily create technical illustrations from a set of photographs of an object taken from the same point of view under variable lighting conditions. Our system uses a novel compositing process in which images are combined using spatially-varying light mattes, enabling the final lighting in each area of the composite to be manipulated independently. We describe an interface that provides for the painting of local lighting effects (e.g. shadows, highlights, and tangential lighting to reveal texture) directly onto the composite. We survey some of the techniques used in illustration and lighting design to convey the shape and features of objects and describe how our system can be used to apply these techniques.", "title": "" }, { "docid": "f3b76c5ad1841a56e6950f254eda8b17", "text": "Due to the complexity of human languages, most of sentiment classification algorithms are suffered from a huge-scale dimension of vocabularies which are mostly noisy and redundant. Deep Belief Networks (DBN) tackle this problem by learning useful information in input corpus with their several hidden layers. Unfortunately, DBN is a time-consuming and computationally expensive process for large-scale applications. In this paper, a semi-supervised learning algorithm, called Deep Belief Networks with Feature Selection (DBNFS) is developed. Using our chi-squared based feature selection, the complexity of the vocabulary input is decreased since some irrelevant features are filtered which makes the learning phase of DBN more efficient. The experimental results of our proposed DBNFS shows that the proposed DBNFS can achieve higher classification accuracy and can speed up training time compared with others well-known semi-supervised learning algorithms.", "title": "" }, { "docid": "0e068a4e7388ed456de4239326eb9b08", "text": "The Web so far has been incredibly successful at delivering information to human users. So successful actually, that there is now an urgent need to go beyond a browsing human. Unfortunately, the Web is not yet a well organized repository of nicely structured documents but rather a conglomerate of volatile HTML pages. To address this problem, we present the World Wide Web Wrapper Factory (W4F), a toolkit for the generation of wrappers for Web sources, that offers: (1) an expressive language to specify the extraction of complex structures from HTML pages; (2) a declarative mapping to various data formats like XML; (3) some visual tools to make the engineering of wrappers faster and easier.", "title": "" }, { "docid": "b07978a3871f0ba26fd6d1eb568b1b0a", "text": "This paper presents an intermodulation distortion measurement system based on automated feedforward cancellation that achieves 113 dB of broadband spurious-free dynamic range for discrete tone separations down to 100 Hz. For 1-Hz tone separation, the dynamic range is 106 dB, limited by carrier phase noise. A single-tone cancellation formula is developed requiring only the power of the probing signal and the power of the combined probe and cancellation signal so that the phase shift required for cancellation can be predicted. The technique is applied to a two-path feedforward cancellation system in a bridge configuration. 
The effects of reflected signals and of group delay on system performance is discussed. Spurious frequency content and interchannel coupling are analyzed with respect to system linearity. Feedforward cancellation and consideration of electromagnetic radiation coupling and reverse-wave isolation effects extends the dynamic range of spectrum and vector analyzers by at least 40 dB. Application of the technique to the measurement of correlated and uncorrelated nonlinear distortion of an amplified wideband code-division multiple-access signal is presented.", "title": "" }, { "docid": "5e9cc7e7933f85b6cffe103c074105d4", "text": "Substrate-integrated waveguides (SIWs) maintain the advantages of planar circuits (low loss, low profile, easy manufacturing, and integration in a planar circuit board) and improve the quality factor of filter resonators. Empty substrate-integrated waveguides (ESIWs) substantially reduce the insertion losses, because waves propagate through air instead of a lossy dielectric. The first ESIW used a simple tapering transition that cannot be used for thin substrates. A new transition has recently been proposed, which includes a taper also in the microstrip line, not only inside the ESIW, and so it can be used for all substrates, although measured return losses are only 13 dB. In this letter, the cited transition is improved by placing via holes that prevent undesired radiation, as well as two holes that help to ensure good accuracy in the mechanization of the input iris, thus allowing very good return losses (over 20 dB) in the measured results. A design procedure that allows the successful design of the proposed new transition is also provided. A back-to-back configuration of the improved new transition has been successfully manufactured and measured.", "title": "" }, { "docid": "fd94c0639346e760cf2c19aab7847270", "text": "During the last two decades, a great number of applications for the dc-to-dc converters have been reported [1]. Many applications are found in computers, telecommunications, aeronautics, commercial, and industrial applications. The basic topologies buck, boost, and buck-boost, are widely used in the dc-to-dc conversion. These converters, as well as other converters, provide low voltages and currents for loads at a constant switching frequency. In recent years, there has been a need for wider conversion ratios with a corresponding reduction in size and weight. For example, advances in the field of semiconductors have motivated the development of new integrated circuits, which require 3.3 or 1.5 V power supplies. The automotive industry is moving from 12 V (14 V) to 36 V (42 V), the above is due to the electric-electronic load in automobiles has been growing rapidly and is starting to exceed the practical capacity of present-day electrical systems. Today, the average 12 V (14 V) load is between 750 W to 1 kW, while the peak load can be 2 kW, depending of the type of car and its accessories. By 2005, peak loads above 2 kW, even as high as 12 kW, will be common. To address this challenge, it is widely agreed that a", "title": "" }, { "docid": "edaeccfe6263c1625765574443b79e68", "text": "The elongated structure of the hippocampus is critically involved in brain functions of profound importance. The segregation of functions along the longitudinal (septotemporal or dorsoventral) axis of the hippocampus is a slowly developed concept and currently is a widely accepted idea. 
The segregation of neuroanatomical connections along the hippocampal long axis can provide a basis for the interpretation of the functional segregation. However, an emerging and growing body of data strongly suggests the existence of endogenous diversification in the properties of the local neural network along the long axis of the hippocampus. In particular, recent electrophysiological research provides compelling evidence demonstrating constitutively increased network excitability in the ventral hippocampus with important implications for the endogenous initiation and propagation of physiological hippocampal oscillations yet, under favorable conditions it can also drive the local network towards hyperexcitability. In addition, important specializations in the properties of dorsal and ventral hippocampal synapses may support an optimal signal processing that contributes to the effective execution of the distinct functional roles played by the two hippocampal segments.", "title": "" }, { "docid": "f102cc8d3ba32f9a16f522db25143e2d", "text": "As technology advances man-machine interaction is becoming an unavoidable activity. So an effective method of communication with machines enhances the quality of life. If it is able to operate a system by simply commanding, then it will be a great blessing to the users. Speech is the most effective mode of communication used by humans. So by introducing voice user interfaces the interaction with the machines can be made more user friendly. This paper implements a speaker independent speech recognition system for limited vocabulary Malayalam Words in Raspberry Pi. Mel Frequency Cepstral Coefficients (MFCC) are the features for classification and this paper proposes Radial Basis Function (RBF) kernel in Support Vector Machine (SVM) classifier gives better accuracy in speech recognition than linear kernel. An overall accuracy of 91.8% is obtained with this work.", "title": "" }, { "docid": "9c80e8db09202335f427ebf02659eac3", "text": "The present paper reviews and critiques studies assessing the relation between sleep patterns, sleep quality, and school performance of adolescents attending middle school, high school, and/or college. The majority of studies relied on self-report, yet the researchers approached the question with different designs and measures. Specifically, studies looked at (1) sleep/wake patterns and usual grades, (2) school start time and phase preference in relation to sleep habits and quality and academic performance, and (3) sleep patterns and classroom performance (e.g., examination grades). The findings strongly indicate that self-reported shortened total sleep time, erratic sleep/wake schedules, late bed and rise times, and poor sleep quality are negatively associated with academic performance for adolescents from middle school through the college years. Limitations of the current published studies are also discussed in detail in this review.", "title": "" }, { "docid": "8fa135e5d01ba2480dea4621ceb1e9f4", "text": "With the advent of Grid and application technologies, scientists and engineers are building more and more complex applications to manage and process large data sets, and execute scientific experiments on distributed resources. Such application scenarios require means for composing and executing complex workflows. Therefore, many efforts have been made towards the development of workflow management systems for Grid computing. 
In this paper, we propose a taxonomy that characterizes and classifies various approaches for building and executing workflows on Grids. The taxonomy not only highlights the design and engineering similarities and differences of state-of-the-art in Grid workflow systems, but also identifies the areas that need further research.", "title": "" }, { "docid": "49b0ba019f6f968804608aeacec2a959", "text": "In this paper, we introduce a novel problem of audio-visual event localization in unconstrained videos. We define an audio-visual event as an event that is both visible and audible in a video segment. We collect an Audio-Visual Event (AVE) dataset to systemically investigate three temporal localization tasks: supervised and weakly-supervised audio-visual event localization, and cross-modality localization. We develop an audio-guided visual attention mechanism to explore audio-visual correlations, propose a dual multimodal residual network (DMRN) to fuse information over the two modalities, and introduce an audio-visual distance learning network to handle the cross-modality localization. Our experiments support the following findings: joint modeling of auditory and visual modalities outperforms independent modeling, the learned attention can capture semantics of sounding objects, temporal alignment is important for audio-visual fusion, the proposed DMRN is effective in fusing audio-visual features, and strong correlations between the two modalities enable cross-modality localization.", "title": "" }, { "docid": "c11b77f1392c79f4a03f9633c8f97f4d", "text": "The paper introduces and discusses a concept of syntactic n-grams (sn-grams) that can be applied instead of traditional n-grams in many NLP tasks. Sn-grams are constructed by following paths in syntactic trees, so sngrams allow bringing syntactic knowledge into machine learning methods. Still, previous parsing is necessary for their construction. We applied sn-grams in the task of authorship attribution for corpora of three and seven authors with very promising results.", "title": "" }, { "docid": "dcee61dad66f59b2450a3e154726d6b1", "text": "Mussels are marine organisms that have been mimicked due to their exceptional adhesive properties to all kind of surfaces, including rocks, under wet conditions. The proteins present on the mussel's foot contain 3,4-dihydroxy-l-alanine (DOPA), an amino acid from the catechol family that has been reported by their adhesive character. Therefore, we synthesized a mussel-inspired conjugated polymer, modifying the backbone of hyaluronic acid with dopamine by carbodiimide chemistry. Ultraviolet-visible (UV-vis) spectroscopy and nuclear magnetic resonance (NMR) techniques confirmed the success of this modification. Different techniques have been reported to produce two-dimensional (2D) or three-dimensional (3D) systems capable to support cells and tissue regeneration; among others, multilayer systems allow the construction of hierarchical structures from nano- to macroscales. In this study, the layer-by-layer (LbL) technique was used to produce freestanding multilayer membranes made uniquely of chitosan and dopamine-modified hyaluronic acid (HA-DN). The electrostatic interactions were found to be the main forces involved in the film construction. The surface morphology, chemistry, and mechanical properties of the freestanding membranes were characterized, confirming the enhancement of the adhesive properties in the presence of HA-DN. 
The MC3T3-E1 cell line was cultured on the surface of the membranes, demonstrating the potential of these freestanding multilayer systems to be used for bone tissue engineering.", "title": "" }, { "docid": "6da745a03e290f312f7cd2960ebe54b8", "text": "INTRODUCTION\nThe aim of effective clinical handover is seamless transfer of information between care providers. Handover between paramedics and the trauma team provides challenges in ensuring that information loss does not occur. Handover is often time-pressured and paramedics' clinical notes are often delayed in reaching the trauma team. Documentation by trauma team members must be accurate. This study evaluated information loss and discordance as patients were transferred from the scene of an incident to the Trauma Centre.\n\n\nMETHODS\nTwenty-five trauma patients presenting by ambulance to a tertiary Emergency and Trauma Centre were randomly selected. Audiotaped (pre-hospital) and videotaped (in-hospital) handover was compared with written documentation.\n\n\nRESULTS\nIn the pre-hospital setting 171/228 (75%) of data items handed over by paramedics to the trauma team were documented and in the in-hospital handover 335/498 (67%) of information was documented. Information least likely to be documented by trauma team members (1) in the pre-hospital setting related to treatment provided and (2) in the in-hospital setting related to signs and symptoms. While 79% of information was subsequently documented by paramedics, 9% (n=59) of information was not documented either by trauma team members or paramedics and constitutes information loss. Information handed over was not congruent with documentation on seven occasions. Discrepancies included a patient's allergy status and sites of injury (n=2). Demographic details were most likely to be documented but not handed over by paramedics.\n\n\nCONCLUSION\nBy documenting where deficits in handover occur we can identify points of vulnerability and strategies to capture this information.", "title": "" }, { "docid": "6b1a1c36fa583391eb8b142368837bc3", "text": "In this paper, we present design and simulation of a compact grid array microstrip patch antenna. In the design of antenna a RT/duroid 5880 substrate having relative permittivity, thickness and loss tangent of 2.2, 1.57 mm and 0.0009 respectively, has been used. The simulated antenna performance was obtained by Computer Simulation Technology Microwave Studio (CST MWS). The antenna performance was investigated by analyzing its return loss (S11), radiation pattern, voltage standing wave ratio (VSWR) parameters. The simulated S11 parameter has shown that antenna operates for Industrial, Scientific and Medical (ISM) band and Wireless Body Area Network (WBAN) applications at 2.45 GHZ ISM, 6.25 GHZ, 8.25 GHZ and 10.45 GHZ ultra-wideband (UWB) four resonance frequencies with bandwidth > 500MHz (S11 < −10dB). The antenna directivity increased towards higher frequencies. The VSWR of resonance frequency bands is also achieved succesfully less than 2. It has been observed that the simulation result values of the antenna are suitable for WBAN applications.", "title": "" }, { "docid": "8b77db6a84911c1e4d6eeb6859e16f87", "text": "As portable electronic devices are widely used in wireless communication, analysis of RF interference becomes an essential step for IC designers. In order to test electromagnetic compatibility (EMC) of IC operating at high frequencies, IC stripline method is proposed in IEC standard. 
This method can be applied up to 3 GHz and covers the testing of ICs and small components. This paper presents simulation results of the open version of the IC stripline in a 3D EM solver. Also, the coupling effect of the IC stripline method is analyzed with S-parameter results. The distributed lumped-element equivalent model is presented for explaining the coupling relation between the IC stripline and the package. This model can be used for quick analysis of the EMC of ICs.", "title": "" }, { "docid": "325bbe7b00513793a1daacdc627f1974", "text": "Perioperative coagulation management is a complex task that has a significant impact on the perioperative journey of patients. Anaesthesia providers play a critical role in the decision-making on transfusion and/or haemostatic therapy in the surgical setting. Various tests are available in identifying coagulation abnormalities in the perioperative period. While the rapidly available bedside haemoglobin measurements can guide the transfusion of red blood cells, blood product administration is guided by many in vivo and in vitro tests. The introduction of newer anticoagulant medications and the implementation of the modified in vivo coagulation cascade have given a new dimension to the field of perioperative transfusion medicine. A proper understanding of the application and interpretation of the coagulation tests is vital for a good perioperative outcome.", "title": "" }, { "docid": "9f5d77e73fb63235a6e094d437f1be7e", "text": "An improved zero-voltage and zero-current-switching (ZVZCS) full bridge dc-dc converter is proposed based on phase shift control. With an auxiliary center tapped rectifier at the secondary side, an auxiliary voltage source is applied to reset the primary current of the transformer winding. Therefore, zero-voltage switching for the leading leg switches and zero-current switching for the lagging leg switches can be achieved, respectively, without any increase of current and voltage stresses. Since the primary current in the circulating interval for the phase shift full bridge converter is eliminated, the conduction loss in primary switches is reduced. A 1 kW prototype is made to verify the theoretical analysis.", "title": "" }, { "docid": "390e9e2bfb8e94d70d1dbcfbede6dd46", "text": "Modern software-based services are implemented as distributed systems with complex behavior and failure modes. Many large tech organizations are using experimentation to verify such systems' reliability. Netflix engineers call this approach chaos engineering. They've determined several principles underlying it and have used it to run experiments. This article is part of a theme issue on DevOps.", "title": "" }, { "docid": "8a20feb22ce8797fa77b5d160919789c", "text": "We propose the concept of hardware-software co-simulation for image processing using the Xilinx System Generator. Recent advances in synthesis tools for SIMULINK suggest a feasible high-level approach to algorithm implementation for embedded DSP systems. An efficient FPGA-based hardware design is presented for enhancement of color and grey scale images in image and video processing. The top-model-based visual development process of SIMULINK facilitates host-side simulation and validation, as well as synthesis of target-specific code; furthermore, legacy code written in MATLAB or ANSI C can be reused in custom blocks. However, the code generated for DSP platforms is often not very efficient. We have implemented the image processing applications on FPGA so that they can be easily designed.", "title": "" } ]
scidocsrr
f8a8f28015bc1794573d988f067cc1e4
Crowdsourced semantic annotation of scientific publications and tabular data in PDF
[ { "docid": "ffc09744f2668e52ce84ac28887fd5fe", "text": "As the number of research papers available on the Web has increased enormously over the years, paper recommender systems have been proposed to help researchers on automatically finding works of interest. The main problem with the current approaches is that they assume that recommending algorithms are provided with a rich set of evidence (e.g., document collections, citations, profiles) which is normally not widely available. In this paper we propose a novel source independent framework for research paper recommendation. The framework requires as input only a single research paper and generates several potential queries by using terms in that paper, which are then submitted to existing Web information sources that hold research papers. Once a set of candidate papers for recommendation is generated, the framework applies content-based recommending algorithms to rank the candidates in order to recommend the ones most related to the input paper. This is done by using only publicly available metadata (i.e., title and abstract). We evaluate our proposed framework by performing an extensive experimentation in which we analyzed several strategies for query generation and several ranking strategies for paper recommendation. Our results show that good recommendations can be obtained with simple and low cost strategies.", "title": "" }, { "docid": "9eea7c3b36bf91ae439e84a051a190bb", "text": "Recently practical approaches for managing and supporting the life-cycle of semantic content on the Web of Data made quite some progress. However, the currently least developed aspect of the semantic content life-cycle is the user-friendly manual and semi-automatic creation of rich semantic content. In this paper we present the RDFaCE approach for combining WYSIWYG text authoring with the creation of rich semantic annotations. Our approach is based on providing four different views to the content authors: a classical WYSIWYG view, a WYSIWYM (What You See Is What You Mean) view making the semantic annotations visible, a fact view and the respective HTML/RDFa source code view. The views are synchronized such that changes made in one of the views automatically update the others. They provide different means of semantic content authoring for the different personas involved in the content creation life-cycle. For bootstrapping the semantic annotation process we integrate five different text annotation services. We evaluate their accuracy and empirically show that a combination of them yields superior results.", "title": "" } ]
[ { "docid": "94ea3cbf3df14d2d8e3583cb4714c13f", "text": "The emergence of computers as an essential tool in scientific research has shaken the very foundations of differential modeling. Indeed, the deeply-rooted abstraction of smoothness, or differentiability, seems to inherently clash with a computer's ability of storing only finite sets of numbers. While there has been a series of computational techniques that proposed discretizations of differential equations, the geometric structures they are supposed to simulate are often lost in the process.", "title": "" }, { "docid": "3f1a2efdff6be4df064f3f5b978febee", "text": "D-galactose injection has been shown to induce many changes in mice that represent accelerated aging. This mouse model has been widely used for pharmacological studies of anti-aging agents. The underlying mechanism of D-galactose induced aging remains unclear, however, it appears to relate to glucose and 1ipid metabolic disorders. Currently, there has yet to be a study that focuses on investigating gene expression changes in D-galactose aging mice. In this study, integrated analysis of gas chromatography/mass spectrometry-based metabonomics and gene expression profiles was used to investigate the changes in transcriptional and metabolic profiles in mimetic aging mice injected with D-galactose. Our findings demonstrated that 48 mRNAs were differentially expressed between control and D-galactose mice, and 51 potential biomarkers were identified at the metabolic level. The effects of D-galactose on aging could be attributed to glucose and 1ipid metabolic disorders, oxidative damage, accumulation of advanced glycation end products (AGEs), reduction in abnormal substance elimination, cell apoptosis, and insulin resistance.", "title": "" }, { "docid": "b72f4554f2d7ac6c5a8000d36a099e67", "text": "Sign Language Recognition (SLR) has been an active research field for the last two decades. However, most research to date has considered SLR as a naive gesture recognition problem. SLR seeks to recognize a sequence of continuous signs but neglects the underlying rich grammatical and linguistic structures of sign language that differ from spoken language. In contrast, we introduce the Sign Language Translation (SLT) problem. Here, the objective is to generate spoken language translations from sign language videos, taking into account the different word orders and grammar. We formalize SLT in the framework of Neural Machine Translation (NMT) for both end-to-end and pretrained settings (using expert knowledge). This allows us to jointly learn the spatial representations, the underlying language model, and the mapping between sign and spoken language. To evaluate the performance of Neural SLT, we collected the first publicly available Continuous SLT dataset, RWTH-PHOENIX-Weather 2014T1. It provides spoken language translations and gloss level annotations for German Sign Language videos of weather broadcasts. Our dataset contains over .95M frames with >67K signs from a sign vocabulary of >1K and >99K words from a German vocabulary of >2.8K. We report quantitative and qualitative results for various SLT setups to underpin future research in this newly established field. 
The upper bound for translation performance is calculated at 19.26 BLEU-4, while our end-to-end frame-level and gloss-level tokenization networks were able to achieve 9.58 and 18.13 respectively.", "title": "" }, { "docid": "9042a72bc42bdfd3b2f2a1fc6145b7f1", "text": "In this paper we introduce a framework for learning from RDF data using graph kernels that count substructures in RDF graphs, which systematically covers most of the existing kernels previously defined and provides a number of new variants. Our definitions include fast kernel variants that are computed directly on the RDF graph. To improve the performance of these kernels we detail two strategies. The first strategy involves ignoring the vertex labels that have a low frequency among the instances. Our second strategy is to remove hubs to simplify the RDF graphs. We test our kernels in a number of classification experiments with real-world RDF datasets. Overall the kernels that count subtrees show the best performance. However, they are closely followed by simple bag of labels baseline kernels. The direct kernels substantially decrease computation time, while keeping performance the same. For the walks counting kernel the decrease in computation time of the approximation is so large that it thereby becomes a computationally viable kernel to use. Ignoring low frequency labels improves the performance for all datasets. The hub removal algorithm increases performance on two out of three of our smaller datasets, but has little impact when used on our larger datasets.", "title": "" }, { "docid": "38ecb51f7fca71bd47248987866a10d2", "text": "Machine Translation has been a topic of research from the past many years. Many methods and techniques have been proposed and developed. However, quality of translation has always been a matter of concern. In this paper, we outline a target language generation mechanism with the help of language English-Sanskrit language pair using rule based machine translation technique [1]. Rule Based Machine Translation provides high quality translation and requires in depth knowledge of the language apart from real world knowledge and the differences in cultural background and conceptual divisions. A string of English sentence can be translated into string of Sanskrit ones. The methodology for design and development is implemented in the form of software named as “EtranS”. KeywordsAnalysis, Machine translation, translation theory, Interlingua, language divergence, Sanskrit, natural language processing.", "title": "" }, { "docid": "eaead3c8ac22ff5088222bb723d8b758", "text": "Discrete-Time Markov Chains (DTMCs) are a widely-used formalism to model probabilistic systems. On the one hand, available tools like PRISM or MRMC offer efficient model checking algorithms and thus support the verification of DTMCs. However, these algorithms do not provide any diagnostic information in the form of counterexamples, which are highly important for the correction of erroneous systems. On the other hand, there exist several approaches to generate counterexamples for DTMCs, but all these approaches require the model checking result for completeness. In this paper we introduce a model checking algorithm for DTMCs that also supports the generation of counterexamples. 
Our algorithm, based on the detection and abstraction of strongly connected components, offers abstract counterexamples, which can be interactively refined by the user.", "title": "" }, { "docid": "f8527ea496666ef875805d376fbd2d5d", "text": "The rapid development of computer and robotic technologies in the last decade is giving hope to perform earlier and more accurate diagnoses of the Autism Spectrum Disorder (ASD), and more effective, consistent, and cost-conscious treatment. Besides the reduced cost, the main benefit of using technology to facilitate treatment is that stimuli produced during each session of the treatment can be controlled, which not only guarantees consistency across different sessions, but also makes it possible to focus on a single phenomenon, which is difficult even for a trained professional to perform, and deliver the stimuli according to the treatment plan. In this article, we provide a comprehensive review of research on recent technology-facilitated diagnosis and treat of children and adults with ASD. Different from existing reviews on this topic, which predominantly concern clinical issues, we focus on the engineering perspective of autism studies. All technology facilitated systems used for autism studies can be modeled as human machine interactive systems where one or more participants would constitute as the human component, and a computer-based or a robotic-based system would be the machine component. Based on this model, we organize our review with the following questions: (1) What are presented to the participants in the studies and how are the content and delivery methods enabled by technologies? (2) How are the reactions/inputs collected from the participants in response to the stimuli in the studies? (3) Are the experimental procedure and programs presented to participants dynamically adjustable based on the responses from the participants, and if so, how? and (4) How are the programs assessed?", "title": "" }, { "docid": "5b149ce093d0e546a3e99f92ef1608a0", "text": "Smartphones have been becoming ubiquitous and mobile users are increasingly relying on them to store and handle personal information. However, recent studies also reveal the disturbing fact that users’ personal information is put at risk by (rogue) smartphone applications. Existing solutions exhibit limitations in their capabilities in taming these privacy-violating smartphone applications. In this paper, we argue for the need of a new privacy mode in smartphones. The privacy mode can empower users to flexibly control in a fine-grained manner what kinds of personal information will be accessible to an application. Also, the granted access can be dynamically adjusted at runtime in a fine-grained manner to better suit a user’s needs in various scenarios (e.g., in a different time or location). We have developed a system called TISSA that implements such a privacy mode on Android. The evaluation with more than a dozen of information-leaking Android applications demonstrates its effectiveness and practicality. Furthermore, our evaluation shows that TISSA introduces negligible performance overhead.", "title": "" }, { "docid": "0e5f4253ea4fba9c9c42dd579cbba76c", "text": "Binary code search has received much attention recently due to its impactful applications, e.g., plagiarism detection, malware detection and software vulnerability auditing. 
However, developing an effective binary code search tool is challenging due to the gigantic syntax and structural differences in binaries resulted from different compilers, architectures and OSs. In this paper, we propose BINGO — a scalable and robust binary search engine supporting various architectures and OSs. The key contribution is a selective inlining technique to capture the complete function semantics by inlining relevant library and user-defined functions. In addition, architecture and OS neutral function filtering is proposed to dramatically reduce the irrelevant target functions. Besides, we introduce length variant partial traces to model binary functions in a program structure agnostic fashion. The experimental results show that BINGO can find semantic similar functions across architecture and OS boundaries, even with the presence of program structure distortion, in a scalable manner. Using BINGO, we also discovered a zero-day vulnerability in Adobe PDF Reader, a COTS binary.", "title": "" }, { "docid": "6b04fddb55b413306c0706642c81c621", "text": "With the proliferation of the Internet and World Wide Web applications, people are increasingly interacting with government to citizen (G2C) e-government systems. It is, therefore, important to measure the success of G2C e-government systems from citizens’ perspective. While information systems (IS) success models have received much attention among researchers, little research has been conducted to assess the success of e-government systems. Whether traditional IS success models can be extended to investigating e-government systems success needs to be addressed. This study provides the first empirical test of an adaptation of DeLone and McLean’s IS success model in the context of G2C e-government. The model consists of six dimensions: Information Quality, System Quality, Service Quality, Use, User Satisfaction, and Perceived Net Benefit. Structural equation modeling techniques were applied to data collected by questionnaire from 119 users of G2C e-government systems in Taiwan. Except the link from System Quality to Use, the hypothesized relationships between the six success variables were significantly or marginally supported by the data. The findings of this study provide several important implications for e-government research and practice. This paper concludes by discussing limitations that could be addressed in future studies.", "title": "" }, { "docid": "bd320ffcd9c28e2c3ea2d69039bfdbe9", "text": "3D LiDAR scanners are playing an increasingly important role in autonomous driving as they can generate depth information of the environment. However, creating large 3D LiDAR point cloud datasets with point-level labels requires a significant amount of manual annotation. This jeopardizes the efficient development of supervised deep learning algorithms which are often data-hungry. We present a framework to rapidly create point clouds with accurate point-level labels from a computer game. To our best knowledge, this is the first publication on LiDAR point cloud simulation framework for autonomous driving. The framework supports data collection from both auto-driving scenes and user-configured scenes. Point clouds from auto-driving scenes can be used as training data for deep learning algorithms, while point clouds from user-configured scenes can be used to systematically test the vulnerability of a neural network, and use the falsifying examples to make the neural network more robust through retraining. 
In addition, the scene images can be captured simultaneously in order for sensor fusion tasks, with a method proposed to do automatic registration between the point clouds and captured scene images. We show a significant improvement in accuracy (+9%) in point cloud segmentation by augmenting the training dataset with the generated synthesized data. Our experiments also show by testing and retraining the network using point clouds from user-configured scenes, the weakness/blind spots of the neural network can be fixed.", "title": "" }, { "docid": "a95ca56f64150700cd899a5b0ee1c4b8", "text": "Due to the pervasiveness of digital technologies in all aspects of human lives, it is increasingly unlikely that a digital device is involved as goal, medium or simply ’witness’ of a criminal event. Forensic investigations include recovery, analysis and presentation of information stored in digital devices and related to computer crimes. These activities often involve the adoption of a wide range of imaging and analysis tools and the application of different techniques on different devices, with the consequence that the reconstruction and presentation activities result complicated. This work presents a method, based on Semantic Web technologies, that helps digital investigators to correlate and present information acquired from forensic data, with the aim to get a more valuable reconstruction of events or actions in order to reach case conclusions.", "title": "" }, { "docid": "8292d5c1e13042aa42f1efb60058ef96", "text": "The epithelial-to-mesenchymal transition (EMT) is a vital control point in metastatic breast cancer (MBC). TWIST1, SNAIL1, SLUG, and ZEB1, as key EMT-inducing transcription factors (EMT-TFs), are involved in MBC through different signaling cascades. This updated meta-analysis was conducted to assess the correlation between the expression of EMT-TFs and prognostic value in MBC patients. A total of 3,218 MBC patients from fourteen eligible studies were evaluated. The pooled hazard ratios (HR) for EMT-TFs suggested that high EMT-TF expression was significantly associated with poor prognosis in MBC patients (HRs = 1.72; 95% confidence intervals (CIs) = 1.53-1.93; P = 0.001). In addition, the overexpression of SLUG was the most impactful on the risk of MBC compared with TWIST1 and SNAIL1, which sponsored fixed models. Strikingly, the increased risk of MBC was less associated with ZEB1 expression. However, the EMT-TF expression levels significantly increased the risk of MBC in the Asian population (HR = 2.11, 95% CI = 1.70-2.62) without any publication bias (t = 1.70, P = 0.11). These findings suggest that the overexpression of potentially TWIST1, SNAIL1 and especially SLUG play a key role in the aggregation of MBC treatment as well as in the improvement of follow-up plans in Asian MBC patients.", "title": "" }, { "docid": "39ab78b58f6ace0fc29f18a1c4ed8ebc", "text": "We survey recent developments in the design of large-capacity content-addressable memory (CAM). A CAM is a memory that implements the lookup-table function in a single clock cycle using dedicated comparison circuitry. CAMs are especially popular in network routers for packet forwarding and packet classification, but they are also beneficial in a variety of other applications that require high-speed table lookup. The main CAM-design challenge is to reduce power consumption associated with the large amount of parallel active circuitry, without sacrificing speed or memory density. 
In this paper, we review CAM-design techniques at the circuit level and at the architectural level. At the circuit level, we review low-power matchline sensing techniques and searchline driving approaches. At the architectural level we review three methods for reducing power consumption.", "title": "" }, { "docid": "5a0cfbd3d8401d4d8e437ec1a1e9458f", "text": "Ehlers-Danlos syndrome is an inherited heterogeneous group of connective tissue disorders, characterized by abnormal collagen synthesis, affecting skin, ligaments, joints, blood vessels and other organs. It is one of the oldest known causes of bruising and bleeding and was first described by Hipprocrates in 400 BC. Edvard Ehlers, in 1901, recognized the condition as a distinct entity. In 1908, Henri-Alexandre Danlos suggested that skin extensibility and fragility were the cardinal features of the syndrome. In 1998, Beighton published the classification of Ehlers-Danlos syndrome according to the Villefranche nosology. From the 1960s the genetic make up was identified. Management of bleeding problems associated with Ehlers-Danlos has been slow to progress.", "title": "" }, { "docid": "0bd96a4b417b3482a6accac0f7f927ca", "text": "“Little languages” such as configuration files or HTML documents are commonplace in computing. This paper divides the work of implementing a little language into four parts, and presents a framework which can be used to easily conquer the implementation of each. The pieces of the framework have the unusual property that they may be extended through normal object-oriented means, allowing features to be added to a little language simply by subclassing parts of its compiler.", "title": "" }, { "docid": "6cc99565a0e9081a94e82be93a67482e", "text": "The existing shortage of therapists and caregivers assisting physically disabled individuals at home is expected to increase and become serious problem in the near future. The patient population needing physical rehabilitation of the upper extremity is also constantly increasing. Robotic devices have the potential to address this problem as noted by the results of recent research studies. However, the availability of these devices in clinical settings is limited, leaving plenty of room for improvement. The purpose of this paper is to document a review of robotic devices for upper limb rehabilitation including those in developing phase in order to provide a comprehensive reference about existing solutions and facilitate the development of new and improved devices. In particular the following issues are discussed: application field, target group, type of assistance, mechanical design, control strategy and clinical evaluation. This paper also includes a comprehensive, tabulated comparison of technical solutions implemented in various systems.", "title": "" }, { "docid": "886df1aff444a120bd56a85fa4f53472", "text": "Acoustic Event Classification (AEC) has become a significant task for machines to perceive the surrounding auditory scene. However, extracting effective representations that capture the underlying characteristics of the acoustic events is still challenging. Previous methods mainly focused on designing the audio features in a ‘hand-crafted’ manner. Interestingly, data-learnt features have been recently reported to show better performance. Up to now, these were only considered on the frame-level. In this paper, we propose an unsupervised learning framework to learn a vector representation of an audio sequence for AEC. 
This framework consists of a Recurrent Neural Network (RNN) encoder and a RNN decoder, which respectively transforms the variable-length audio sequence into a fixed-length vector and reconstructs the input sequence on the generated vector. After training the encoder-decoder, we feed the audio sequences to the encoder and then take the learnt vectors as the audio sequence representations. Compared with previous methods, the proposed method can not only deal with the problem of arbitrary-lengths of audio streams, but also learn the salient information of the sequence. Extensive evaluation on a large-size acoustic event database is performed, and the empirical results demonstrate that the learnt audio sequence representation yields a significant performance improvement by a large margin compared with other state-of-the-art hand-crafted sequence features for AEC.", "title": "" }, { "docid": "9f84ec96cdb45bcf333db9f9459a3d86", "text": "A novel printed crossed dipole with broad axial ratio (AR) bandwidth is proposed. The proposed dipole consists of two dipoles crossed through a 90°phase delay line, which produces one minimum AR point due to the sequentially rotated configuration and four parasitic loops, which generate one additional minimum AR point. By combining these two minimum AR points, the proposed dipole achieves a broadband circularly polarized (CP) performance. The proposed antenna has not only a broad 3 dB AR bandwidth of 28.6% (0.75 GHz, 2.25-3.0 GHz) with respect to the CP center frequency 2.625 GHz, but also a broad impedance bandwidth for a voltage standing wave ratio (VSWR) ≤2 of 38.2% (0.93 GHz, 1.97-2.9 GHz) centered at 2.435 GHz and a peak CP gain of 8.34 dBic. Its arrays of 1 &times; 2 and 2 &times; 2 arrangement yield 3 dB AR bandwidths of 50.7% (1.36 GHz, 2-3.36 GHz) with respect to the CP center frequency, 2.68 GHz, and 56.4% (1.53 GHz, 1.95-3.48 GHz) at the CP center frequency, 2.715 GHz, respectively. This paper deals with the designs and experimental results of the proposed crossed dipole with parasitic loop resonators and its arrays.", "title": "" }, { "docid": "97cc1bbb077bb11613299b0c829eee39", "text": "Field Programmable Gate Array (FPGA) implementations of sorting algorithms have proven to be efficient, but existing implementations lack portability and maintainability because they are written in low-level hardware description languages that require substantial domain expertise to develop and maintain. To address this problem, we develop a framework that generates sorting architectures for different requirements (speed, area, power, etc.). Our framework provides ten highly optimized basic sorting architectures, easily composes basic architectures to generate hybrid sorting architectures, enables non-hardware experts to quickly design efficient hardware sorters, and facilitates the development of customized heterogeneous FPGA/CPU sorting systems. Experimental results show that our framework generates architectures that perform at least as well as existing RTL implementations for arrays smaller than 16K elements, and are comparable to RTL implementations for sorting larger arrays. We demonstrate a prototype of an end-to-end system using our sorting architectures for large arrays (16K-130K) on a heterogeneous FPGA/CPU system.", "title": "" } ]
scidocsrr
13d78c0927444d2f6528c8d31fefb8dd
Deep Reinforcement Learning for Autonomous Driving
[ { "docid": "9984fc080b1f2fe2bf4910b9091591a7", "text": "In the modern era, the vehicles are focused to be automated to give human driver relaxed driving. In the field of automobile various aspects have been considered which makes a vehicle automated. Google, the biggest network has started working on the self-driving cars since 2010 and still developing new changes to give a whole new level to the automated vehicles. In this paper we have focused on two applications of an automated car, one in which two vehicles have same destination and one knows the route, where other don't. The following vehicle will follow the target (i.e. Front) vehicle automatically. The other application is automated driving during the heavy traffic jam, hence relaxing driver from continuously pushing brake, accelerator or clutch. The idea described in this paper has been taken from the Google car, defining the one aspect here under consideration is making the destination dynamic. This can be done by a vehicle automatically following the destination of another vehicle. Since taking intelligent decisions in the traffic is also an issue for the automated vehicle so this aspect has been also under consideration in this paper.", "title": "" }, { "docid": "8665711daa00dac270ed0830e43acdde", "text": "Deep learning-based approaches have been widely used for training controllers for autonomous vehicles due to their powerful ability to approximate nonlinear functions or policies. However, the training process usually requires large labeled data sets and takes a lot of time. In this paper, we analyze the influences of features on the performance of controllers trained using the convolutional neural networks (CNNs), which gives a guideline of feature selection to reduce computation cost. We collect a large set of data using The Open Racing Car Simulator (TORCS) and classify the image features into three categories (sky-related, roadside-related, and road-related features). We then design two experimental frameworks to investigate the importance of each single feature for training a CNN controller. The first framework uses the training data with all three features included to train a controller, which is then tested with data that has one feature removed to evaluate the feature's effects. The second framework is trained with the data that has one feature excluded, while all three features are included in the test data. Different driving scenarios are selected to test and analyze the trained controllers using the two experimental frameworks. The experiment results show that (1) the road-related features are indispensable for training the controller, (2) the roadside-related features are useful to improve the generalizability of the controller to scenarios with complicated roadside information, and (3) the sky-related features have limited contribution to train an end-to-end autonomous vehicle controller.", "title": "" }, { "docid": "be283056a8db3ab5b2481f3dc1f6526d", "text": "Numerous groups have applied a variety of deep learning techniques to computer vision problems in highway perception scenarios. In this paper, we presented a number of empirical evaluations of recent deep learning advances. Computer vision, combined with deep learning, has the potential to bring about a relatively inexpensive, robust solution to autonomous driving. To prepare deep learning for industry uptake and practical applications, neural networks will require large data sets that represent all possible driving environments and scenarios. 
We collect a large data set of highway data and apply deep learning and computer vision algorithms to problems such as car and lane detection. We show how existing convolutional neural networks (CNNs) can be used to perform lane and vehicle detection while running at frame rates required for a real-time system. Our results lend credence to the hypothesis that deep learning holds promise for autonomous driving.", "title": "" }, { "docid": "b1e4fb97e4b1d31e4064f174e50f17d3", "text": "We propose an inverse reinforcement learning (IRL) approach using Deep QNetworks to extract the rewards in problems with large state spaces. We evaluate the performance of this approach in a simulation-based autonomous driving scenario. Our results resemble the intuitive relation between the reward function and readings of distance sensors mounted at different poses on the car. We also show that, after a few learning rounds, our simulated agent generates collision-free motions and performs human-like lane change behaviour.", "title": "" }, { "docid": "03097e1239e5540fe1ec45729d1cbbc2", "text": "Policy gradient is an efficient technique for improving a policy in a reinforcement learning setting. However, vanilla online variants are on-policy only and not able to take advantage of off-policy data. In this paper we describe a new technique that combines policy gradient with off-policy Q-learning, drawing experience from a replay buffer. This is motivated by making a connection between the fixed points of the regularized policy gradient algorithm and the Q-values. This connection allows us to estimate the Q-values from the action preferences of the policy, to which we apply Q-learning updates. We refer to the new technique as ‘PGQ’, for policy gradient and Q-learning. We also establish an equivalency between action-value fitting techniques and actor-critic algorithms, showing that regularized policy gradient techniques can be interpreted as advantage function learning algorithms. We conclude with some numerical examples that demonstrate improved data efficiency and stability of PGQ. In particular, we tested PGQ on the full suite of Atari games and achieved performance exceeding that of both asynchronous advantage actor-critic (A3C) and Q-learning.", "title": "" } ]
[ { "docid": "375470d901a7d37698d34747621667ce", "text": "RNA interference (RNAi) has recently emerged as a specific and efficient method to silence gene expression in mammalian cells either by transfection of short interfering RNAs (siRNAs; ref. 1) or, more recently, by transcription of short hairpin RNAs (shRNAs) from expression vectors and retroviruses. But the resistance of important cell types to transduction by these approaches, both in vitro and in vivo, has limited the use of RNAi. Here we describe a lentiviral system for delivery of shRNAs into cycling and non-cycling mammalian cells, stem cells, zygotes and their differentiated progeny. We show that lentivirus-delivered shRNAs are capable of specific, highly stable and functional silencing of gene expression in a variety of cell types and also in transgenic mice. Our lentiviral vectors should permit rapid and efficient analysis of gene function in primary human and animal cells and tissues and generation of animals that show reduced expression of specific genes. They may also provide new approaches for gene therapy.", "title": "" }, { "docid": "a99e30d406d5053d8345b36791899238", "text": "Advances in sequencing technologies and increased access to sequencing services have led to renewed interest in sequence and genome assembly. Concurrently, new applications for sequencing have emerged, including gene expression analysis, discovery of genomic variants and metagenomics, and each of these has different needs and challenges in terms of assembly. We survey the theoretical foundations that underlie modern assembly and highlight the options and practical trade-offs that need to be considered, focusing on how individual features address the needs of specific applications. We also review key software and the interplay between experimental design and efficacy of assembly.", "title": "" }, { "docid": "356a72153f61311546f6ff874ee79bb4", "text": "In this paper, an object cosegmentation method based on shape conformability is proposed. Different from the previous object cosegmentation methods which are based on the region feature similarity of the common objects in image set, our proposed SaCoseg cosegmentation algorithm focuses on the shape consistency of the foreground objects in image set. In the proposed method, given an image set where the implied foreground objects may be varied in appearance but share similar shape structures, the implied common shape pattern in the image set can be automatically mined and regarded as the shape prior of those unsatisfactorily segmented images. The SaCoseg algorithm mainly consists of four steps: 1) the initial Grabcut segmentation; 2) the shape mapping by coherent point drift registration; 3) the common shape pattern discovery by affinity propagation clustering; and 4) the refinement by Grabcut with common shape constraint. To testify our proposed algorithm and establish a benchmark for future work, we built the CoShape data set to evaluate the shape-based cosegmentation. The experiments on CoShape data set and the comparison with some related cosegmentation algorithms demonstrate the good performance of the proposed SaCoseg algorithm.", "title": "" }, { "docid": "0616a6a220d117f00cc97526f3e493c5", "text": "To accelerate research on adversarial examples and robustness of machine learning classifiers, Google Brain organized a NIPS 2017 competition that encouraged researchers to develop new methods to generate adversarial examples as well as to develop new ways to defend against them. 
In this chapter, we describe the structure and organization of the competition and the solutions developed by several of the top-placing teams.", "title": "" }, { "docid": "7efa3543711bc1bb6e3a893ed424b75d", "text": "This dissertation is concerned with the creation of training data and the development of probability models for statistical parsing of English with Combinatory Categorial Grammar (CCG). Parsing, or syntactic analysis, is a prerequisite for semantic interpretation, and forms therefore an integral part of any system which requires natural language understanding. Since almost all naturally occurring sentences are ambiguous, it is not sufficient (and often impossible) to generate all possible syntactic analyses. Instead, the parser needs to rank competing analyses and select only the most likely ones. A statistical parser uses a probability model to perform this task. I propose a number of ways in which such probability models can be defined for CCG. The kinds of models developed in this dissertation, generative models over normal-form derivation trees, are particularly simple, and have the further property of restricting the set of syntactic analyses to those corresponding to a canonical derivation structure. This is important to guarantee that parsing can be done efficiently. In order to achieve high parsing accuracy, a large corpus of annotated data is required to estimate the parameters of the probability models. Most existing wide-coverage statistical parsers use models of phrase-structure trees estimated from the Penn Treebank, a 1-million-word corpus of manually annotated sentences from the Wall Street Journal. This dissertation presents an algorithm which translates the phrase-structure analyses of the Penn Treebank to CCG derivations. The resulting corpus, CCGbank, is used to train and test the models proposed in this dissertation. Experimental results indicate that parsing accuracy (when evaluated according to a comparable metric, the recovery of unlabelled word-word dependency relations), is as high as that of standard Penn Treebank parsers which use similar modelling techniques. Most existing wide-coverage statistical parsers use simple phrase-structure grammars whose syntactic analyses fail to capture long-range dependencies, and therefore do not correspond to directly interpretable semantic representations. By contrast, CCG is a grammar formalism in which semantic representations that include long-range dependencies can be built directly during the derivation of syntactic structure. These dependencies define the predicate-argument structure of a sentence, and are used for two purposes in this dissertation: First, the performance of the parser can be evaluated according to how well it recovers these dependencies. In contrast to purely syntactic evaluations, this yields a direct measure of how accurate the semantic interpretations returned by the parser are.
Second, I propose a generative model that captures the local and non-local dependencies in the predicate-argument structure, and investigate the impact of modelling non-local in addition to local dependencies.", "title": "" }, { "docid": "0d59ab6748a16bf4deedfc8bd79e4d71", "text": "Paget's disease (PD) is a chronic progressive disease of the bone characterized by abnormal bone metabolism affecting either a single bone (monostotic) or many bones (polyostotic) with uncertain etiology. We report a case of PD in a 70-year-old male, which was initially identified as osteonecrosis of the maxilla. Non-drug induced osteonecrosis in PD is rare and very few cases have been reported in the literature.", "title": "" }, { "docid": "4a5abe07b93938e7549df068967731fc", "text": "A novel compact dual-polarized unidirectional wideband antenna based on two crossed magneto-electric dipoles is proposed. The proposed miniaturization method consist in transforming the electrical filled square dipoles into vertical folded square loops. The surface of the radiating element is reduced to 0.23λ0∗0.23λ0, where λ0 is the wavelength at the lowest operation frequency for a standing wave ratio (SWR) <2.5, which corresponds to a reduction factor of 48%. The antenna has been prototyped using 3D printing technology. The measured input impedance bandwidth is 51.2% from 1.7 GHz to 2.9 GHz with a Standing wave ratio (SWR) <2.", "title": "" }, { "docid": "749dd1398938c5517858384c616ecaff", "text": "An asymmetric wideband dual-polarized bilateral tapered slot antenna (DBTSA) is proposed in this letter for wireless EMC measurements. The DBTSA is formed by two bilateral tapered slot antennas designed with low cross polarization. With careful design, the achieved DBTSA not only has a wide operating frequency band, but also maintains a single main-beam from 700 MHz to 20 GHz. This is a significant improvement compared to the conventional dual-polarized tapered slot antennas, which suffer from main-beam split in the high-frequency band. The innovative asymmetric configuration of the proposed DBTSA significantly reduces the field coupling between the two antenna elements, so that low cross polarization and high port isolation are obtained across the entire frequency range. All these intriguing characteristics make the proposed DBTSA a good candidate for a dual-polarized sensor antenna for wireless EMC measurements.", "title": "" }, { "docid": "e63eac157bd750ca39370fd5b9fdf85e", "text": "Allometric scaling relations, including the 3/4 power law for metabolic rates, are characteristic of all organisms and are here derived from a general model that describes how essential materials are transported through space-filling fractal networks of branching tubes. The model assumes that the energy dissipated is minimized and that the terminal tubes do not vary with body size. It provides a complete analysis of scaling relations for mammalian circulatory systems that are in agreement with data. More generally, the model predicts structural and functional properties of vertebrate cardiovascular and respiratory systems, plant vascular systems, insect tracheal tubes, and other distribution networks.", "title": "" }, { "docid": "9d068f6b812272750fe8a56562d703a2", "text": "Sustainable development, although a widely used phrase and idea, has many different meanings and therefore provokes many different responses. 
In broad terms, the concept of sustainable development is an attempt to combine growing concerns about a range of environmental issues with socio-economic issues. To aid understanding of these different policies this paper presents a classification and mapping of different trends of thought on sustainable development, their political and policy frameworks and their attitudes towards change and means of change. Sustainable development has the potential to address fundamental challenges for humanity, now and into the future. However, to do this, it needs more clarity of meaning, concentrating on sustainable livelihoods and well-being rather than well-having, and long term environmental sustainability, which requires a strong basis in principles that link the social and environmental to human equity. Sustainable Development: A Challenging and Contested Concept The widespread rise of interest in, and support for, the concept of sustainable development is potentially an important shift in understanding relationships of humanity with nature and between people. It is in contrast to the dominant outlook of the last couple of hundred years, especially in the ‘North’, that has been based on the view of the separation of the environment from socio-economic issues. For most of the last couple of hundred years the environment has been largely seen as external to humanity, mostly to be used and exploited, with a few special areas preserved as wilderness or parks. Environmental problems were viewed mainly as local. On the whole the relationship between people and the environment was conceived as humanity’s triumph over nature. This Promethean view (Dryzek, 1997) was that human knowledge and technology could overcome all obstacles including natural and environmental ones. This view was linked with the development of capitalism, the industrial revolution and modern science. As Bacon, one of the founders of modern science, put it, ‘The world is made for man, not man for the world’. Environmental management and concern amongst most businesses and governments, apart from local problems and wilderness conservation, was at best based on natural resource management. A key example was the ideas of Pinchot in the USA (Dryzek, 1997), which recognized that humans do need natural resources and that these resources should be managed, rather than rapidly exploited, in order to ensure maximum long-term use. Economics came to be the dominating issue of human relations with economic growth, defined by increasing production, as the main priority (Douthwaite, 1992). This was seen as the key to humanity’s well-being and, through growth, poverty would be overcome: as everyone floated higher those at the bottom would be raised out of poverty. The concept of sustainable development is the result of the growing awareness of the global links between mounting environmental problems, socio-economic issues to do with poverty and inequality and concerns about a healthy future for humanity.
It strongly links environmental and socio-economic issues. The first important use of the term was in 1980 in the World Conservation Strategy (IUCN et al., 1980). This process of bringing together environmental and socio-economic questions was most famously expressed in the Brundtland Report’s definition of sustainable development as meeting ‘the needs of the present without compromising the ability of future generations to meet their needs’ (WCED, 1987, p. 43). This defines needs from a human standpoint; as Lee (2000, p. 32) has argued, ‘sustainable development is an unashamedly anthropocentric concept’. Brundtland’s definition and the ideas expressed in the report Our Common Future recognize the dependency of humans on the environment to meet needs and well-being in a much wider sense than merely exploiting resources: ‘ecology and economy are becoming ever more interwoven – locally, regionally, nationally and globally’ (WCED, 1987, p. 5). Rather than domination over nature our lives, activities and society are nested within the environment (Giddings et al., 2002). The report stresses that humanity, whether in an industrialized or a rural subsistence society, depends for security and basic existence on the environment; the economy and our well-being now and in the future need the environment. It also points to the planetwide interconnections: environmental problems are not local but global, so that actions and impacts have to be considered internationally to avoid displacing problems from one area to another by actions such as releasing pollution that crosses boundaries, moving polluting industries to another location or using up more than an equitable share of the earth’s resources (by an ecological footprint (Wackernagel and Rees, 1996) far in excess of the area inhabited). Environmental problems threaten people’s health, livelihoods and lives and can cause wars and threaten future generations. Sustainable development raises questions about the post-war claim, that still dominates much mainstream economic policy, that international prosperity and human well-being can be achieved through increased global trade and industry (Reid, 1995; Moffat, 1996; Sachs, 1999). It recognizes that past growth models have failed to eradicate poverty globally or within countries, ‘no trends, . . . no programmes or policies offer any real hope of narrowing the growing gap between rich and poor nations’ (WCED, 1987, p. xi). This pattern of growth has also damaged the environment upon which we depend, with a ‘downward spiral of poverty and environmental degradation’ (WCED, 1987, p. xii). Brundtland, recognizing this failure, calls for a different form of growth, ‘changing the quality of growth, meeting essential needs, merging environment and economics in decision making’ (WCED, 1987, p. 49), with an emphasis on human development, participation in decisions and equity in benefits. The development proposed is a means to eradicate poverty, meet human needs and ensure that all get a fair share of resources – very different from present development. Social justice today and in the future is a crucial component of the concept of sustainable development. There were, and are, long standing debates about both goals and means within theories dealing with both environmental and socio-economic questions which have inevitably flowed into ideas on sustain-
As Wackernagel and Rees (1996) have argued, the Brundtland Report attempted to bridge some of these debates by leaving a certain ambiguity, talking at the same time of the priorities of meeting the needs of the poor, protecting the environment and more rapid economic growth. The looseness of the concept and its theoretical underpinnings have enabled the use of the phrases ‘sustainable development’ and ‘sustainability’ to become de rigueur for politicians and business leaders, but as the Workshop on Urban Sustainability of the US National Science Foundation (2000, p. 1) pointed out, sustainability is ‘laden with so many definitions that it risks plunging into meaninglessness, at best, and becoming a catchphrase for demagogy, at worst. [It] is used to justify and legitimate a myriad of policies and practices ranging from communal agrarian utopianism to large-scale capital-intensive market development’. While many claim that sustainable development challenges the increased integration of the world in a capitalist economy dominated by multinationals (Middleton et al., 1993; Christie and Warburton, 2001), Brundtland’s ambiguity allows business and governments to be in favour of sustainability without any fundamental challenge to their present course, using Brundtland’s support for rapid growth to justify the phrase ‘sustainable growth’. Rees (1998) points out that this allows capitalism to continue to put forward economic growth as its ‘morally bankrupt solution’ to poverty. If the economy grows, eventually all will benefit (Dollar and Kraay, 2000): in modern parlance the trickle-down theory. Daly (1993) criticized the notion of ‘sustainable growth’ as ‘thought-stopping’ and oxymoronic in a world in which ecosystems are finite. At some point, economic growth with ever more use of resources and production of waste is unsustainable. Instead Daly argued for the term ‘sustainable development’ by which he, much more clearly than Brundtland, meant qualitative, rather than quantitative, improvements. Development is open to confusion, with some seeing it as an end in itself, so it has been suggested that greater clarity would be to speak of ‘sustainable livelihoods’, which is the aim that Brundtland outlined (Workshop on Urban Sustainability, 2000). Another area of debate is between the views of weak and strong sustainability (Haughton and Hunter, 1994). Weak sustainability sees natural and manufactured capital as interchangeable with technology able to fill human produced gaps in the natural world (Daly and Cobb, 1989) such as a lack of resources or damage to the environment. Solow put the case most strongly, stating that by substituting other factors for natural resources ‘the world can, in effect, get along without natural resources, so exhaustion is just an event, not a catastrophe’ (1974, p. 11). Strong ", "title": "" }, { "docid": "ddf4e9582bc1b86ca8cb9967c4247e8e", "text": "In the past few years, Iranian universities have embarked to use e-learning tools and technologies to extend and improve their educational services. After a few years of conducting e-learning programs a debate took place within the executives and managers of the e-learning institutes concerning which activities are of the most influence on the learning progress of online students. This research is aimed to investigate the impact of a number of e-learning activities on the students’ learning development. 
The results show that participation in virtual classroom sessions has the most substantial impact on the students’ final grades. This paper presents the process of applying data mining methods to the web usage records of students’ activities in a virtual learning environment. The main idea is to rank the learning activities based on their importance in order to improve students’ performance by focusing on the most important ones.", "title": "" }, { "docid": "3a68175de0dbc4c89b66678976898d1f", "text": "The rapid accumulation of data in social media (in million and billion scales) has imposed great challenges in information extraction, knowledge discovery, and data mining, and texts bearing sentiment and opinions are one of the major categories of user generated data in social media. Sentiment analysis is the main technology to quickly capture what people think from these text data, and is a research direction with immediate practical value in big data era. Learning such techniques will allow data miners to perform advanced mining tasks considering real sentiment and opinions expressed by users in additional to the statistics calculated from the physical actions (such as viewing or purchasing records) user perform, which facilitates the development of real-world applications. However, the situation that most tools are limited to the English language might stop academic or industrial people from doing research or products which cover a wider scope of data, retrieving information from people who speak different languages, or developing applications for worldwide users. More specifically, sentiment analysis determines the polarities and strength of the sentiment-bearing expressions, and it has been an important and attractive research area. In the past decade, resources and tools have been developed for sentiment analysis in order to provide subsequent vital applications, such as product reviews, reputation management, call center robots, automatic public survey, etc. However, most of these resources are for the English language. Being the key to the understanding of business and government issues, sentiment analysis resources and tools are required for other major languages, e.g., Chinese. In this tutorial, audience can learn the skills for retrieving sentiment from texts in another major language, Chinese, to overcome this obstacle. The goal of this tutorial is to introduce the proposed sentiment analysis technologies and datasets in the literature, and give the audience the opportunities to use resources and tools to process Chinese texts from the very basic preprocessing, i.e., word segmentation and part of speech tagging, to sentiment analysis, i.e., applying sentiment dictionaries and obtaining sentiment scores, through step-by-step instructions and a hand-on practice. The basic processing tools are from CKIP Participants can download these resources, use them and solve the problems they encounter in this tutorial. This tutorial will begin from some background knowledge of sentiment analysis, such as how sentiment are categorized, where to find available corpora and which models are commonly applied, especially for the Chinese language. Then a set of basic Chinese text processing tools for word segmentation, tagging and parsing will be introduced for the preparation of mining sentiment and opinions. 
After bringing the idea of how to pre-process the Chinese language to the audience, I will describe our work on compositional Chinese sentiment analysis from words to sentences, and an application on social media text (Facebook) as an example. All our involved and recently developed related resources, including Chinese Morphological Dataset, Augmented NTU Sentiment Dictionary (ANTUSD), E-hownet with sentiment information, Chinese Opinion Treebank, and the CopeOpi Sentiment Scorer, will also be introduced and distributed in this tutorial. The tutorial will end by a hands-on session of how to use these materials and tools to process Chinese sentiment.", "title": "" }, { "docid": "2eff0a817a48a2fd62e6f834d0389105", "text": "In this paper, we demonstrate that image reconstruction can be expressed in terms of neural networks. We show that filtered backprojection can be mapped identically onto a deep neural network architecture. As for the case of iterative reconstruction, the straight forward realization as matrix multiplication is not feasible. Thus, we propose to compute the back-projection layer efficiently as fixed function and its gradient as projection operation. This allows a data-driven approach for joint optimization of correction steps in projection domain and image domain. As a proof of concept, we demonstrate that we are able to learn weightings and additional filter layers that consistently reduce the reconstruction error of a limited angle reconstruction by a factor of two while keeping the same computational complexity as filtered back-projection. We believe that this kind of learning approach can be extended to any common CT artifact compensation heuristic and will outperform hand-crafted artifact correction methods in the future.", "title": "" }, { "docid": "4ea0ee8c40e2cc8ac5238eb6a3579414", "text": "This paper suggests a method for Subject–Action–Object (SAO) network analysis of patents for technology trends identification by using the concept of function. The proposed method solves the shortcoming of the keyword-based approach to identification of technology trends, i.e., that it cannot represent how technologies are used or for what purpose. The concept of function provides information on how a technology is used and how it interacts with other technologies; the keyword-based approach does not provide such information. The proposed method uses an SAO model and represents “key concept” instead of “key word”. We present a procedure that formulates an SAO network by using SAO models extracted from patent documents, and a method that applies actor network theory to analyze technology implications of the SAO network. To demonstrate the effectiveness of the SAO network this paper presents a case study of patents related to Polymer Electrolyte Membrane technology in Proton Exchange Membrane Fuel Cells.", "title": "" }, { "docid": "dac17254c16068a4dcf49e114bfcc822", "text": "We present a novel coded exposure video technique for multi-image motion deblurring. The key idea of this paper is to capture video frames with a set of complementary fluttering patterns, which enables us to preserve all spectrum bands of a latent image and recover a sharp latent image. To achieve this, we introduce an algorithm for generating a complementary set of binary sequences based on the modern communication theory and implement the coded exposure video system with an off-the-shelf machine vision camera. 
To demonstrate the effectiveness of our method, we provide in-depth analyses of the theoretical bounds and the spectral gains of our method and other state-of-the-art computational imaging approaches. We further show deblurring results on various challenging examples with quantitative and qualitative comparisons to other computational image capturing methods used for image deblurring, and show how our method can be applied for protecting privacy in videos.", "title": "" }, { "docid": "5f1c6bb714a9daeeec807117284e92f0", "text": "One potential method to estimate noninvasive cuffless blood pressure (BP) is pulse wave velocity (PWV), which can be calculated by using the distance and the transit time of the blood between two arterial sites. To obtain the pulse waveform, bioimpedance (BI) measurement is a promising approach because it continuously reflects the change in BP through the change in the arterial cross-sectional area. Many studies have investigated BI channels in a vertical direction with electrodes located along the wrist and the finger to calculate PWV and convert to BP; however, the measurement systems were relatively large in size. In order to reduce the total device size for use in a PWV-based BP smartwatch, this study proposed and examined a robust horizontal BI structure. The BI device was also designed to apply in a very small body area. The proposed structure was based on two sets of four electrodes attached around the wrist. Our model was evaluated on 15 human subjects; the PWV values were obtained with various distances between two BI channels to assess the efficacy. The results showed that the designed BI system can monitor pulse rate efficiently in only a 0.5 × 1.75 cm² area of the body. The correlation of pulse rate from the proposed design against the reference was 0.98 ± 0.07 (p < 0.001). Our structure yielded higher detection ratios for PWV measurements of 99.0 ± 2.2%, 99.0 ± 2.1%, and 94.8 ± 3.7% at 1, 2, and 3 cm between two BI channels, respectively. The measured PWVs correlated well with the BP standard device at 0.81 ± 0.08 and 0.84 ± 0.07 with low root-mean-squared-errors at 7.47 ± 2.15 mmHg and 5.17 ± 1.81 mmHg for SBP and DBP, respectively. The result demonstrates the potential of a new wearable BP smartwatch structure.", "title": "" }, { "docid": "8e0ac2ad99b819f0c1c36cfa4f20b0ef", "text": "As a new distributed computing model, crowdsourcing lets people leverage the crowd's intelligence and wisdom toward solving problems. This article proposes a framework for characterizing various dimensions of quality control in crowdsourcing systems, a critical issue. The authors briefly review existing quality-control approaches, identify open issues, and look to future research directions. In the Web extra, the authors discuss both design-time and runtime approaches in more detail.", "title": "" }, { "docid": "a0172830d69b0a386aa291235e5837a0", "text": "There is an increasing need to bring machine learning to a wide diversity of hardware devices. Current frameworks rely on vendor-specific operator libraries and optimize for a narrow range of server-class GPUs. Deploying workloads to new platforms – such as mobile phones, embedded devices, and accelerators (e.g., FPGAs, ASICs) – requires significant manual effort. We propose TVM, a compiler that exposes graph-level and operator-level optimizations to provide performance portability to deep learning workloads across diverse hardware back-ends. 
TVM solves optimization challenges specific to deep learning, such as high-level operator fusion, mapping to arbitrary hardware primitives, and memory latency hiding. It also automates optimization of low-level programs to hardware characteristics by employing a novel, learning-based cost modeling method for rapid exploration of code optimizations. Experimental results show that TVM delivers performance across hardware back-ends that are competitive with state-of-the-art, hand-tuned libraries for low-power CPU, mobile GPU, and server-class GPUs. We also demonstrate TVM’s ability to target new accelerator back-ends, such as the FPGA-based generic deep learning accelerator. The system is open sourced and in production use inside several major companies.", "title": "" }, { "docid": "9f40a57159a06ecd9d658b4d07a326b5", "text": "The aim of the present study was to investigate a cytotoxic oxidative cell stress related and the antioxidant profile of kaempferol, quercetin, and isoquercitrin. The flavonol compounds were able to act as scavengers of superoxide anion (but not hydrogen peroxide), hypochlorous acid, chloramine and nitric oxide. Although flavonoids are widely described as antioxidants and this activity is generally related to beneficial effects on human health, here we show important cytotoxic actions of three well known flavonoids. They were able to promote hemolysis which one was exacerbated on the presence of hypochlorous acid but not by AAPH radical.", "title": "" }, { "docid": "ac1edbb7cef99be7127cb505faf7a082", "text": "http://dujs.dartmouth.edu/2011/02/you-are-what-you-eat-how-food-affects-your-mood/ For thousands of years, people have believed that food could influence their health and wellbeing. Hippocrates, the father of modern medicine, once said: "Let your food be your medicine, and your medicine be your food" (1). In medieval times, people started to take great interest in how certain foods affected their mood and temperament. Many medical culinary textbooks of the time described the relationship between food and mood. For example, quince, dates and elderberries were used as mood enhancers, lettuce and chicory as tranquilizers, and apples, pomegranates, beef and eggs as erotic stimulants (1). The past 80 years have seen immense progress in research, primarily short-term human trials and animal studies, showing how certain foods change brain structure, chemistry, and physiology thus affecting mood and performance. These studies suggest that foods directly influencing brain neurotransmitter systems have the greatest effects on mood, at least temporarily. In turn, mood can also influence our food choices and expectations on the effects of certain foods can influence our perception.", "title": "" } ]
scidocsrr
676c2fb0b8eea08a77d812ffa3ef15b9
Impact of Data Normalization on Stock Index Forecasting
[ { "docid": "5fb09fd2436069e01ad2d9292769069c", "text": "In this study, we propose a novel nonlinear ensemble forecasting model integrating generalized linear autoregression (GLAR) with artificial neural networks (ANN) in order to obtain accurate prediction results and ameliorate forecasting performances. We compare the new model’s performance with the two individual forecasting models—GLAR and ANN—as well as with the hybrid model and the linear combination models. Empirical results obtained reveal that the prediction using the nonlinear ensemble model is generally better than those obtained using the other models presented in this study in terms of the same evaluation measurements. Our findings reveal that the nonlinear ensemble model proposed here can be used as an alternative forecasting tool for exchange rates to achieve greater forecasting accuracy and improve prediction quality further. 2004 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "5546f93f4c10681edb0fdfe3bf52809c", "text": "The current applications of neural networks to in vivo medical imaging and signal processing are reviewed. As is evident from the literature neural networks have already been used for a wide variety of tasks within medicine. As this trend is expected to continue this review contains a description of recent studies to provide an appreciation of the problems associated with implementing neural networks for medical imaging and signal processing.", "title": "" } ]
[ { "docid": "d4724f6b007c914120508b2e694a31d9", "text": "Finding semantically related words is a first step in the dire ct on of automatic ontology building. Guided by the view that similar words occur in simi lar contexts, we looked at the syntactic context of words to measure their semantic sim ilarity. Words that occur in a direct object relation with the verb drink, for instance, have something in common ( liquidity, ...). Co-occurrence data for common nouns and proper names , for several syntactic relations, was collected from an automatically parsed corp us of 78 million words of newspaper text. We used several vector-based methods to compute the distributional similarity between words. Using Dutch EuroWordNet as evaluation stand ard, we investigated which vector-based method and which combination of syntactic rel ations is the strongest predictor of semantic similarity.", "title": "" }, { "docid": "c96fa07ef9860880d391a750826f5faf", "text": "This paper presents the investigations of short-circuit current, electromagnetic force, and transient dynamic response of windings deformation including mechanical stress, strain, and displacements for an oil-immersed-type 220-kV power transformer. The worst-case fault with three-phase short-circuit happening simultaneously is assumed. A considerable leakage magnetic field excited by short-circuit current can produce the dynamical electromagnetic force to act on copper disks in each winding. The two-dimensional finite element method (FEM) is employed to obtain the electromagnetic force and its dynamical characteristics in axial and radial directions. In addition, to calculate the windings deformation accurately, we measured the nonlinear elasticity characteristic of spacer and built three-dimensional FE kinetic model to analyze the axial dynamic deformation. The results of dynamic mechanical stress and strain induced by combining of short-circuit force and prestress are useful for transformer design and fault diagnosis.", "title": "" }, { "docid": "738a69ad1006c94a257a25c1210f6542", "text": "Encrypted data search allows cloud to offer fundamental information retrieval service to its users in a privacy-preserving way. In most existing schemes, search result is returned by a semi-trusted server and usually considered authentic. However, in practice, the server may malfunction or even be malicious itself. Therefore, users need a result verification mechanism to detect the potential misbehavior in this computation outsourcing model and rebuild their confidence in the whole search process. On the other hand, cloud typically hosts large outsourced data of users in its storage. The verification cost should be efficient enough for practical use, i.e., it only depends on the corresponding search operation, regardless of the file collection size. In this paper, we are among the first to investigate the efficient search result verification problem and propose an encrypted data search scheme that enables users to conduct secure conjunctive keyword search, update the outsourced file collection and verify the authenticity of the search result efficiently. The proposed verification mechanism is efficient and flexible, which can be either delegated to a public trusted authority (TA) or be executed privately by data users. We formally prove the universally composable (UC) security of our scheme. 
Experimental results show its practical efficiency even with a large dataset.", "title": "" }, { "docid": "d0e2f8c9c7243f5a67e73faeb78038d1", "text": "This paper presents a novel learning framework for training boosting cascade based object detector from large scale dataset. The framework is derived from the well-known Viola-Jones (VJ) framework but distinguished by three key differences. First, the proposed framework adopts multi-dimensional SURF features instead of single dimensional Haar features to describe local patches. In this way, the number of used local patches can be reduced from hundreds of thousands to several hundreds. Second, it adopts logistic regression as weak classifier for each local patch instead of decision trees in the VJ framework. Third, we adopt AUC as a single criterion for the convergence test during cascade training rather than the two trade-off criteria (false-positive-rate and hit-rate) in the VJ framework. The benefit is that the false-positive-rate can be adaptive among different cascade stages, and thus yields much faster convergence speed of SURF cascade. Combining these points together, the proposed approach has three good properties. First, the boosting cascade can be trained very efficiently. Experiments show that the proposed approach can train object detectors from billions of negative samples within one hour even on personal computers. Second, the built detector is comparable to the state-of-the-art algorithm not only on the accuracy but also on the processing speed. Third, the built detector is small in model-size due to short cascade stages.", "title": "" }, { "docid": "d12d475dc72f695d3aecfb016229da19", "text": "Following the increasing popularity of the mobile ecosystem, cybercriminals have increasingly targeted mobile ecosystems, designing and distributing malicious apps that steal information or cause harm to the device's owner. Aiming to counter them, detection techniques based on either static or dynamic analysis that model Android malware, have been proposed. While the pros and cons of these analysis techniques are known, they are usually compared in the context of their limitations e.g., static analysis is not able to capture runtime behaviors, full code coverage is usually not achieved during dynamic analysis, etc. Whereas, in this paper, we analyze the performance of static and dynamic analysis methods in the detection of Android malware and attempt to compare them in terms of their detection performance, using the same modeling approach. To this end, we build on MAMADROID, a state-of-the-art detection system that relies on static analysis to create a behavioral model from the sequences of abstracted API calls. Then, aiming to apply the same technique in a dynamic analysis setting, we modify CHIMP, a platform recently proposed to crowdsource human inputs for app testing, in order to extract API calls' sequences from the traces produced while executing the app on a CHIMP virtual device. We call this system AUNTIEDROID and instantiate it by using both automated (Monkey) and user-generated inputs. We find that combining both static and dynamic analysis yields the best performance, with F-measure reaching 0.92.
We also show that static analysis is at least as effective as dynamic analysis, depending on how apps are stimulated during execution, and investigate the reasons for inconsistent misclassifications across methods.", "title": "" }, { "docid": "3f6fcee0073e7aaf587602d6510ed913", "text": "BACKGROUND\nTreatment of early onset scoliosis (EOS) is challenging. In many cases, bracing will not be effective and growing rod surgery may be inappropriate. Serial, Risser casts may be an effective intermediate method of treatment.\n\n\nMETHODS\nWe studied 20 consecutive patients with EOS who received serial Risser casts under general anesthesia between 1999 and 2011. Analyses included diagnosis, sex, age at initial cast application, major curve severity, initial curve correction, curve magnitude at the time of treatment change or latest follow-up for those still in casts, number of casts per patient, the type of subsequent treatment, and any complications.\n\n\nRESULTS\nThere were 8 patients with idiopathic scoliosis, 6 patients with neuromuscular scoliosis, 5 patients with syndromic scoliosis, and 1 patient with skeletal dysplasia. Fifteen patients were female and 5 were male. The mean age at first cast was 3.8±2.3 years (range, 1 to 8 y), and the mean major curve magnitude was 74±18 degrees (range, 40 to 118 degrees). After initial cast application, the major curve measured 46±14 degrees (range, 25 to 79 degrees). At treatment change or latest follow-up for those still in casts, the major curve measured 53±24 degrees (range, 13 to 112 degrees). The mean time in casts was 16.9±9.1 months (range, 4 to 35 mo). The mean number of casts per patient was 4.7±2.2 casts (range, 1 to 9 casts). At the time of this study, 7 patients had undergone growing rod surgery, 6 patients were still undergoing casting, 5 returned to bracing, and 2 have been lost to follow-up. Four patients had minor complications: 2 patients each with superficial skin irritation and cast intolerance.\n\n\nCONCLUSIONS\nSerial Risser casting is a safe and effective intermediate treatment for EOS. It can stabilize relatively large curves in young children and allows the child to reach a more suitable age for other forms of treatment, such as growing rods.\n\n\nLEVEL OF EVIDENCE\nLevel IV; case series.", "title": "" }, { "docid": "e1fd762bc710863f2df3fd6c41cf468b", "text": "This paper analyzes the performances of Spearman’s rho (SR) and Kendall’s tau (KT) with respect to samples drawn from bivariate normal and contaminated normal populations. Theoretical and simulation results suggest that, contrary to the opinion of equivalence between SR and KT in some literature, the behaviors of SR and KT are strikingly different in the aspects of bias effect, variance, mean square error (MSE), and asymptotic relative efficiency (ARE). The new findings revealed in this work provide not only deeper insights into the two most widely used rank-based correlation coefficients, but also a guidance for choosing which one to use under the circumstances where Pearson’s product moment correlation coefficient (PPMCC) fails to apply. & 2012 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "13e2b22875e1a23e9e8ea2f80671c74e", "text": "This paper presents a novel yet intuitive approach to unsupervised feature learning. Inspired by the human visual system, we explore whether low-level motion-based grouping cues can be used to learn an effective visual representation. 
Specifically, we use unsupervised motion-based segmentation on videos to obtain segments, which we use as pseudo ground truth to train a convolutional network to segment objects from a single frame. Given the extensive evidence that motion plays a key role in the development of the human visual system, we hope that this straightforward approach to unsupervised learning will be more effective than cleverly designed pretext tasks studied in the literature. Indeed, our extensive experiments show that this is the case. When used for transfer learning on object detection, our representation significantly outperforms previous unsupervised approaches across multiple settings, especially when training data for the target task is scarce.", "title": "" }, { "docid": "cf17aefc8e4cb91c6fdb7c621651d41e", "text": "Quantitative 13C NMR spectroscopy has been used to study the chemical structure of industrial kraft lignin, obtained from softwood pulping, and its nitrosated derivatives, which demonstrate high inhibition activity in the polymerization of unsaturated hydrocarbons.", "title": "" }, { "docid": "e34a61754ff8cfac053af5cbedadd9e0", "text": "An ongoing, annual survey of publications in systems and software engineering identifies the top 15 scholars and institutions in the field over a 5-year period. Each ranking is based on the weighted scores of the number of papers published in TSE, TOSEM, JSS, SPE, EMSE, IST, and Software of the corresponding period. This report summarizes the results for 2003–2007 and 2004–2008. The top-ranked institution is Korea Advanced Institute of Science and Technology, Korea for 2003–2007, and Simula Research Laboratory, Norway for 2004–2008, while Magne Jørgensen is the top-ranked scholar for both periods.", "title": "" }, { "docid": "4421a42fc5589a9b91215b68e1575a3f", "text": "We present a method for extracting depth information from a rectified image pair. Our approach focuses on the first stage of many stereo algorithms: the matching cost computation. We approach the problem by learning a similarity measure on small image patches using a convolutional neural network. Training is carried out in a supervised manner by constructing a binary classification data set with examples of similar and dissimilar pairs of patches. We examine two network architectures for this task: one tuned for speed, the other for accuracy. The output of the convolutional neural network is used to initialize the stereo matching cost. A series of post-processing steps follow: cross-based cost aggregation, semiglobal matching, a left-right consistency check, subpixel enhancement, a median filter, and a bilateral filter. We evaluate our method on the KITTI 2012, KITTI 2015, and Middlebury stereo data sets and show that it outperforms other approaches on all three data sets.", "title": "" }, { "docid": "e5638848a3844d7edf7dae7115233771", "text": "Interest in gamification is growing steadily. But as the underlying mechanisms of gamification are not well understood yet, a closer examination of a gamified activity's meaning and individual game design elements may provide more insights. We examine the effects of points -- a basic element of gamification, -- and meaningful framing -- acknowledging participants' contribution to a scientific cause, -- on intrinsic motivation and performance in an online image annotation task. 
Based on these findings, we discuss implications and opportunities for future research on gamification.", "title": "" }, { "docid": "2c329f3d77abe2d73bbddee34268c12f", "text": "Various procedures of mixing starting powders for hot-pressing α-SiAlON ceramics were studied. They included different milling methods (attrition milling, ball milling, and sonication), liquid medium (water, isopropyl alcohol, and pyridine), and atmospheres (ambient air and nitrogen). These mixing procedures resulted in markedly different densification behavior and fired ceramics. As the powders experienced increasing oxidation because of mixing, the densification temperature decreased, the amount of residual glass increased, and α-SiAlON was destabilized and replaced by β-SiAlON and AlN polytypes during hot pressing. These effects were mitigated when pyridine, nitrogen, and sonication were used. Several protocols that yielded nearly phase-pure, glass-free dense α-SiAlON were thus identified.", "title": "" }, { "docid": "a00cc13a716439c75a5b785407b02812", "text": "A novel current feedback programming principle and circuit architecture are presented, compatible with LED displays utilizing the 2T1C pixel structure. The new pixel programming approach is compatible with all TFT backplane technologies and can compensate for non-uniformities in both threshold voltage and carrier mobility of the OLED pixel drive TFT, due to a feedback loop that modulates the gate of the driving transistor according to the OLED current. The circuit can be internal or external to the integrated display data driver. Based on simulations and data gathered through a fabricated prototype driver, a pixel drive current of 20 nA can be programmed within an addressing time ranging from 10 μs to 50 μs.", "title": "" }, { "docid": "4b1948d0b09047baf27b95f5b416c8e7", "text": "Recently, several pattern recognition methods have been proposed to automatically discriminate between patients with and without Alzheimer's disease using different imaging modalities: sMRI, fMRI, PET and SPECT. Classical approaches in visual information retrieval have been successfully used for analysis of structural MRI brain images.
In this paper, we use the visual indexing framework and pattern recognition analysis based on structural MRI data to discriminate three classes of subjects: normal controls (NC), mild cognitive impairment (MCI) and Alzheimer's disease (AD). The approach uses the circular harmonic functions (CHFs) to extract local features from the most involved areas in the disease: hippocampus and posterior cingulate cortex (PCC) in each slice in all three brain projections. The features are quantized using the Bag-of-Visual-Words approach to build one signature by brain (subject). This yields a transformation of a full 3D image of brain ROIs into a 1D signature, a histogram of quantized features. To reduce the dimensionality of the signature, we use the PCA technique. Support vector machines classifiers are then applied to classify groups. The experiments were conducted on a subset of ADNI dataset and applied to the \"Bordeaux-3City\" dataset. The results showed that our approach achieves respectively for ADNI dataset and \"Bordeaux-3City\" dataset; for AD vs NC classification, an accuracy of 83.77% and 78%, a specificity of 88.2% and 80.4% and a sensitivity of 79.09% and 74.7%. For NC vs MCI classification we achieved for the ADNI datasets an accuracy of 69.45%, a specificity of 74.8% and a sensitivity of 62.52%. For the most challenging classification task (AD vs MCI), we reached an accuracy of 62.07%, a specificity of 75.15% and a sensitivity of 49.02%. The use of PCC visual features description improves classification results by more than 5% compared to the use of hippocampus features only. Our approach is automatic, less time-consuming and does not require the intervention of the clinician during the disease diagnosis.", "title": "" }, { "docid": "5e6f9014a07e7b2bdfd255410a73b25f", "text": "Context: Offshore software development outsourcing is a modern business strategy for developing high quality software at low cost. Objective: The objective of this research paper is to identify and analyse factors that are important in terms of the competitiveness of vendor organisations in attracting outsourcing projects. Method: We performed a systematic literature review (SLR) by applying our customised search strings which were derived from our research questions. We performed all the SLR steps, such as the protocol development, initial selection, final selection, quality assessment, data extraction and data synthesis. Results: We have identified factors such as cost-saving, skilled human resource, appropriate infrastructure, quality of product and services, efficient outsourcing relationships management, and an organisation’s track record of successful projects which are generally considered important by the outsourcing clients. Our results indicate that appropriate infrastructure, cost-saving, and skilled human resource are common in three continents, namely Asia, North America and Europe. We identified appropriate infrastructure, cost-saving, and quality of products and services as being common in three types of organisations (small, medium and large). We have also identified four factors-appropriate infrastructure, cost-saving, quality of products and services, and skilled human resource as being common in the two decades (1990–1999 and 2000–mid 2008). Conclusions: Cost-saving should not be considered as the driving factor in the selection process of software development outsourcing vendors. 
Vendors should rather address other factors in order to compete in the OSDO business, such as skilled human resource and quality of products and services.", "title": "" }, { "docid": "c576c08aa746ea30a528e104932047a6", "text": "Despite tremendous progress achieved in temporal action localization, state-of-the-art methods still struggle to train accurate models when annotated data is scarce. In this paper, we introduce a novel active learning framework for temporal localization that aims to mitigate this data dependency issue. We equip our framework with active selection functions that can reuse knowledge from previously annotated datasets. We study the performance of two state-of-the-art active selection functions as well as two widely used active learning baselines. To validate the effectiveness of each one of these selection functions, we conduct simulated experiments on ActivityNet. We find that using previously acquired knowledge as a bootstrapping source is crucial for active learners aiming to localize actions. When equipped with the right selection function, our proposed framework exhibits significantly better performance than standard active learning strategies, such as uncertainty sampling. Finally, we employ our framework to augment the newly compiled Kinetics action dataset with ground-truth temporal annotations. As a result, we collect Kinetics-Localization, a novel large-scale dataset for temporal action localization, which contains more than 15K YouTube videos.", "title": "" }, { "docid": "0bf292fdbc04805b4bd671d6f5099cf7", "text": "We consider the stochastic optimization of finite sums over a Riemannian manifold where the functions are smooth and convex. We present MASAGA, an extension of the stochastic average gradient variant SAGA on Riemannian manifolds. SAGA is a variance-reduction technique that typically outperforms methods that rely on expensive full-gradient calculations, such as the stochastic variance-reduced gradient method. We show that MASAGA achieves a linear convergence rate with uniform sampling, and we further show that MASAGA achieves a faster convergence rate with non-uniform sampling. Our experiments show that MASAGA is faster than the recent Riemannian stochastic gradient descent algorithm for the classic problem of finding the leading eigenvector corresponding to the maximum eigenvalue.", "title": "" }, { "docid": "67417a87eff4ad3b1d2a906a1f17abd2", "text": "Epitaxial growth of A-A and A-B stacking MoS2 on WS2 via a two-step chemical vapor deposition method is reported. These epitaxial heterostructures show an atomic clean interface and a strong interlayer coupling, as evidenced by systematic characterization. Low-frequency Raman breathing and shear modes are observed in commensurate stacking bilayers for the first time; these can serve as persuasive fingerprints for interfacial quality and stacking configurations.", "title": "" }, { "docid": "67f46f2866852372a78c7745d9e29a63", "text": "The endosomal sorting complexes required for transport (ESCRTs) catalyse one of the most unusual membrane remodelling events in cell biology. ESCRT-I and ESCRT-II direct membrane budding away from the cytosol by stabilizing bud necks without coating the buds and without being consumed in the buds. ESCRT-III cleaves the bud necks from their cytosolic faces. ESCRT-III-mediated membrane neck cleavage is crucial for many processes, including the biogenesis of multivesicular bodies, viral budding, cytokinesis and, probably, autophagy.
Recent studies of ultrastructures induced by ESCRT-III overexpression in cells and the in vitro reconstitution of the budding and scission reactions have led to breakthroughs in understanding these remarkable membrane reactions.", "title": "" } ]
scidocsrr
66d70e9d7fece9d5c642f654dfc1c3a7
Characterization and management of exfoliative cheilitis: a single-center experience.
[ { "docid": "3e2f4a96462ed5a12fbe0462272d013c", "text": "Exfoliative cheilitis is an uncommon condition affecting the vermilion zone of the upper, lower or both lips. It is characterized by the continuous production and desquamation of unsightly, thick scales of keratin; when removed, these leave a normal appearing lip beneath. The etiology is unknown, although some cases may be factitious. Attempts at treatment by a wide variety of agents and techniques have been unsuccessful. Three patients with this disease are reported and its relationship to factitious cheilitis and candidal cheilitis is discussed.", "title": "" } ]
[ { "docid": "b7dd9d1cb89ec4aab21b9bb35cec1beb", "text": "Target detection is one of the important applications in the field of remote sensing. The Gaofen-3 (GF-3) Synthetic Aperture Radar (SAR) satellite launched by China is a powerful tool for maritime monitoring. This work aims at detecting ships in GF-3 SAR images using a new land masking strategy, the appropriate model for sea clutter and a neural network as the discrimination scheme. Firstly, the fully convolutional network (FCN) is applied to separate the sea from the land. Then, by analyzing the sea clutter distribution in GF-3 SAR images, we choose the probability distribution model of Constant False Alarm Rate (CFAR) detector from K-distribution, Gamma distribution and Rayleigh distribution based on a tradeoff between the sea clutter modeling accuracy and the computational complexity. Furthermore, in order to better implement CFAR detection, we also use truncated statistic (TS) as a preprocessing scheme and iterative censoring scheme (ICS) for boosting the performance of detector. Finally, we employ a neural network to re-examine the results as the discrimination stage. Experiment results on three GF-3 SAR images verify the effectiveness and efficiency of this approach.", "title": "" }, { "docid": "ca722c65f7089f6fad369ce0f3d42abd", "text": "A huge amount of texts available on the World Wide Web presents an unprecedented opportunity for information extraction (IE). One important assumption in IE is that frequent extractions are more likely to be correct. Sparse IE is hence a challenging task because no matter how big a corpus is, there are extractions supported by only a small amount of evidence in the corpus. However, there is limited research on sparse IE, especially in the assessment of the validity of sparse IEs. Motivated by this, we introduce a lightweight, explicit semantic approach for assessing sparse IE.1 We first use a large semantic network consisting of millions of concepts, entities, and attributes to explicitly model the context of any semantic relationship. Second, we learn from three semantic contexts using different base classifiers to select an optimal classification model for assessing sparse extractions. Finally, experiments show that as compared with several state-of-the-art approaches, our approach can significantly improve the F-score in the assessment of sparse extractions while maintaining the efficiency.", "title": "" }, { "docid": "92cafadc922255249108ce4a0dad9b98", "text": "Generative Adversarial Networks (GAN) have attracted much research attention recently, leading to impressive results for natural image generation. However, to date little success was observed in using GAN generated images for improving classification tasks. Here we attempt to explore, in the context of car license plate recognition, whether it is possible to generate synthetic training data using GAN to improve recognition accuracy. With a carefully-designed pipeline, we show that the answer is affirmative. First, a large-scale image set is generated using the generator of GAN, without manual annotation. Then, these images are fed to a deep convolutional neural network (DCNN) followed by a bidirectional recurrent neural network (BRNN) with long short-term memory (LSTM), which performs the feature learning and sequence labelling. Finally, the pre-trained model is fine-tuned on real images. 
Our experimental results on a few data sets demonstrate the effectiveness of using GAN images: an improvement of 7.5% over a strong baseline with moderate-sized real data being available. We show that the proposed framework achieves competitive recognition accuracy on challenging test datasets. We also leverage the depthwise separate convolution to construct a lightweight convolutional RNN, which is about half size and 2× faster on CPU. Combining this framework and the proposed pipeline, we make progress in performing accurate recognition on mobile and embedded devices.", "title": "" }, { "docid": "0801a2fd26263388a678d57bf7d2ff88", "text": "In the past, conventional i-vectors based on a Universal Background Model (UBM) have been successfully used as input features to adapt a Deep Neural Network (DNN) Acoustic Model (AM) for Automatic Speech Recognition (ASR). In contrast, this paper introduces Hidden Markov Model (HMM) based ivectors that use HMM state alignment information from an ASR system for estimating i-vectors. Further, we propose passing these HMM based i-vectors though an explicit non-linear hidden layer of a DNN before combining them with standard acoustic features, such as log filter bank energies (LFBEs). To improve robustness to mismatched adaptation data, we also propose estimating i-vectors in a causal fashion for training the DNN, restricting the connectivity among hidden nodes in the DNN and applying a max-pool non-linearity at selected hidden nodes. In our experiments, these techniques yield about 5-7% relative word error rate (WER) improvement over the baseline speaker independent system in matched condition, and a substantial WER reduction for mismatched adaptation data.", "title": "" }, { "docid": "30596d0edee0553117c5109eb948e1b6", "text": "Spatial relationships between objects provide important information for text-based image retrieval. As users are more likely to describe a scene from a real world perspective, using 3D spatial relationships rather than 2D relationships that assume a particular viewing direction, one of the main challenges is to infer the 3D structure that bridges images with users text descriptions. However, direct inference of 3D structure from images requires learning from large scale annotated data. Since interactions between objects can be reduced to a limited set of atomic spatial relations in 3D, we study the possibility of inferring 3D structure from a text description rather than an image, applying physical relation models to synthesize holistic 3D abstract object layouts satisfying the spatial constraints present in a textual description. We present a generic framework for retrieving images from a textual description of a scene by matching images with these generated abstract object layouts. Images are ranked by matching object detection outputs (bounding boxes) to 2D layout candidates (also represented by bounding boxes) which are obtained by projecting the 3D scenes with sampled camera directions. We validate our approach using public indoor scene datasets and show that our method outperforms baselines built upon object occurrence histograms and learned 2D pairwise relations.", "title": "" }, { "docid": "35293c16985878fca24b5a327fd52c72", "text": "In this paper we present a method for learning a discriminative classifier from unlabeled or partially labeled data. 
Our approach is based on an objective function that trades-off mutual information between observed examples and their predicted categorical class distribution, against robustness of the classifier to an adversarial generative model. The resulting algorithm can either be interpreted as a natural generalization of the generative adversarial networks (GAN) framework or as an extension of the regularized information maximization (RIM) framework to robust classification against an optimal adversary. We empirically evaluate our method – which we dub categorical generative adversarial networks (or CatGAN) – on synthetic data as well as on challenging image classification tasks, demonstrating the robustness of the learned classifiers. We further qualitatively assess the fidelity of samples generated by the adversarial generator that is learned alongside the discriminative classifier, and identify links between the CatGAN objective and discriminative clustering algorithms (such as RIM).", "title": "" }, { "docid": "f11bfcebaa54f78c26ce7534e30c3fb8", "text": "This article describes OpenTracker, an open software architecture that provides a framework for the different tasks involved in tracking input devices and processing multi-modal input data in virtual environments and augmented reality application. The OpenTracker framework eases the development and maintenance of hardware setups in a more flexible manner than what is typically offered by virtual reality development packages. This goal is achieved by using an object-oriented design based on XML, taking full advantage of this new technology by allowing to use standard XML tools for development, configuration and documentation. The OpenTracker engine is based on a data flow concept for multi-modal events. A multi-threaded execution model takes care of tunable performance. Transparent network access allows easy development of decoupled simulation models. Finally, the application developer's interface features both a time-based and an event based model, that can be used simultaneously, to serve a large range of applications. OpenTracker is a first attempt towards a \"'write once, input anywhere\"' approach to virtual reality application development. To support these claims, integration into an existing augmented reality system is demonstrated. We also show how a prototype tracking equipment for mobile augmented reality can be assembled from consumer input devices with the aid of OpenTracker. Once development is sufficiently mature, it is planned to make Open-Tracker available to the public under an open source software license.", "title": "" }, { "docid": "d8d068254761619ccbcd0bbab896d3b2", "text": "In this article we illustrate a methodology for introducing and maintaining ontology based knowledge management applications into enterprises with a focus on Knowledge Processes and Knowledge Meta Processes. While the former process circles around the usage of ontologies, the latter process guides their initial set up. We illustrate our methodology by an example from a case study on skills management.", "title": "" }, { "docid": "f7d30db4b04b33676d386953aebf503c", "text": "Microvascular free flap transfer currently represents one of the most popular methods for mandibularreconstruction. With the various free flap options nowavailable, there is a general consensus that no single kindof osseous or osteocutaneous flap can resolve the entire spectrum of mandibular defects. 
A suitable flap, therefore, should be selected according to the specific type of bone and soft tissue defect. We have developed an algorithm for mandibular reconstruction, in which the bony defect is termed as either “lateral” or “anterior” and the soft-tissue defect is classified as “none,” “skin or mucosal,” or “through-and-through.” For proper flap selection, the bony defect condition should be considered first, followed by the soft-tissue defect condition. When the bony defect is “lateral” and the soft tissue is not defective, the ilium is the best choice. When the bony defect is “lateral” and a small “skin or mucosal” soft-tissue defect is present, the fibula represents the optimal choice. When the bony defect is “lateral” and an extensive “skin or mucosal” or “through-and-through” soft-tissue defect exists, the scapula should be selected. When the bony defect is “anterior,” the fibula should always be selected. However, when an “anterior” bone defect also displays an “extensive” or “through-and-through” soft-tissue defect, the fibula should be usedwith other soft-tissue flaps. Flaps such as a forearm flap, anterior thigh flap, or rectus abdominis musculocutaneous flap are suitable, depending on the size of the soft-tissue defect.", "title": "" }, { "docid": "622b0d9526dfee6abe3a605fa83e92ed", "text": "Biomedical Image Processing is a growing and demanding field. It comprises of many different types of imaging methods likes CT scans, X-Ray and MRI. These techniques allow us to identify even the smallest abnormalities in the human body. The primary goal of medical imaging is to extract meaningful and accurate information from these images with the least error possible. Out of the various types of medical imaging processes available to us, MRI is the most reliable and safe. It does not involve exposing the body to any sorts of harmful radiation. This MRI can then be processed, and the tumor can be segmented. Tumor Segmentation includes the use of several different techniques. The whole process of detecting brain tumor from an MRI can be classified into four different categories: Pre-Processing, Segmentation, Optimization and Feature Extraction. This survey involves reviewing the research by other professionals and compiling it into one paper.", "title": "" }, { "docid": "b9779b478ee8714d5b0f6ce3e0857c9f", "text": "Sensor-based motion recognition integrates the emerging area of wearable sensors with novel machine learning techniques to make sense of low-level sensor data and provide rich contextual information in a real-life application. Although Human Activity Recognition (HAR) problem has been drawing the attention of researchers, it is still a subject of much debate due to the diverse nature of human activities and their tracking methods. Finding the best predictive model in this problem while considering different sources of heterogeneities can be very difficult to analyze theoretically, which stresses the need of an experimental study. Therefore, in this paper, we first create the most complete dataset, focusing on accelerometer sensors, with various sources of heterogeneities. We then conduct an extensive analysis on feature representations and classification techniques (the most comprehensive comparison yet with 293 classifiers) for activity recognition. Principal component analysis is applied to reduce the feature vector dimension while keeping essential information. 
The average classification accuracy of eight sensor positions is reported to be 96.44% ± 1.62% with 10-fold evaluation, whereas accuracy of 79.92% ± 9.68% is reached in the subject-independent evaluation. This study presents significant evidence that we can build predictive models for HAR problem under more realistic conditions, and still achieve highly accurate results.", "title": "" }, { "docid": "b10074ccf133a3c18a2029a5fe52f7ff", "text": "Maneuvering vessel detection and tracking (VDT), incorporated with state estimation and trajectory prediction, are important tasks for vessel navigational systems (VNSs), as well as vessel traffic monitoring and information systems (VTMISs) to improve maritime safety and security in ocean navigation. Although conventional VNSs and VTMISs are equipped with maritime surveillance systems for the same purpose, intelligent capabilities for vessel detection, tracking, state estimation, and navigational trajectory prediction are underdeveloped. Therefore, the integration of intelligent features into VTMISs is proposed in this paper. The first part of this paper is focused on detecting and tracking of a multiple-vessel situation. An artificial neural network (ANN) is proposed as the mechanism for detecting and tracking multiple vessels. In the second part of this paper, vessel state estimation and navigational trajectory prediction of a single-vessel situation are considered. An extended Kalman filter (EKF) is proposed for the estimation of vessel states and further used for the prediction of vessel trajectories. Finally, the proposed VTMIS is simulated, and successful simulation results are presented in this paper.", "title": "" }, { "docid": "c464a5f086f09d39b15beb3b3fbfec54", "text": "Sweet cherry, a non-climacteric fruit, is usually cold-stored during post-harvest to prevent over-ripening. The aim of the study was to evaluate the role of abscisic acid (ABA) on fruit growth and ripening of this fruit, considering as well its putative implication in over-ripening and effects on quality. We measured the endogenous concentrations of ABA during the ripening of sweet cherries (Prunus avium L. var. Prime Giant) collected from orchard trees and in cherries exposed to 4°C and 23°C during 10 days of post-harvest. Furthermore, we examined to what extent endogenous ABA concentrations were related to quality parameters, such as fruit biomass, anthocyanin accumulation and levels of vitamins C and E. Endogenous concentrations of ABA in fruits increased progressively during fruit growth and ripening on the tree, to decrease later during post-harvest at 23°C. Cold treatment, however, increased ABA levels and led to an inhibition of over-ripening. Furthermore, ABA levels positively correlated with anthocyanin and vitamin E levels during pre-harvest, but not during post-harvest. We conclude that ABA plays a major role in sweet cherry development, stimulating its ripening process and positively influencing quality parameters during pre-harvest. The possible influence of ABA preventing over-ripening in cold-stored sweet cherries is also discussed.", "title": "" }, { "docid": "3681c33edbb6f4d7ac370699b38e67c8", "text": "The volume of adult content on the world wide web is increasing rapidly. This makes an automatic detection of adult content a more challenging task, when eliminating access to ill-suited websites. 
Most pornographic webpage–filtering systems are based on n-gram, naïve Bayes, K-nearest neighbor, and keyword-matching mechanisms, which do not provide perfect extraction of useful data from unstructured web content. These systems have no reasoning capability to intelligently filter web content to classify medical webpages from adult content webpages. In addition, it is easy for children to access pornographic webpages due to the freely available adult content on the Internet. It creates a problem for parents wishing to protect their children from such unsuitable content. To solve these problems, this paper presents a support vector machine (SVM) and fuzzy ontology–based semantic knowledge system to systematically filter web content and to identify and block access to pornography. The proposed system classifies URLs into adult URLs and medical URLs by using a blacklist of censored webpages to provide accuracy and speed. The proposed fuzzy ontology then extracts web content to find website type (adult content, normal, and medical) and block pornographic content. In order to examine the efficiency of the proposed system, fuzzy ontology, and intelligent tools are developed using Protégé 5.1 and Java, respectively. Experimental analysis shows that the performance of the proposed system is efficient for automatically detecting and blocking adult content.", "title": "" }, { "docid": "b69e6bf80ad13a60819ae2ebbcc93ae0", "text": "Computational manufacturing technologies such as 3D printing hold the potential for creating objects with previously undreamed-of combinations of functionality and physical properties. Human designers, however, typically cannot exploit the full geometric (and often material) complexity of which these devices are capable. This STAR examines recent systems developed by the computer graphics community in which designers specify higher-level goals ranging from structural integrity and deformation to appearance and aesthetics, with the final detailed shape and manufacturing instructions emerging as the result of computation. It summarizes frameworks for interaction, simulation, and optimization, as well as documents the range of general objectives and domain-specific goals that have been considered. An important unifying thread in this analysis is that different underlying geometric and physical representations are necessary for different tasks: we document over a dozen classes of representations that have been used for fabrication-aware design in the literature. We analyze how these classes possess obvious advantages for some needs, but have also been used in creative manners to facilitate unexpected problem solutions.", "title": "" }, { "docid": "e7c77e563892c7807126c3feca79215a", "text": "With the rapid increase in Android device popularity, the capabilities that the diverse user base demands from Android have significantly exceeded its original design. As a result, people have to seek ways to obtain the permissions not directly offered to ordinary users. A typical way to do that is using the Android Debug Bridge (ADB), a developer tool that has been granted permissions to use critical system resources. Apps adopting this solution have combined tens of millions of downloads on Google Play. However, we found that such ADB-level capabilities are not well guarded by Android. A prominent example we investigated is the apps that perform programmatic screenshots, a much-needed capability Android fails to support. 
We found that all such apps in the market inadvertently expose this ADB capability to any party with the INTERNET permission on the same device. With this exposure, a malicious app can be built to stealthily and intelligently collect sensitive user data through screenshots. To understand the threat, we built Screenmilker, an app that can detect the right moment to monitor the screen and pick up a user’s password when she is typing in real time. We show that this can be done efficiently by leveraging the unique design of smartphone user interfaces and its public resources. Such an understanding also informs Android developers how to protect this screenshot capability, should they consider providing an interface to let third-party developers use it in the future, and more generally the security risks of the ADB workaround, a standard technique gaining popularity in app development. Based on the understanding, we present a mitigation mechanism that controls the exposure of the ADB capabilities only to authorized apps.", "title": "" }, { "docid": "dbd3234f12aff3ee0e01db8a16b13cad", "text": "Information visualization has traditionally limited itself to 2D representations, primarily due to the prevalence of 2D displays and report formats. However, there has been a recent surge in popularity of consumer grade 3D displays and immersive head-mounted displays (HMDs). The ubiquity of such displays enables the possibility of immersive, stereoscopic visualization environments. While techniques that utilize such immersive environments have been explored extensively for spatial and scientific visualizations, contrastingly very little has been explored for information visualization. In this paper, we present our considerations of layout, rendering, and interaction methods for visualizing graphs in an immersive environment. We conducted a user study to evaluate our techniques compared to traditional 2D graph visualization. The results show that participants answered significantly faster with a fewer number of interactions using our techniques, especially for more difficult tasks. While the overall correctness rates are not significantly different, we found that participants gave significantly more correct answers using our techniques for larger graphs.", "title": "" }, { "docid": "021f8f1a831e1f7a9b363bc240cc527b", "text": "This paper presents a new graph-based approach that induces synsets using synonymy dictionaries and word embeddings. First, we build a weighted graph of synonyms extracted from commonly available resources, such as Wiktionary. Second, we apply word sense induction to deal with ambiguous words. Finally, we cluster the disambiguated version of the ambiguous input graph into synsets. Our meta-clustering approach lets us use an efficient hard clustering algorithm to perform a fuzzy clustering of the graph. Despite its simplicity, our approach shows excellent results, outperforming five competitive state-of-the-art methods in terms of F-score on three gold standard datasets for English and Russian derived from large-scale manually constructed lexical resources.", "title": "" }, { "docid": "661b7615e660ae8e0a3b2a7294b9b921", "text": "In this paper, a very simple solution-based method is employed to coat amorphous MnO2 onto crystalline SnO2 nanowires grown on stainless steel substrate, which utilizes the better electronic conductivity of SnO2 nanowires as the supporting backbone to deposit MnO2 for supercapacitor electrodes. 
Cyclic voltammetry (CV) and galvanostatic charge/discharge methods have been carried out to study the capacitive properties of the SnO2/MnO2 composites. A specific capacitance (based on MnO2) as high as 637 F g(-1) is obtained at a scan rate of 2 mV s(-1) (800 F g(-1) at a current density of 1 A g(-1)) in 1 M Na2SO4 aqueous solution. The energy density and power density measured at 50 A g(-1) are 35.4 W h kg(-1) and 25 kW kg(-1), respectively, demonstrating the good rate capability. In addition, the SnO2/MnO2 composite electrode shows excellent long-term cyclic stability (less than 1.2% decrease of the specific capacitance is observed after 2000 CV cycles). The temperature-dependent capacitive behavior is also discussed. Such high-performance capacitive behavior indicates that the SnO2/MnO2 composite is a very promising electrode material for fabricating supercapacitors.", "title": "" }, { "docid": "342c95873edb988c3e055a1714753691", "text": "KEY CLINICAL MESSAGE\nThanatophoric dysplasia is typically a neonatal lethal condition. However, for those rare individuals who do survive, there is the development of seizures, progression of craniocervical stenosis, ventilator dependence, and limitations in motor and cognitive abilities. Families must be made aware of these issues during the discussion of management plans.", "title": "" } ]
scidocsrr
8afb9822659b7118f13c1a8847b836ab
Robust Sclera Recognition System With Novel Sclera Segmentation and Validation Techniques
[ { "docid": "ae087768fe3e7464d4f1f12a03ffc877", "text": "In this paper, we propose a novel sclera template generation, manipulation, and matching scheme for cancelable identity verification. Essentially, a region indicator matrix is generated based on an angular grid reference frame. For binary feature template generation, a random matrix and a local binary patterns (LBP) operator are utilized. Subsequently, the template is manipulated by user-specific random sequence attachment and bit shifting. Finally, matching is performed by a normalized Hamming distance comparison. Some experimental results on UBIRIS v1 database are included with discussion.", "title": "" } ]
[ { "docid": "e42c6d51324e5597d773e4c95960c76e", "text": "In this chapter, we discuss the design of tangible interaction techniques for Mixed Reality environments. We begin by recalling some conceptual models of tangible interaction. Then, we propose an engineering-oriented software/hardware co-design process, based on our experience in developing tangible user interfaces. We present three different tangible user interfaces for real-world applications, and analyse the feedback from the user studies that we conducted. In summary, we conclude that, since tangible user interfaces are part of the real world and provide a seamless interaction with virtual words, they are well-adapted to mix together reality and virtuality. Hence, tangible interaction optimizes a users' virtual tasks, especially in manipulating and controlling 3D digital data in 3D space.", "title": "" }, { "docid": "0cb490aacaf237bdade71479151ab8d2", "text": "This brief presents a high-speed parallel cyclic redundancy check (CRC) implementation based on unfolding, pipelining, and retiming algorithms. CRC architectures are first pipelined to reduce the iteration bound by using novel look-ahead pipelining methods and then unfolded and retimed to design high-speed parallel circuits. A comparison on commonly used generator polynomials between the proposed design and previously proposed parallel CRC algorithms shows that the proposed design can increase the speed by up to 25% and control or even reduce hardware cost", "title": "" }, { "docid": "70d0f96d42467e1c998bb9969de55a39", "text": "RGB-D cameras provide both a color image and a depth image which contains the real depth information about per-pixel. The richness of their data and the development of low-cost sensors have combined to present an attractive opportunity for mobile robotics research. In this paper, we describe a novel hybrid visual odometry using an RGB-D camera. Different from the original method, it is a pure visual odometry method without any other information, such as inertial data. The important key is hybrid, which means that the odometry can be executed in two different processes depending on the conditions. It consists of two parts, including a feature-based visual odometry and a direct visual odometry. Details about the algorithm are discussed in the paper. Especially, the switch conditions are described in detail. Beside, we evaluate the continuity and robustness for the system on public dataset. The experiments demonstrate that our system has more stable continuity and better robustness.", "title": "" }, { "docid": "d6477bab69274263bc208d19d9ec3ec2", "text": "Software APIs often contain too many methods and parameters for developers to memorize or navigate effectively. Instead, developers resort to finding answers through online search engines and systems such as Stack Overflow. However, the process of finding and integrating a working solution is often very time-consuming. Though code search engines have increased in quality, there remain significant language- and workflow-gaps in meeting end-user needs. Novice and intermediate programmers often lack the language to query, and the expertise in transferring found code to their task. To address this problem, we present CodeMend, a system to support finding and integration of code. CodeMend leverages a neural embedding model to jointly model natural language and code as mined from large Web and code datasets. We also demonstrate a novel, mixed-initiative, interface to support query and integration steps. 
Through CodeMend, end-users describe their goal in natural language. The system makes salient the relevant API functions, the lines in the end-user's program that should be changed, as well as proposing the actual change. We demonstrate the utility and accuracy of CodeMend through lab and simulation studies.", "title": "" }, { "docid": "ca7e4eafed84f5dbe5f996ac7c795c91", "text": "This paper examines the effects of review arousal on perceived helpfulness of online reviews, and on consumers’ emotional responses elicited by the reviews. Drawing on emotion theories in psychology and neuroscience, we focus on four emotions – anger, anxiety, excitement, and enjoyment that are common in the context of online reviews. The effects of the four emotions embedded in online reviews were examined using a controlled experiment. Our preliminary results show that reviews embedded with the four emotions (arousing reviews) are perceived to be more helpful than reviews without the emotions embedded (non-arousing reviews). However, reviews embedded with anxiety and enjoyment (low-arousal reviews) are perceived to be more helpfulness that reviews embedded with anger and excitement (high-arousal reviews). Furthermore, compared to reviews embedded with anger, reviews embedded with anxiety are associated with a higher EEG activity that is generally linked to negative emotions. The results suggest a non-linear relationship between review arousal and perceived helpfulness, which can be explained by the consumers’ emotional responses elicited by the reviews.", "title": "" }, { "docid": "7f61235bb8b77376936256dcf251ee0b", "text": "These practical guidelines for the biological treatment of personality disorders in primary care settings were developed by an international Task Force of the World Federation of Societies of Biological Psychiatry (WFSBP). They embody the results of a systematic review of all available clinical and scientific evidence pertaining to the biological treatment of three specific personality disorders, namely borderline, schizotypal and anxious/avoidant personality disorder in addition to some general recommendations for the whole field. The guidelines cover disease definition, classification, epidemiology, course and current knowledge on biological underpinnings, and provide a detailed overview on the state of the art of clinical management. They deal primarily with biological treatment (including antidepressants, neuroleptics, mood stabilizers and some further pharmacological agents) and discuss the relative significance of medication within the spectrum of treatment strategies that have been tested for patients with personality disorders, up to now. The recommendations should help the clinician to evaluate the efficacy spectrum of psychotropic drugs and therefore to select the drug best suited to the specific psychopathology of an individual patient diagnosed for a personality disorder.", "title": "" }, { "docid": "d063f8a20e2b6522fe637794e27d7275", "text": "Bag-of-Words (BoW) model based on SIFT has been widely used in large scale image retrieval applications. Feature quantization plays a crucial role in BoW model, which generates visual words from the high dimensional SIFT features, so as to adapt to the inverted file structure for indexing. 
Traditional feature quantization approaches suffer several problems: 1) high computational cost---visual words generation (codebook construction) is time consuming especially with large amount of features; 2) limited reliability---different collections of images may produce totally different codebooks and quantization error is hard to be controlled; 3) update inefficiency--once the codebook is constructed, it is not easy to be updated. In this paper, a novel feature quantization algorithm, scalar quantization, is proposed. With scalar quantization, a SIFT feature is quantized to a descriptive and discriminative bit-vector, of which the first tens of bits are taken out as code word. Our quantizer is independent of collections of images. In addition, the result of scalar quantization naturally lends itself to adapt to the classic inverted file structure for image indexing. Moreover, the quantization error can be flexibly reduced and controlled by efficiently enumerating nearest neighbors of code words.\n The performance of scalar quantization has been evaluated in partial-duplicate Web image search on a database of one million images. Experiments reveal that the proposed scalar quantization achieves a relatively 42% improvement in mean average precision over the baseline (hierarchical visual vocabulary tree approach), and also outperforms the state-of-the-art Hamming Embedding approach and soft assignment method.", "title": "" }, { "docid": "30f0583f57317b9def629c7e81c934d8", "text": "The growth in the size of networks and the number of vulnerabilities is increasingly challenging to manage network security. Especially, difficult to manage are multi-step attacks which are attacks using one or more vulnerabilities as stepping stones. Attack graphs are widely used for analyzing multi-step attacks. However, since these graphs had large sizes, it was too expensive to work with. In this paper, we propose a mechanism to manage attack graphs using a divide and conquer approach. To enhance efficiency of risk analyzer working with attack graphs, we converted a large graph to multiple sub-graphs named risk units and provide the light-weighted graphs to the analyzers. As a result, when algorithms of k-th order time complexity work with an attack graph with n vertices, a division having c overhead vertices reduces the workloads from n^k to r(n + c)^k, and the coefficient r becomes smaller geometrically from 2^(-k), depending on their division rounds. By this workload reduction, risk assessment processes which work with large size attack graphs become more scalable and resource practical.", "title": "" }, { "docid": "f7de95bb35f7f53518f6c86e06ce9e48", "text": "Domain Generation Algorithms (DGAs) are a popular technique used by contemporary malware for command-and-control (C&C) purposes. Such malware utilizes DGAs to create a set of domain names that, when resolved, provide information necessary to establish a link to a C&C server. Automated discovery of such domain names in real-time DNS traffic is critical for network security as it allows to detect infection, and, in some cases, take countermeasures to disrupt the communication and identify infected machines. Detection of the specific DGA malware family provides the administrator valuable information about the kind of infection and steps that need to be taken. In this paper we compare and evaluate machine learning methods that classify domain names as benign or DGA, and label the latter according to their malware family.
Unlike previous work, we select data for test and training sets according to observation time and known seeds. This allows us to assess the robustness of the trained classifiers for detecting domains generated by the same families at a different time or when seeds change. Our study includes tree ensemble models based on human-engineered features and deep neural networks that learn features automatically from domain names. We find that all state-of-the-art classifiers are significantly better at catching domain names from malware families with a time-dependent seed compared to time-invariant DGAs. In addition, when applying the trained classifiers on a day of real traffic, we find that many domain names unjustifiably are flagged as malicious, thereby revealing the shortcomings of relying on a standard whitelist for training a production grade DGA detection system.", "title": "" }, { "docid": "0fd635cfbcbd2d648f5c25ce2cb551a5", "text": "The main focus of relational learning for knowledge graph completion (KGC) lies in exploiting rich contextual information for facts. Many state-of-the-art models incorporate fact sequences, entity types, and even textual information. Unfortunately, most of them do not fully take advantage of rich structural information in a KG, i.e., connectivity patterns around each entity. In this paper, we propose a context-aware convolutional learning (CACL) model which jointly learns from entities and their multi-hop neighborhoods. Since we directly utilize the connectivity patterns contained in each multi-hop neighborhood, the structural role similarity among entities can be better captured, resulting in more informative entity and relation embeddings. Specifically, CACL collects entities and relations from the multi-hop neighborhood as contextual information according to their relative importance and uniquely maps them to a linear vector space. Our convolutional architecture leverages a deep learning technique to represent each entity along with its linearly mapped contextual information. Thus, we can elaborately extract the features of key connectivity patterns from the context and incorporate them into a score function which evaluates the validity of facts. Experimental results on the newest datasets show that CACL outperforms existing approaches by successfully enriching embeddings with neighborhood information.", "title": "" }, { "docid": "f5e44676e9ce8a06bcdb383852fb117f", "text": "We explore techniques to significantly improve the compute efficiency and performance of Deep Convolution Networks without impacting their accuracy. To improve the compute efficiency, we focus on achieving high accuracy with extremely low-precision (2-bit) weight networks, and to accelerate the execution time, we aggressively skip operations on zero-values. We achieve the highest reported accuracy of 76.6% Top-1/93% Top-5 on the Imagenet object classification challenge with low-precision network while reducing the compute requirement by ∼3× compared to a full-precision network that achieves similar accuracy. 
Furthermore, to fully exploit the benefits of our low-precision networks, we build a deep learning accelerator core, DLAC, that can achieve up to 1 TFLOP/mm2 equivalent for single-precision floating-point operations (∼2 TFLOP/mm2 for half-precision), which is ∼5× better than Linear Algebra Core [16] and ∼4× better than previous deep learning accelerator proposal [8].", "title": "" }, { "docid": "092b55732087aef57a1164c228c00d8b", "text": "Penetration of advanced sensor systems such as advanced metering infrastructure (AMI), high-frequency overhead and underground current and voltage sensors have been increasing significantly in power distribution systems over the past few years. According to U.S. energy information administration (EIA), the aggregated AMI installation experienced a 17 times increase from 2007 to 2012. The AMI usually collects electricity usage data every 15 minute, instead of once a month. This is a 3,000 fold increase in the amount of data utilities would have processed in the past. It is estimated that the electricity usage data collected through AMI in the U.S. amount to well above 100 terabytes in 2012. To unleash full value of the complex data sets, innovative big data algorithms need to be developed to transform the way we operate and plan for the distribution system. This paper not only proposes promising applications but also provides an in-depth discussion of technical and regulatory challenges and risks of big data analytics in power distribution systems. In addition, a flexible system architecture design is proposed to handle heterogeneous big data analysis workloads.", "title": "" }, { "docid": "f81dd0c86a7b45e743e4be117b4030c2", "text": "Stock market prediction is of great importance for financial analysis. Traditionally, many studies only use the news or numerical data for the stock market prediction. In the recent years, in order to explore their complementary, some studies have been conducted to equally treat dual sources of information. However, numerical data often play a much more important role compared with the news. In addition, the existing simple combination cannot exploit their complementarity. In this paper, we propose a numerical-based attention (NBA) method for dual sources stock market prediction. Our major contributions are summarized as follows. First, we propose an attention-based method to effectively exploit the complementarity between news and numerical data in predicting the stock prices. The stock trend information hidden in the news is transformed into the importance distribution of numerical data. Consequently, the news is encoded to guide the selection of numerical data. Our method can effectively filter the noise and make full use of the trend information in news. Then, in order to evaluate our NBA model, we collect news corpus and numerical data to build three datasets from two sources: the China Security Index 300 (CSI300) and the Standard & Poor’s 500 (S&P500). Extensive experiments are conducted, showing that our NBA is superior to previous models in dual sources stock price prediction.", "title": "" }, { "docid": "137449952a30730185552ed6fca4d8ba", "text": "BACKGROUND\nPoor sleep quality and depression negatively impact the health-related quality of life of patients with type 2 diabetes, but the combined effect of the two factors is unknown. 
This study aimed to assess the interactive effects of poor sleep quality and depression on the quality of life in patients with type 2 diabetes.\n\n\nMETHODS\nPatients with type 2 diabetes (n = 944) completed the Diabetes Specificity Quality of Life scale (DSQL) and questionnaires on sleep quality and depression. The products of poor sleep quality and depression were added to the logistic regression model to evaluate their multiplicative interactions, which were expressed as the relative excess risk of interaction (RERI), the attributable proportion (AP) of interaction, and the synergy index (S).\n\n\nRESULTS\nPoor sleep quality and depressive symptoms both increased DSQL scores. The co-presence of poor sleep quality and depressive symptoms significantly reduced DSQL scores by a factor of 3.96 on biological interaction measures. The relative excess risk of interaction was 1.08. The combined effect of poor sleep quality and depressive symptoms was observed only in women.\n\n\nCONCLUSIONS\nPatients with both depressive symptoms and poor sleep quality are at an increased risk of reduction in diabetes-related quality of life, and this risk is particularly high for women due to the interaction effect. Clinicians should screen for and treat sleep difficulties and depressive symptoms in patients with type 2 diabetes.", "title": "" }, { "docid": "d690cfa0fbb63e53e3d3f7a1c7a6a442", "text": "Ambient intelligence has acquired great importance in recent years and requires the development of new innovative solutions. This paper presents a distributed telemonitoring system, aimed at improving healthcare and assistance to dependent people at their homes. The system implements a service-oriented architecture based platform, which allows heterogeneous wireless sensor networks to communicate in a distributed way independent of time and location restrictions. This approach provides the system with a higher ability to recover from errors and a better flexibility to change their behavior at execution time. Preliminary results are presented in this paper.", "title": "" }, { "docid": "8feb5dce809acf0efb63d322f0526fcf", "text": "Recent studies of eye movements in reading and other information processing tasks, such as music reading, typing, visual search, and scene perception, are reviewed. The major emphasis of the review is on reading as a specific example of cognitive processing. Basic topics discussed with respect to reading are (a) the characteristics of eye movements, (b) the perceptual span, (c) integration of information across saccades, (d) eye movement control, and (e) individual differences (including dyslexia). Similar topics are discussed with respect to the other tasks examined. The basic theme of the review is that eye movement data reflect moment-to-moment cognitive processes in the various tasks examined. Theoretical and practical considerations concerning the use of eye movement data are also discussed.", "title": "" }, { "docid": "c6399386c27aa8d039094d23e76aed8e", "text": "Spin systems and harmonic oscillators comprise two archetypes in quantum mechanics. The spin-1/2 system, with two quantum energy levels, is essentially the most nonlinear system found in nature, whereas the harmonic oscillator represents the most linear, with an infinite number of evenly spaced quantum levels. 
A significant difference between these systems is that a two-level spin can be prepared in an arbitrary quantum state using classical excitations, whereas classical excitations applied to an oscillator generate a coherent state, nearly indistinguishable from a classical state. Quantum behaviour in an oscillator is most obvious in Fock states, which are states with specific numbers of energy quanta, but such states are hard to create. Here we demonstrate the controlled generation of multi-photon Fock states in a solid-state system. We use a superconducting phase qubit, which is a close approximation to a two-level spin system, coupled to a microwave resonator, which acts as a harmonic oscillator, to prepare and analyse pure Fock states with up to six photons. We contrast the Fock states with coherent states generated using classical pulses applied directly to the resonator.", "title": "" }, { "docid": "1eb2715d2dfec82262c7b3870db9b649", "text": "Leadership is a crucial component to the success of academic health science centers (AHCs) within the shifting U.S. healthcare environment. Leadership talent acquisition and development within AHCs is immature and approaches to leadership and its evolution will be inevitable to refine operations to accomplish the critical missions of clinical service delivery, the medical education continuum, and innovations toward discovery. To reach higher organizational outcomes in AHCs requires a reflection on what leadership approaches are in place and how they can better support these missions. Transactional leadership approaches are traditionally used in AHCs and this commentary suggests that movement toward a transformational approach is a performance improvement opportunity for AHC leaders. This commentary describes the transactional and transformational approaches, how they complement each other, and how to access the transformational approach. Drawing on behavioral sciences, suggestions are made on how a transactional leader can change her cognitions to align with the four dimensions of the transformational leadership approach.", "title": "" }, { "docid": "408f58b7dd6cb1e6be9060f112773888", "text": "Semantic hashing has become a powerful paradigm for fast similarity search in many information retrieval systems. While fairly successful, previous techniques generally require two-stage training, and the binary constraints are handled ad-hoc. In this paper, we present an end-to-end Neural Architecture for Semantic Hashing (NASH), where the binary hashing codes are treated as Bernoulli latent variables. A neural variational inference framework is proposed for training, where gradients are directly backpropagated through the discrete latent variable to optimize the hash function. We also draw connections between proposed method and rate-distortion theory, which provides a theoretical foundation for the effectiveness of the proposed framework. Experimental results on three public datasets demonstrate that our method significantly outperforms several state-of-the-art models on both unsupervised and supervised scenarios.", "title": "" }, { "docid": "a433ebaeeb5dc5b68976b3ecb770c0cd", "text": "1 abstract The importance of the inspection process has been magnified by the requirements of the modern manufacturing environment. In electronics mass-production manufacturing facilities, an attempt is often made to achieve 100 % quality assurance of all parts, subassemblies, and finished goods. 
A variety of approaches for automated visual inspection of printed circuits have been reported over the last two decades. In this survey, algorithms and techniques for the automated inspection of printed circuit boards are examined. A classification tree for these algorithms is presented and the algorithms are grouped according to this classification. This survey concentrates mainly on image analysis and fault detection strategies, these also include the state-of-the-art techniques. A summary of the commercial PCB inspection systems is also presented. 2 Introduction Many important applications of vision are found in the manufacturing and defense industries. In particular, the areas in manufacturing where vision plays a major role are inspection, measurements, and some assembly tasks. The order among these topics closely reflects the manufacturing needs. In most mass-production manufacturing facilities, an attempt is made to achieve 100% quality assurance of all parts, subassemblies, and finished products. One of the most difficult tasks in this process is that of inspecting for visual appearance-an inspection that seeks to identify both functional and cosmetic defects. With the advances in computers (including high speed, large memory and low cost) image processing, pattern recognition, and artificial intelligence have resulted in better and cheaper equipment for industrial image analysis. This development has made the electronics industry active in applying automated visual inspection to manufacturing/fabricating processes that include printed circuit boards, IC chips, photomasks, etc. Nello [1] gives a summary of the machine vision inspection applications in electronics industry.", "title": "" } ]
scidocsrr
1db8ca5e4f9226fa3e746ebfe8f93ac3
Happiness Is Everything, or Is It? Explorations on the Meaning of Psychological Well-Being
[ { "docid": "10990c819cbc6dfb88b4c2de829f27f1", "text": "Building on the fraudulent foundation established by atheist Sigmund Freud, psychoanalyst Erik Erikson has proposed a series of eight \"life cycles,\" each with an accompanying \"life crisis,\" to explain both human behavior and man's religious tendencies. Erikson's extensive application of his theories to the life of Martin Luther reveals his contempt for the living God who has revealed Himself in Scripture. This paper will consider Erikson's view of man, sin, redemption, and religion, along with an analysis of his eight \"life cycles.\" Finally, we will critique his attempted psychoanalysis of Martin Luther.", "title": "" } ]
[ { "docid": "cb086fa252f4db172b9c7ac7e1081955", "text": "Drivable free space information is vital for autonomous vehicles that have to plan evasive maneuvers in realtime. In this paper, we present a new efficient method for environmental free space detection with laser scanner based on 2D occupancy grid maps (OGM) to be used for Advanced Driving Assistance Systems (ADAS) and Collision Avoidance Systems (CAS). Firstly, we introduce an enhanced inverse sensor model tailored for high-resolution laser scanners for building OGM. It compensates the unreflected beams and deals with the ray casting to grid cells accuracy and computational effort problems. Secondly, we introduce the ‘vehicle on a circle for grid maps’ map alignment algorithm that allows building more accurate local maps by avoiding the computationally expensive inaccurate operations of image sub-pixel shifting and rotation. The resulted grid map is more convenient for ADAS features than existing methods, as it allows using less memory sizes, and hence, results into a better real-time performance. Thirdly, we present an algorithm to detect what we call the ‘in-sight edges’. These edges guarantee modeling the free space area with a single polygon of a fixed number of vertices regardless the driving situation and map complexity. The results from real world experiments show the effectiveness of our approach. Keywords— Occupancy Grid Map; Static Free Space Detection; Advanced Driving Assistance Systems; laser scanner; autonomous driving", "title": "" }, { "docid": "94f39416ba9918e664fb1cd48732e3ae", "text": "In this paper, a nanostructured biosensor is developed to detect glucose in tear by using fluorescence resonance energy transfer (FRET) quenching mechanism. The designed FRET pair, including the donor, CdSe/ZnS quantum dots (QDs), and the acceptor, dextran-binding malachite green (MG-dextran), was conjugated to concanavalin A (Con A), an enzyme with specific affinity to glucose. In the presence of glucose, the quenched emission of QDs through the FRET mechanism is restored by displacing the dextran from Con A. To have a dual-modulation sensor for convenient and accurate detection, the nanostructured FRET sensors were assembled onto a patterned ZnO nanorod array deposited on the synthetic silicone hydrogel. Consequently, the concentration of glucose detected by the patterned sensor can be converted to fluorescence spectra with high signal-to-noise ratio and calibrated image pixel value. The photoluminescence intensity of the patterned FRET sensor increases linearly with increasing concentration of glucose from 0.03mmol/L to 3mmol/L, which covers the range of tear glucose levels for both diabetics and healthy subjects. Meanwhile, the calibrated values of pixel intensities of the fluorescence images captured by a handhold fluorescence microscope increases with increasing glucose. Four male Sprague-Dawley rats with different blood glucose concentrations were utilized to demonstrate the quick response of the patterned FRET sensor to 2µL of tear samples.", "title": "" }, { "docid": "8ccb5aeb084c9a6223dc01fa296d908e", "text": "Effective chronic disease management is essential to improve positive health outcomes, and incentive strategies are useful in promoting self-care with longevity. Gamification, applied with mHealth (mobile health) applications, has the potential to better facilitate patient self-management. 
This review article addresses a knowledge gap around the effective use of gamification design principles, or mechanics, in developing mHealth applications. Badges, leaderboards, points and levels, challenges and quests, social engagement loops, and onboarding are mechanics that comprise gamification. These mechanics are defined and explained from a design and development perspective. Health and fitness applications with gamification mechanics include: bant which uses points, levels, and social engagement, mySugr which uses challenges and quests, RunKeeper which uses leaderboards as well as social engagement loops and onboarding, Fitocracy which uses badges, and Mango Health, which uses points and levels. Specific design considerations are explored, an example of the efficacy of a gamified mHealth implementation in facilitating improved self-management is provided, limitations to this work are discussed, a link between the principles of gaming and gamification in health and wellness technologies is provided, and suggestions for future work are made. We conclude that gamification could be leveraged in developing applications with the potential to better facilitate self-management in persons with chronic conditions.", "title": "" }, { "docid": "398c791338adf824a81a2bfb8f35c6bb", "text": "Hybrid Reality Environments represent a new kind of visualization spaces that blur the line between virtual environments and high resolution tiled display walls. This paper outlines the design and implementation of the CAVE2 TM Hybrid Reality Environment. CAVE2 is the world’s first near-seamless flat-panel-based, surround-screen immersive system. Unique to CAVE2 is that it will enable users to simultaneously view both 2D and 3D information, providing more flexibility for mixed media applications. CAVE2 is a cylindrical system of 24 feet in diameter and 8 feet tall, and consists of 72 near-seamless, off-axisoptimized passive stereo LCD panels, creating an approximately 320 degree panoramic environment for displaying information at 37 Megapixels (in stereoscopic 3D) or 74 Megapixels in 2D and at a horizontal visual acuity of 20/20. Custom LCD panels with shifted polarizers were built so the images in the top and bottom rows of LCDs are optimized for vertical off-center viewingallowing viewers to come closer to the displays while minimizing ghosting. CAVE2 is designed to support multiple operating modes. In the Fully Immersive mode, the entire room can be dedicated to one virtual simulation. In 2D model, the room can operate like a traditional tiled display wall enabling users to work with large numbers of documents at the same time. In the Hybrid mode, a mixture of both 2D and 3D applications can be simultaneously supported. The ability to treat immersive work spaces in this Hybrid way has never been achieved before, and leverages the special abilities of CAVE2 to enable researchers to seamlessly interact with large collections of 2D and 3D data. To realize this hybrid ability, we merged the Scalable Adaptive Graphics Environment (SAGE) a system for supporting 2D tiled displays, with Omegalib a virtual reality middleware supporting OpenGL, OpenSceneGraph and Vtk applications.", "title": "" }, { "docid": "f042dd6b78c65541e657c48452a1e0e4", "text": "We present a general framework for semantic role labeling. The framework combines a machine-learning technique with an integer linear programming-based inference procedure, which incorporates linguistic and structural constraints into a global decision process. 
Within this framework, we study the role of syntactic parsing information in semantic role labeling. We show that full syntactic parsing information is, by far, most relevant in identifying the argument, especially, in the very first stagethe pruning stage. Surprisingly, the quality of the pruning stage cannot be solely determined based on its recall and precision. Instead, it depends on the characteristics of the output candidates that determine the difficulty of the downstream problems. Motivated by this observation, we propose an effective and simple approach of combining different semantic role labeling systems through joint inference, which significantly improves its performance. Our system has been evaluated in the CoNLL-2005 shared task on semantic role labeling, and achieves the highest F1 score among 19 participants.", "title": "" }, { "docid": "7138c13d88d87df02c7dbab4c63328c4", "text": "Banisteriopsis caapi is the basic ingredient of ayahuasca, a psychotropic plant tea used in the Amazon for ritual and medicinal purposes, and by interested individuals worldwide. Animal studies and recent clinical research suggests that B. caapi preparations show antidepressant activity, a therapeutic effect that has been linked to hippocampal neurogenesis. Here we report that harmine, tetrahydroharmine and harmaline, the three main alkaloids present in B. caapi, and the harmine metabolite harmol, stimulate adult neurogenesis in vitro. In neurospheres prepared from progenitor cells obtained from the subventricular and the subgranular zones of adult mice brains, all compounds stimulated neural stem cell proliferation, migration, and differentiation into adult neurons. These findings suggest that modulation of brain plasticity could be a major contribution to the antidepressant effects of ayahuasca. They also expand the potential application of B. caapi alkaloids to other brain disorders that may benefit from stimulation of endogenous neural precursor niches.", "title": "" }, { "docid": "f69113c023a9900be69fd6109c6d5d30", "text": "The IETF designed the Routing Protocol for Low power and Lossy Networks (RPL) as a candidate for use in constrained networks. Keeping in mind the different requirements of such networks, the protocol was designed to support multiple routing topologies, called DODAGs, constructed using different objective functions, so as to optimize routing based on divergent metrics. A DODAG versioning system is incorporated into RPL in order to ensure that the topology does not become stale and that loops are not formed over time. However, an attacker can exploit this versioning system to gain an advantage in the topology and also acquire children that would be forced to route packets via this node. In this paper we present a study of possible attacks that exploit the DODAG version system. The impact on overhead, delivery ratio, end-to-end delay, rank inconsistencies and loops is studied.", "title": "" }, { "docid": "7f0a6e9a1bcdf8b12ac4273138eb7523", "text": "The graph-search algorithms developed between 60s and 80s were widely used in many fields, from robotics to video games. The A* algorithm shall be mentioned between some of the most important solutions explicitly oriented to motion-robotics, improving the logic of graph search with heuristic principles inside the loop. Nevertheless, one of the most important drawbacks of the A* algorithm resides in the heading constraints connected with the grid characteristics. 
Different solutions were developed in the last years to cope with this problem, based on postprocessing algorithms or on improvements of the graph-search algorithm itself. A very important one is Theta* that refines the graph search allowing to obtain paths with “any” heading. In the last two years, the Flight Mechanics Research Group of Politecnico di Torino studied and implemented different path planning algorithms. A Matlab based planning tool was developed, collecting four separate approaches: geometric predefined trajectories, manual waypoint definition, automatic waypoint distribution (i.e. optimizing camera payload capabilities) and a comprehensive A*-based algorithm used to generate paths, minimizing risk of collision with orographic obstacles. The tool named PCube exploits Digital Elevation Maps (DEMs) to assess the risk maps and it can be used to generate waypoint sequences for UAVs autopilots. In order to improve the A*-based algorithm, the solution is extended to tri-dimensional environments implementing a more effective graph search (based on Theta*). In this paper the application of basic Theta* to tridimensional path planning will be presented. Particularly, the algorithm is applied to orographic obstacles and in urban environments, to evaluate the solution for different kinds of obstacles. Finally, a comparison with the A* algorithm will be introduced as a metric of the algorithm", "title": "" }, { "docid": "cd527e5a6aefe889ee4ac56d70cc834e", "text": "In this paper we analyze Tendermint proposed in [7], one of the most popular blockchains based on PBFT Consensus. The current paper dissects Tendermint under various system communication models and Byzantine adversaries. Our methodology consists in identifying the algorithmic principles of Tendermint necessary for a specific combination of communication model adversary. 
This methodology allowed to identify bugs [3] in preliminary versions of the protocol ([19], [7]) and to prove its correctness under the most adversarial conditions: an eventually synchronous communication model and asymmetric Byzantine faults.", "title": "" }, { "docid": "7332f08a9447fd321f7e40609cfabfc0", "text": "Requirements Engineering und Management gewinnen in allen Bereichen der Systementwicklung stetig an Bedeutung. Zusammenhänge zwischen der Qualität der Anforderungserhebung und des Projekterfolges, wie von der Standish Group im jährlich erscheinenden Chaos Report [Standish 2004] untersucht, sind den meisten ein Begriff. Bei der Erhebung von Anforderungen treten immer wieder ähnliche Probleme auf. Dabei spielen unterschiedliche Faktoren und Gegebenheiten eine Rolle, die beachtet werden müssen. Es gibt mehrere Möglichkeiten, die Tücken der Analysephase zu meistern; eine Hilfe bietet der Einsatz der in diesem Artikel vorgestellten Methoden zur Anforderungserhebung. Auch wenn die Anforderungen korrekt und vollständig erhoben sind, ist es eine Kunst, diese zu verwalten. In der heutigen Zeit der verteilten Projekte ist es eine Herausforderung, die Dokumentation für jeden Beteiligten ständig verfügbar, nachvollziehbar und eindeutig zu erhalten. Requirements Management rüstet den Analytiker mit Methoden aus, um sich dieser Herausforderung zu stellen. Änderungen von Stakeholder-Wünschen an bestehenden Anforderungen stellen besondere Ansprüche an das Requirements Management, doch mithilfe eines Change-Management-Prozesses können auch diese bewältigt werden. Metriken und Traceability unterstützen bei der Aufwandsabschätzung für Änderungsanträge.", "title": "" }, { "docid": "09f36704e0bbd914f7ce6b5c7e0da228", "text": "Studies have repeatedly shown that users are increasingly concerned about their privacy when they go online. In response to both public interest and regulatory pressures, privacy policies have become almost ubiquitous. An estimated 77% of websites now post a privacy policy. These policies differ greatly from site to site, and often address issues that are different from those that users care about. They are in most cases the users' only source of information.This paper evaluates the usability of online privacy policies, as well as the practice of posting them. We analyze 64 current privacy policies, their accessibility, writing, content and evolution over time. We examine how well these policies meet user needs and how they can be improved. We determine that significant changes need to be made to current practice to meet regulatory and usability requirements.", "title": "" }, { "docid": "293f102f8e6cedb4b93856224f081272", "text": "In this paper, we propose a decision-based, signal-adaptive median filtering algorithm for removal of impulse noise. Our algorithm achieves accurate noise detection and high SNR measures without smearing the fine details and edges in the image. The notion of homogeneity level is defined for pixel values based on their global and local statistical properties. The cooccurrence matrix technique is used to represent the correlations between a pixel and its neighbors, and to derive the upper and lower bound of the homogeneity level. Noise detection is performed at two stages: noise candidates are first selected using the homogeneity level, and then a refining process follows to eliminate false detections. 
The noise detection scheme does not use a quantitative decision measure, but uses qualitative structural information, and it is not subject to burdensome computations for optimization of the threshold values. Empirical results indicate that our scheme performs significantly better than other median filters, in terms of noise suppression and detail preservation.", "title": "" }, { "docid": "32b2cd6b63c6fc4de5b086772ef9d319", "text": "Link prediction for knowledge graphs is the task of predicting missing relationships between entities. Previous work on link prediction has focused on shallow, fast models which can scale to large knowledge graphs. However, these models learn less expressive features than deep, multi-layer models – which potentially limits performance. In this work we introduce ConvE, a multi-layer convolutional network model for link prediction, and report state-of-the-art results for several established datasets. We also show that the model is highly parameter efficient, yielding the same performance as DistMult and R-GCN with 8x and 17x fewer parameters. Analysis of our model suggests that it is particularly effective at modelling nodes with high indegree – which are common in highlyconnected, complex knowledge graphs such as Freebase and YAGO3. In addition, it has been noted that the WN18 and FB15k datasets suffer from test set leakage, due to inverse relations from the training set being present in the test set – however, the extent of this issue has so far not been quantified. We find this problem to be severe: a simple rule-based model can achieve state-of-the-art results on both WN18 and FB15k. To ensure that models are evaluated on datasets where simply exploiting inverse relations cannot yield competitive results, we investigate and validate several commonly used datasets – deriving robust variants where necessary. We then perform experiments on these robust datasets for our own and several previously proposed models, and find that ConvE achieves state-of-the-art Mean Reciprocal Rank across all datasets.", "title": "" }, { "docid": "8a077d963a9df5528583388c3e1a229d", "text": "Context-aware recommender systems improve context-free recommenders by exploiting the knowledge of the contextual situation under which a user experienced and rated an item. They use data sets of contextually-tagged ratings to predict how the target user would evaluate (rate) an item in a given contextual situation, with the ultimate goal to recommend the items with the best estimated ratings. This paper describes and evaluates a pre-filtering approach to context-aware recommendation, called distributional-semantics pre-filtering (DSPF), which exploits in a novel way the distributional semantics of contextual conditions to build more precise context-aware rating prediction models. In DSPF, given a target contextual situation (of a target user), a matrix-factorization predictive model is built by using the ratings tagged with the contextual situations most similar to the target one. Then, this model is used to compute rating predictions and identify recommendations for that specific target contextual situation. In the proposed approach, the definition of the similarity of contextual situations is based on the distributional semantics of their composing conditions: situations are similar if they influence the user’s ratings in a similar way. This notion of similarity has the advantage of being directly derived from the rating data; hence it does not require a context taxonomy. 
We analyze the effectiveness of DSPF varying the specific method used to compute the situation-to-situation similarity. We also show how DSPF can be further improved by using clustering techniques. Finally, we evaluate DSPF on several contextually-tagged data sets and demonstrate that it outperforms state-of-the-art context-aware approaches.", "title": "" }, { "docid": "9ccbd750bd39e0451d98a7371c2b0914", "text": "The aim of this study was to assess the effect of inspiratory muscle training (IMT) on resistance to fatigue of the diaphragm (D), parasternal (PS), sternocleidomastoid (SCM) and scalene (SC) muscles in healthy humans during exhaustive exercise. Daily inspiratory muscle strength training was performed for 3 weeks in 10 male subjects (at a pressure threshold load of 60% of maximal inspiratory pressure (MIP) for the first week, 70% of MIP for the second week, and 80% of MIP for the third week). Before and after training, subjects performed an incremental cycle test to exhaustion. Maximal inspiratory pressure and EMG-analysis served as indices of inspiratory muscle fatigue assessment. The before-to-after exercise decreases in MIP and centroid frequency (fc) of the EMG (D, PS, SCM, and SC) power spectrum (P<0.05) were observed in all subjects before the IMT intervention. Such changes were absent after the IMT. The study found that in healthy subjects, IMT results in significant increase in MIP (+18%), a delay of inspiratory muscle fatigue during exhaustive exercise, and a significant improvement in maximal work performance. We conclude that the IMT elicits resistance to the development of inspiratory muscles fatigue during high-intensity exercise.", "title": "" }, { "docid": "4bfbf4b3135241b2e8d61a954c8fe7c8", "text": "This study examined adolescents' emotional reactivity to parents' marital conflict as a mediator of the association between triangulation and adolescents' internalizing problems in a sample of 2-parent families (N = 416)[corrected]. Four waves of annual, multiple-informant data were analyzed (youth ages 11-15 years). The authors used structural equation modeling and found that triangulation was associated with increases in adolescents' internalizing problems, controlling for marital hostility and adolescent externalizing problems. There also was an indirect pathway from triangulation to internalizing problems across time through youths' emotional reactivity. Moderating analyses indicated that the 2nd half of the pathway, the association between emotional reactivity and increased internalizing problems, characterized youth with lower levels of hopefulness and attachment to parents. The findings help detail why triangulation is a risk factor for adolescents' development and which youth will profit most from interventions focused on emotional regulation.", "title": "" }, { "docid": "6a6b47d95cf79792e053efde77bee014", "text": "Wind energy conversion systems have become a focal point in the research of renewable energy sources. This is in no small part due to the rapid advances in the size of wind generators as well as the development of power electronics and their applicability in wind energy extraction. This paper provides a comprehensive review of past and present converter topologies applicable to permanent magnet generators, induction generators, synchronous generators and doubly fed induction generators. The many different generator-converter combinations are compared on the basis of topology, cost, efficiency, power consumption and control complexity. 
The features of each generator-converter configuration are considered in the context of wind turbine systems", "title": "" }, { "docid": "bbb9ac7170663ce653ec9cb40db8695b", "text": "What we believe to be a novel three-dimensional (3D) phase unwrapping algorithm is proposed to unwrap 3D wrapped-phase volumes. It depends on a quality map to unwrap the most reliable voxels first and the least reliable voxels last. The technique follows a discrete unwrapping path to perform the unwrapping process. The performance of this technique was tested on both simulated and real wrapped-phase maps. And it is found to be robust and fast compared with other 3D phase unwrapping algorithms.", "title": "" } ]
scidocsrr
bd56c9412a60ba12ec9f8bf2a266c83c
Distance Fields for Rapid Collision Detection in Physically Based Modeling
[ { "docid": "05894f874111fd55bd856d4768c61abe", "text": "Collision detection is of paramount importance for many applications in computer graphics and visualization. Typically, the input to a collision detection algorithm is a large number of geometric objects comprising an environment, together with a set of objects moving within the environment. In addition to determining accurately the contacts that occur between pairs of objects, one needs also to do so at real-time rates. Applications such as haptic force-feedback can require over 1,000 collision queries per second. In this paper, we develop and analyze a method, based on bounding-volume hierarchies, for efficient collision detection for objects moving within highly complex environments. Our choice of bounding volume is to use a “discrete orientation polytope” (“k-dop”), a convex polytope whose facets are determined by halfspaces whose outward normals come from a small fixed set of k orientations. We compare a variety of methods for constructing hierarchies (“BV-trees”) of bounding k-dops. Further, we propose algorithms for maintaining an effective BV-tree of k-dops for moving objects, as they rotate, and for performing fast collision detection using BV-trees of the moving objects and of the environment. Our algorithms have been implemented and tested. We provide experimental evidence showing that our approach yields substantially faster collision detection than previous methods.", "title": "" } ]
[ { "docid": "db9e401e4c2bdee1187389c340541877", "text": "We show in this paper how some algebraic methods can be used for fingerprint matching. The described technique is able to compute the score of a match also when the template and test fingerprints have been not correctly acquired. In particular, the match is independent of translations, rotations and scaling transformations of the template. The technique is also able to compute a match score when part of the fingerprint image is incorrect or missed. The algorithm is being implemented in CoCoA, a computer algebra system for doing computations in Commutative Algebra.", "title": "" }, { "docid": "51624e6c70f4eb5f2295393c68ee386c", "text": "Advances in mobile technologies and devices has changed the way users interact with devices and other users. These new interaction methods and services are offered by the help of intelligent sensing capabilities, using context, location and motion sensors. However, indoor location sensing is mostly achieved by utilizing radio signal (Wi-Fi, Bluetooth, GSM etc.) and nearest neighbor identification. The most common algorithm adopted for Received Signal Strength (RSS)-based location sensing is K Nearest Neighbor (KNN), which calculates K nearest neighboring points to mobile users (MUs). Accordingly, in this paper, we aim to improve the KNN algorithm by enhancing the neighboring point selection by applying k-means clustering approach. In the proposed method, k-means clustering algorithm groups nearest neighbors according to their distance to mobile user. Then the closest group to the mobile user is used to calculate the MU's location. The evaluation results indicate that the performance of clustered KNN is closely tied to the number of clusters, number of neighbors to be clustered and the initiation of the center points in k-mean algorithm. Keywords-component; Received signal strength, k-Means, clustering, location estimation, personal digital assistant (PDA), wireless, indoor positioning", "title": "" }, { "docid": "70fa03bcd9c5eec86050052ea77d30fd", "text": "The importance of SMEs SMEs (small and medium-sized enterprises) account for 60 to 70 per cent of jobs in most OECD countries, with a particularly large share in Italy and Japan, and a relatively smaller share in the United States. Throughout they also account for a disproportionately large share of new jobs, especially in those countries which have displayed a strong employment record, including the United States and the Netherlands. Some evidence points also to the importance of age, rather than size, in job creation: young firms generate more than their share of employment. However, less than one-half of start-ups survive for more than five years and only a fraction develop into the high-growth firms which make important contributions to job creation. High job turnover poses problems for employment security; and small establishments are often exempt from giving notice to their employees. Small firms also tend to invest less in training and rely relatively more on external recruitment for raising competence. The demand for reliable, relevant and internationally comparable data on SMEs is on the rise, and statistical offices have started to expand their collection and publication of data. International comparability is still weak, however, due to divergent size-class definitions and sector classifications. 
To enable useful policy analysis, OECD governments need to improve their build-up of data, without creating additional obstacles for firms through the burden of excessive paper work. The greater variance in profitability, survival and growth of SMEs compared to larger firms accounts for special problems in financing. SMEs generally tend to be confronted with higher interest rates, as well as credit rationing due to shortage of collateral. The issues that arise in financing differ considerably between existing and new firms, as well as between those which grow slowly and those that grow rapidly. The expansion of private equity markets, including informal markets, has greatly improved the access to venture capital for start-ups and SMEs, but considerable differences remain among countries. Regulatory burdens remain a major obstacle for SMEs as these firms tend to be poorly equipped to deal with the problems arising from regulations. Access to information about regulations should be made available to SMEs at minimum cost. Policy makers must ensure that the compliance procedures associated with, e.g. R&D and new technologies, are not unnecessarily costly, complex or lengthy. Transparency is of particular importance to SMEs, and information technology has great potential to narrow the information …", "title": "" }, { "docid": "a112cd31e136054bdf9d34c82b960d95", "text": "We propose a completely automatic approach for recognizing low resolution face images captured in uncontrolled environment. The approach uses multidimensional scaling to learn a common transformation matrix for the entire face which simultaneously transforms the facial features of the low resolution and the high resolution training images such that the distance between them approximates the distance had both the images been captured under the same controlled imaging conditions. Stereo matching cost is used to obtain the similarity of two images in the transformed space. Though this gives very good recognition performance, the time taken for computing the stereo matching cost is significant. To overcome this limitation, we propose a reference-based approach in which each face image is represented by its stereo matching cost from a few reference images. 
Experimental evaluation on the real world challenging databases and comparison with the state-of-the-art super-resolution, classifier based and cross modal synthesis techniques show the effectiveness of the proposed algorithm.", "title": "" }, { "docid": "235ed0d7a20b67e227db9e35a3865d2b", "text": "Convolutional neural networks are the most widely used deep learning algorithms for traffic signal classification till date [1], but they fail to capture pose, view, orientation of the images because of the intrinsic inability of the max pooling layer. This paper proposes a novel method for traffic sign detection using a deep learning architecture called capsule networks that achieves outstanding performance on the German traffic sign dataset. A capsule network consists of capsules, which are groups of neurons representing the instantiating parameters of an object like the pose and orientation [2], using the dynamic routing and route-by-agreement algorithms. Unlike the previous approaches of manual feature extraction and multiple deep neural networks with many parameters, our method eliminates the manual effort and provides resistance to the spatial variances. CNNs can be fooled easily using various adversary attacks [3], and capsule networks can overcome such attacks from the intruders and can offer more reliability in traffic sign detection for autonomous vehicles. Capsule networks have achieved the state-of-the-art accuracy of 97.6% on the German Traffic Sign Recognition Benchmark dataset (GTSRB).", "title": "" }, { "docid": "4aed0c391351671ccb5297b2fe9d4891", "text": "Applying evolution to generate simple agent behaviours has become a successful and heavily used practice. However the notion of scaling up behaviour into something more noteworthy and complex is far from elementary. In this paper we propose a method of combining neuroevolution practices with the subsumption paradigm; in which we generate Artificial Neural Network (ANN) layers ordered in a hierarchy such that high-level controllers can override lower behaviours. To explore this proposal we apply our controllers to the ‘EvoTanks’ domain; a small, dynamic, adversarial environment. Our results show that once layers are evolved we can generate competent and capable results that can deal with hierarchies of multiple layers. Further analysis of results provides interesting insights into design decisions for such controllers, particularly when compared to the original suggestions for the subsumption paradigm.", "title": "" }, { "docid": "8bb30efa3f14fa0860d1e5bc1265c988", "text": "The introduction of microgrids in distribution networks based on power electronics facilitates the use of renewable energy resources, distributed generation (DG) and storage systems while improving the quality of electric power and reducing losses, thus increasing the performance and reliability of the electrical system, and opens new horizons for microgrid applications integrated into electrical power systems. The hierarchical control structure, which consists of primary, secondary, and tertiary levels for microgrids that mimic the behavior of the mains grid, is reviewed. The main objective of this paper is to give a description of the state of the art for distributed power generation systems (DPGS) based on renewable energy and to explore the power converters connected in parallel to the grid, which are distinguished by their contribution to the formation of the grid voltage and frequency and are accordingly classified in three classes. 
This analysis is extended focusing mainly on the three classes of configurations grid-forming, grid-feeding, and gridsupporting. The paper ends up with an overview and a discussion of the control structures and strategies to control distribution power generation system (DPGS) units connected to the network. Keywords— Distributed power generation system (DPGS); hierarchical control; grid-forming; grid-feeding; grid-supporting. Nomenclature Symbols id − iq Vd − Vq P Q ω E f U", "title": "" }, { "docid": "4a8c9a2301ea45d6c18ec5ab5a75a2ba", "text": "We propose in this paper a computer vision-based posture recognition method for home monitoring of the elderly. The proposed system performs human detection prior to the posture analysis; posture recognition is performed only on a human silhouette. The human detection approach has been designed to be robust to different environmental stimuli. Thus, posture is analyzed with simple and efficient features that are not designed to manage constraints related to the environment but only designed to describe human silhouettes. The posture recognition method, based on fuzzy logic, identifies four static postures and is robust to variation in the distance between the camera and the person, and to the person's morphology. With an accuracy of 74.29% of satisfactory posture recognition, this approach can detect emergency situations such as a fall within a health smart home.", "title": "" }, { "docid": "b15b88a31cc1762618ca976bdf895d57", "text": "How can we build agents that keep learning from experience, quickly and efficiently, after their initial training? Here we take inspiration from the main mechanism of learning in biological brains: synaptic plasticity, carefully tuned by evolution to produce efficient lifelong learning. We show that plasticity, just like connection weights, can be optimized by gradient descent in large (millions of parameters) recurrent networks with Hebbian plastic connections. First, recurrent plastic networks with more than two million parameters can be trained to memorize and reconstruct sets of novel, high-dimensional (1,000+ pixels) natural images not seen during training. Crucially, traditional non-plastic recurrent networks fail to solve this task. Furthermore, trained plastic networks can also solve generic meta-learning tasks such as the Omniglot task, with competitive results and little parameter overhead. Finally, in reinforcement learning settings, plastic networks outperform a non-plastic equivalent in a maze exploration task. We conclude that differentiable plasticity may provide a powerful novel approach to the learning-to-learn problem.", "title": "" }, { "docid": "a2cf369a67507d38ac1a645e84525497", "text": "Development of a cystic mass on the nasal dorsum is a very rare complication of aesthetic rhinoplasty. Most reported cases are of mucous cyst and entrapment of the nasal mucosa in the subcutaneous space due to traumatic surgical technique has been suggested as a presumptive pathogenesis. Here, we report a case of dorsal nasal cyst that had a different pathogenesis for cyst formation. A 58-yr-old woman developed a large cystic mass on the nasal radix 30 yr after augmentation rhinoplasty with silicone material. The mass was removed via a direct open approach and the pathology findings revealed a foreign body inclusion cyst associated with silicone. Successful nasal reconstruction was performed with autologous cartilages. 
Discussion and a brief review of the literature will be focused on the pathophysiology of and treatment options for a postrhinoplasty dorsal cyst.", "title": "" }, { "docid": "9ade6407ce2603e27744df1b03728bfc", "text": "We describe a large vocabulary speech recognition system that is accurate, has low latency, and yet has a small enough memory and computational footprint to run faster than real-time on a Nexus 5 Android smartphone. We employ a quantized Long Short-Term Memory (LSTM) acoustic model trained with connectionist temporal classification (CTC) to directly predict phoneme targets, and further reduce its memory footprint using an SVD-based compression scheme. Additionally, we minimize our memory footprint by using a single language model for both dictation and voice command domains, constructed using Bayesian interpolation. Finally, in order to properly handle device-specific information, such as proper names and other context-dependent information, we inject vocabulary items into the decoder graph and bias the language model on-the-fly. Our system achieves 13.5% word error rate on an open-ended dictation task, running with a median speed that is seven times faster than real-time.", "title": "" }, { "docid": "fe2594f98faa2ceda8b2c25bddc722d1", "text": "This study aimed at investigating the effect of a suggested EFL Flipped Classroom Teaching Model (EFL-FCTM) on graduate students' English higher-order thinking skills (HOTS), engagement and satisfaction. Also, it investigated the relationship between higher-order thinking skills, engagement and satisfaction. The sample comprised (67) graduate female students; an experimental group (N=33) and a control group (N=34), studying an English course at Taif University, KSA. The study used mixed method design; a pre-post HOTS test was carried out and two 5-Likert scale questionnaires had been designed and distributed; an engagement scale and a satisfaction scale. The findings of the study revealed statistically significant differences between the two group in HOTS in favor of the experimental group. Also, there was significant difference between the pre and post administration of the engagement scale in favor of the post administration. Moreover, students satisfaction on the (EFL-FCTM) was high. Finally, there were high significant relationships between HOTS and student engagement, HOTS and satisfaction and between student engagement and satisfaction.", "title": "" }, { "docid": "617e92bba5d9bd93eaae1718c1da276c", "text": "This paper describes MAISE, an embedded linear circuit simulator for use mainly within timing and noise analysis tools. MAISE achieves the fastest possible analysis performance over a wide range of circuit sizes and topologies by an adaptive architecture that allows applying the most efficient combination of model reduction algorithms and linear solvers for each class of circuits. The main pillar of adaptability in MAISE is a novel nodal-analysis formulation (PNA) which permits the use of symmetric, positive-definite Cholesky solvers for all circuit topologies. Moreover, frequently occurring special cases, e.g., inductor-resistor tree structures result in particular types of matrices that are solved by an even faster linear time algorithm. Model order reduction algorithms employed in MAISE exploit symmetry and positive-definiteness whenever available and use symmetric-Lanczos iteration and nonstandard inner-products for generating the Krylov subspace basis. 
The efficiency of the new simulator is supported by a wide range of industrial examples.", "title": "" }, { "docid": "66c2fcf1076796bb0a7fa16b18eac612", "text": "A firewall is a security guard placed at the point of entry between a private network and the outside Internet such that all incoming and outgoing packets have to pass through it. The function of a firewall is to examine every incoming or outgoing packet and decide whether to accept or discard it. This function is conventionally specified by a sequence of rules, where rules often conflict. To resolve conflicts, the decision for each packet is the decision of the first rule that the packet matches. The current practice of designing a firewall directly as a sequence of rules suffers from three types of major problems: (1) the consistency problem, which means that it is difficult to order the rules correctly; (2) the completeness problem, which means that it is difficult to ensure thorough consideration for all types of traffic; (3) the compactness problem, which means that it is difficult to keep the number of rules small (because some rules may be redundant and some rules may be combined into one rule). To achieve consistency, completeness, and compactness, we propose a new method called Structured Firewall Design, which consists of two steps. First, one designs a firewall using a Firewall Decision Diagram instead of a sequence of often conflicting rules. Second, a program converts the firewall decision diagram into a compact, yet functionally equivalent, sequence of rules. This method addresses the consistency problem because a firewall decision diagram is conflict-free. It addresses the completeness problem because the syntactic requirements of a firewall decision diagram force the designer to consider all types of traffic. It also addresses the compactness problem because in the second step we use two algorithms (namely FDD reduction and FDD marking) to combine rules together, and one algorithm (namely Firewall compaction) to remove redundant rules. Moreover, the techniques and algorithms presented in this paper are extensible to other rule-based systems such as IPsec rules.", "title": "" }, { "docid": "e5ddbe32d1beed6de2e342c5d5fea274", "text": "Link prediction appears as a central problem of network science, as it calls for unfolding the mechanisms that govern the micro-dynamics of the network. In this work, we are interested in ego-networks, that is the mere information of interactions of a node to its neighbors, in the context of social relationships. As the structural information is very poor, we rely on another source of information to predict links among egos’ neighbors: the timing of interactions. We define several features to capture different kinds of temporal information and apply machine learning methods to combine these various features and improve the quality of the prediction. We demonstrate the efficiency of this temporal approach on a cellphone interaction dataset, pointing out features which prove themselves to perform well in this context, in particular the temporal profile of interactions and elapsed time between contacts.", "title": "" }, { "docid": "12c947a09e6dbaeca955b18900912b96", "text": "A two stages car detection method using deformable part models with composite feature sets (DPM/CF) is proposed to recognize cars of various types and from multiple viewing angles. 
In the first stage, a HOG template is matched to detect the bounding box of the entire car of a certain type and viewed from a certain angle (called a t/a pair), which yields a region of interest (ROI). In the second stage, various part detectors using either HOG or the convolution neural network (CNN) features are applied to the ROI for validation. An optimization procedure based on latent logistic regression is adopted to select the optimal part detector's location, window size, and feature to use. Extensive experimental results indicate the proposed DPM/CF system can strike a balance between detection accuracy and training complexity.", "title": "" }, { "docid": "27136e888c3ebfef4ea7105d68a13ffd", "text": "The huge amount of (potentially) available spectrum makes millimeter wave (mmWave) a promising candidate for fifth generation cellular networks. Unfortunately, differences in the propagation environment as a function of frequency make it hard to make comparisons between systems operating at mmWave and microwave frequencies. This paper presents a simple channel model for evaluating system level performance in mmWave cellular networks. The model uses insights from measurement results that show mmWave is sensitive to blockages revealing very different path loss characteristics between line-of-sight (LOS) and non-line-of-sight (NLOS) links. The conventional path loss model with a single log-distance path loss function and a shadowing term is replaced with a stochastic path loss model with a distance-dependent LOS probability and two different path loss functions to account for LOS and NLOS links. The proposed model is used to compare microwave and mmWave networks in simulations. It is observed that mmWave networks can provide comparable coverage probability with a dense deployment, leading to much higher data rates thanks to the large bandwidth available in the mmWave spectrum.", "title": "" }, { "docid": "68a5b5664afe1d75811e5f0346455689", "text": "Personality, as defined in psychology, accounts for the individual differences in users’ preferences and behaviour. It has been found that there are significant correlations between personality and users’ characteristics that are traditionally used by recommender systems ( e.g. music preferences, social media behaviour, learning styles etc.). Among the many models of personality, the Five Factor Model (FFM) appears suitable for usage in recommender systems as it can be quantitatively measured (i.e. numerical values for each of the factors, namely, openness, conscientiousness, extraversion, agreeableness and neuroticism). The acquisition of the personality factors for an observed user can be done explicitly through questionnaires or implicitly using machine learning techniques with features extracted from social media streams or mobile phone call logs. There are, although limited, a number of available datasets to use in offline recommender systems experiment. Studies have shown that personality was successful at tackling the cold-start problem, making group recommendations, addressing cross-domain preferences4 and at generating diverse recommendations. However, a number of challenges still remain.", "title": "" }, { "docid": "28445e19325130be11eae6d21963489e", "text": "Social media is often viewed as a sensor into various societal events such as disease outbreaks, protests, and elections. We describe the use of social media as a crowdsourced sensor to gain insight into ongoing cyber-attacks. 
Our approach detects a broad range of cyber-attacks (e.g., distributed denial of service (DDoS) attacks, data breaches, and account hijacking) in a weakly supervised manner using just a small set of seed event triggers and requires no training or labeled samples. A new query expansion strategy based on convolution kernels and dependency parses helps model semantic structure and aids in identifying key event characteristics. Through a large-scale analysis over Twitter, we demonstrate that our approach consistently identifies and encodes events, outperforming existing methods.", "title": "" }, { "docid": "d7538c23aa43edce6cfde8f2125fd3bb", "text": "We propose a holographic-laser-drawing volumetric display using a computer-generated hologram displayed on a liquid crystal spatial light modulator and multilayer fluorescent screen. The holographic-laser-drawing technique has enabled three things; (i) increasing the number of voxels of the volumetric graphics per unit time; (ii) increasing the total input energy to the volumetric display because the maximum energy incident at a point in the multilayer fluorescent screen is limited by the damage threshold; (iii) controlling the size, shape and spatial position of voxels. In this paper, we demonstrated (i) and (ii). The multilayer fluorescent screen was newly developed to display colored voxels. The thin layer construction of the multilayer fluorescent screen minimized the axial length of the voxels. A two-color volumetric display with blue-green voxels and red voxels were demonstrated.", "title": "" } ]
scidocsrr
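A complete row of this dump ends here: a query_id hash, a free-text query, lists of positive and negative passages (each passage carrying docid, text, and title fields), and a subset label such as scidocsrr. The sketch below shows one way rows like these could be loaded and inspected; it assumes the underlying file is JSON Lines with exactly those field names, and the filename is hypothetical — neither is confirmed by this dump.

```python
import json

# Minimal sketch for iterating over retrieval-dataset rows stored as JSON Lines.
# Assumed schema per row: query_id, query, positive_passages, negative_passages, subset;
# each passage is assumed to be a dict with "docid", "text", and "title" keys.
def load_rows(path):
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

if __name__ == "__main__":
    for row in load_rows("scidocsrr.jsonl"):  # hypothetical filename
        print(row["query_id"], "-", row["query"])
        print("  positives:", len(row["positive_passages"]),
              "| negatives:", len(row["negative_passages"]),
              "| subset:", row["subset"])
```

This is only a reading aid for the structure visible above, not a description of how the dataset is actually distributed or packaged.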
a7c17eaa960b048e176856545ff58fd7
SERM: A Recurrent Model for Next Location Prediction in Semantic Trajectories
[ { "docid": "1527c70d0b78a3d2aa6886282425c744", "text": "Spatial and temporal contextual information plays a key role for analyzing user behaviors, and is helpful for predicting where he or she will go next. With the growing ability of collecting information, more and more temporal and spatial contextual information is collected in systems, and the location prediction problem becomes crucial and feasible. Some works have been proposed to address this problem, but they all have their limitations. Factorizing Personalized Markov Chain (FPMC) is constructed based on a strong independence assumption among different factors, which limits its performance. Tensor Factorization (TF) faces the cold start problem in predicting future actions. Recurrent Neural Networks (RNN) model shows promising performance comparing with PFMC and TF, but all these methods have problem in modeling continuous time interval and geographical distance. In this paper, we extend RNN and propose a novel method called Spatial Temporal Recurrent Neural Networks (ST-RNN). ST-RNN can model local temporal and spatial contexts in each layer with time-specific transition matrices for different time intervals and distance-specific transition matrices for different geographical distances. Experimental results show that the proposed ST-RNN model yields significant improvements over the competitive compared methods on two typical datasets, i.e., Global Terrorism Database (GTD) and Gowalla dataset.", "title": "" }, { "docid": "153721e9da56e400558f9ec6d4011aac", "text": "Periodicity is a frequently happening phenomenon for moving objects. Finding periodic behaviors is essential to understanding object movements. However, periodic behaviors could be complicated, involving multiple interleaving periods, partial time span, and spatiotemporal noises and outliers.\n In this paper, we address the problem of mining periodic behaviors for moving objects. It involves two sub-problems: how to detect the periods in complex movement, and how to mine periodic movement behaviors. Our main assumption is that the observed movement is generated from multiple interleaved periodic behaviors associated with certain reference locations. Based on this assumption, we propose a two-stage algorithm, Periodica, to solve the problem. At the first stage, the notion of observation spot is proposed to capture the reference locations. Through observation spots, multiple periods in the movement can be retrieved using a method that combines Fourier transform and autocorrelation. At the second stage, a probabilistic model is proposed to characterize the periodic behaviors. For a specific period, periodic behaviors are statistically generalized from partial movement sequences through hierarchical clustering. Empirical studies on both synthetic and real data sets demonstrate the effectiveness of our method.", "title": "" } ]
[ { "docid": "af2dbc8d3a04fb3059263b8c367ac856", "text": "The area of sentiment mining (also called sentiment extraction, opinion mining, opinion extraction, sentiment analysis, etc.) has seen a large increase in academic interest in the last few years. Researchers in the areas of natural language processing, data mining, machine learning, and others have tested a variety of methods of automating the sentiment analysis process. In this research work, new hybrid classification method is proposed based on coupling classification methods using arcing classifier and their performances are analyzed in terms of accuracy. A Classifier ensemble was designed using Naïve Bayes (NB), Support Vector Machine (SVM) and Genetic Algorithm (GA). In the proposed work, a comparative study of the effectiveness of ensemble technique is made for sentiment classification. The feasibility and the benefits of the proposed approaches are demonstrated by means of restaurant review that is widely used in the field of sentiment classification. A wide range of comparative experiments are conducted and finally, some in-depth discussion is presented and conclusions are drawn about the effectiveness of ensemble technique for sentiment classification. Keywords— Accuracy, Arcing classifier, Genetic Algorithm (GA). Naïve Bayes (NB), Sentiment Mining, Support Vector Machine (SVM)", "title": "" }, { "docid": "ff3392832942da723a6a5184669a06a8", "text": "The past few years has seen the rapid growth of data mining approaches for the analysis of data obtained from Massive Open Online Courses (MOOCs). The objectives of this study are to develop approaches to predict the scores a student may achieve on a given grade-related assessment based on information, considered as prior performance or prior activity in the course. We develop a personalized linear multiple regression (PLMR) model to predict the grade for a student, prior to attempting the assessment activity. The developed model is real-time and tracks the participation of a student within a MOOC (via click-stream server logs) and predicts the performance of a student on the next assessment within the course offering. We perform a comprehensive set of experiments on data obtained from two openEdX MOOCs via a Stanford University initiative. Our experimental results show the promise of the proposed approach in comparison to baseline approaches and also helps in identification of key features that are associated with the study habits and learning behaviors of students.", "title": "" }, { "docid": "6ef985d656f605d40705a582483d562e", "text": "A rising issue in the scientific community entails the identification of patterns in the evolution of the scientific enterprise and the emergence of trends that influence scholarly impact. In this direction, this paper investigates the mechanism with which citation accumulation occurs over time and how this affects the overall impact of scientific output. Utilizing data regarding the SOFSEM Conference (International Conference on Current Trends in Theory and Practice of Computer Science), we study a corpus of 1006 publications with their associated authors and affiliations to uncover the effects of collaboration on the conference output. We proceed to group publications into clusters based on the trajectories they follow in their citation acquisition. 
Representative patterns are identified to characterize dominant trends of the conference, while exploring phenomena of early and late recognition by the scientific community and their correlation with impact.", "title": "" }, { "docid": "0939a703cb2eeb9396c4e681f95e1e4d", "text": "Learning-based methods for visual segmentation have made progress on particular types of segmentation tasks, but are limited by the necessary supervision, the narrow definitions of fixed tasks, and the lack of control during inference for correcting errors. To remedy the rigidity and annotation burden of standard approaches, we address the problem of few-shot segmentation: given few image and few pixel supervision, segment any images accordingly. We propose guided networks, which extract a latent task representation from any amount of supervision, and optimize our architecture end-to-end for fast, accurate few-shot segmentation. Our method can switch tasks without further optimization and quickly update when given more guidance. We report the first results for segmentation from one pixel per concept and show real-time interactive video segmentation. Our unified approach propagates pixel annotations across space for interactive segmentation, across time for video segmentation, and across scenes for semantic segmentation. Our guided segmentor is state-of-the-art in accuracy for the amount of annotation and time. See http://github.com/shelhamer/revolver for code, models, and more details.", "title": "" }, { "docid": "d836e5c3ef7742b6dfb47c46672fa251", "text": "Convolutional neural networks have proven to be highly successful in applications such as image classification, object tracking, and many other tasks based on 2D inputs. Recently, researchers have started to apply convolutional neural networks to video classification, which constitutes a 3D input and requires far larger amounts of memory and much more computation. FFT based methods can reduce the amount of computation, but this generally comes at the cost of an increased memory requirement. On the other hand, the Winograd Minimal Filtering Algorithm (WMFA) can reduce the number of operations required and thus can speed up the computation, without increasing the required memory. This strategy was shown to be successful for 2D neural networks. We implement the algorithm for 3D convolutional neural networks and apply it to a popular 3D convolutional neural network which is used to classify videos and compare it to cuDNN. For our highly optimized implementation of the algorithm, we observe a twofold speedup for most of the 3D convolution layers of our test network compared to the cuDNN version.", "title": "" }, { "docid": "432e7ae2e76d76dbb42d92cd9103e3d2", "text": "Previous work has used monolingual parallel corpora to extract and generate paraphrases. We show that this task can be done using bilingual parallel corpora, a much more commonly available resource. Using alignment techniques from phrasebased statistical machine translation, we show how paraphrases in one language can be identified using a phrase in another language as a pivot. We define a paraphrase probability that allows paraphrases extracted from a bilingual parallel corpus to be ranked using translation probabilities, and show how it can be refined to take contextual information into account. 
We evaluate our paraphrase extraction and ranking methods using a set of manual word alignments, and contrast the quality with paraphrases extracted from automatic alignments.", "title": "" }, { "docid": "866f7fa780b24fe420623573482df984", "text": "We present the prenatal ultrasound findings of massive macroglossia in a fetus with prenatally diagnosed Beckwith-Wiedemann syndrome. Three-dimensional surface mode ultrasound was utilized for enhanced visualization of the macroglossia.", "title": "" }, { "docid": "f45b7caf3c599a6de835330c39599570", "text": "Describes an automated method to locate and outline blood vessels in images of the ocular fundus. Such a tool should prove useful to eye care specialists for purposes of patient screening, treatment evaluation, and clinical study. The authors' method differs from previously known methods in that it uses local and global vessel features cooperatively to segment the vessel network. The authors evaluate their method using hand-labeled ground truth segmentations of 20 images. A plot of the operating characteristic shows that the authors' method reduces false positives by as much as 15 times over basic thresholding of a matched filter response (MFR), at up to a 75% true positive rate. For a baseline, they also compared the ground truth against a second hand-labeling, yielding a 90% true positive and a 4% false positive detection rate, on average. These numbers suggest there is still room for a 15% true positive rate improvement, with the same false positive rate, over the authors' method. They are making all their images and hand labelings publicly available for interested researchers to use in evaluating related methods.", "title": "" }, { "docid": "84ece888e2302d13775973f552c6b810", "text": "We present a qualitative study of hospitality exchange processes that take place via the online peer-to-peer platform Airbnb. We explore 1) what motivates individuals to monetize network hospitality and 2) how the presence of money ties in with the social interaction related to network hospitality. We approach the topic from the perspective of hosts -- that is, Airbnb users who participate by offering accommodation for other members in exchange for monetary compensation. We found that participants were motivated to monetize network hospitality for both financial and social reasons. Our analysis indicates that the presence of money can provide a helpful frame for network hospitality, supporting hosts in their efforts to accomplish desired sociability, select guests consistent with their preferences, and control the volume and type of demand. We conclude the paper with a critical discussion of the implications of our findings for network hospitality and, more broadly, for the so-called sharing economy.", "title": "" }, { "docid": "97a9f11cf142c251364da09a264026ab", "text": "We consider techniques for permuting a sparse matrix so that the diagonal of the permuted matrix has entries of large absolute value. We discuss various criteria for this and consider their implementation as computer codes. We then indicate several cases where such a permutation can be useful. These include the solution of sparse equations by a direct method and by an iterative technique. We also consider its use in generating a preconditioner for an iterative method. 
We see that the effect of these reorderings can be dramatic although the best a priori strategy is by no means clear.", "title": "" }, { "docid": "1ab59137961e9a9f3a347d5331ce7be1", "text": "Peer-to-peer networks have been quite thoroughly measured over the past years, however it is interesting to note that the BitTorrent Mainline DHT has received very little attention even though it is by far the largest of currently active overlay systems, as our results show. As Mainline DHT differs from other systems, existing measurement methodologies are not appropriate for studying it. In this paper we present an efficient methodology for estimating the number of active users in the network. We have identified an omission in previous methodologies used to measure the size of the network and our methodology corrects this. Our method is based on modeling crawling inaccuracies as a Bernoulli process. It guarantees a very accurate estimation and is able to provide the estimate in about 5 seconds. Through experiments in controlled situations, we demonstrate the accuracy of our method and show the causes of the inaccuracies in previous work, by reproducing the incorrect results. Besides accurate network size estimates, our methodology can be used to detect network anomalies, in particular Sybil attacks in the network. We also report on the results from our measurements which have been going on for almost 2.5 years and are the first long-term study of Mainline DHT.", "title": "" }, { "docid": "6f2dfe7dad77b55635ce279bd4c2acdd", "text": "Designing of biologically active scaffolds with optimal characteristics is one of the key factors for successful tissue engineering. Recently, hydrogels have received a considerable interest as leading candidates for engineered tissue scaffolds due to their unique compositional and structural similarities to the natural extracellular matrix, in addition to their desirable framework for cellular proliferation and survival. More recently, the ability to control the shape, porosity, surface morphology, and size of hydrogel scaffolds has created new opportunities to overcome various challenges in tissue engineering such as vascularization, tissue architecture and simultaneous seeding of multiple cells. This review provides an overview of the different types of hydrogels, the approaches that can be used to fabricate hydrogel matrices with specific features and the recent applications of hydrogels in tissue engineering. Special attention was given to the various design considerations for an efficient hydrogel scaffold in tissue engineering. Also, the challenges associated with the use of hydrogel scaffolds were described.", "title": "" }, { "docid": "e373e44d5d4445ca56a45b4800b93740", "text": "In recent years a great deal of research efforts in ship hydromechanics have been devoted to practical navigation problems in moving larger ships safely into existing harbours and inland waterways and to ease congestion in existing shipping routes. The starting point of any navigational or design analysis lies in the accurate determination of the hydrodynamic forces generated on the ship hull moving in confined waters. The analysis of such ship motion should include the effects of shallow water. An area of particular interest is the determination of ship resistance in shallow or restricted waters at different speeds, forming the basis for the power calculation and design of the propulsion system. 
The present work describes the implementation of CFD techniques for determining the shallow water resistance of a river-sea ship at different speeds. The ship hull flow is analysed for different ship speeds in shallow water conditions. The results obtained from CFD analysis are compared with available standard results.", "title": "" }, { "docid": "447b689d9c7c2a6b71baf2fac2fa2a4f", "text": "Status of this Memo This memo provides information for the Internet community. It does not specify an Internet standard of any kind. Distribution of this memo is unlimited. Abstract Various routing protocols, including Open Shortest Path First (OSPF) and Intermediate System to Intermediate System (ISIS), explicitly allow \"Equal-Cost Multipath\" (ECMP) routing. Some router implementations also allow equal-cost multipath usage with RIP and other routing protocols. The effect of multipath routing on a forwarder is that the forwarder potentially has several next-hops for any given destination and must use some method to choose which next-hop should be used for a given data packet.", "title": "" }, { "docid": "1c60ddeb7e940992094cb8f3913e811a", "text": "In this paper, we address the scene segmentation task by capturing rich contextual dependencies based on the selfattention mechanism. Unlike previous works that capture contexts by multi-scale features fusion, we propose a Dual Attention Networks (DANet) to adaptively integrate local features with their global dependencies. Specifically, we append two types of attention modules on top of traditional dilated FCN, which model the semantic interdependencies in spatial and channel dimensions respectively. The position attention module selectively aggregates the features at each position by a weighted sum of the features at all positions. Similar features would be related to each other regardless of their distances. Meanwhile, the channel attention module selectively emphasizes interdependent channel maps by integrating associated features among all channel maps. We sum the outputs of the two attention modules to further improve feature representation which contributes to more precise segmentation results. We achieve new state-of-the-art segmentation performance on three challenging scene segmentation datasets, i.e., Cityscapes, PASCAL Context and COCO Stuff dataset. In particular, a Mean IoU score of 81.5% on Cityscapes test set is achieved without using coarse data. we make the code and trained models publicly available at https://github.com/junfu1115/DANet", "title": "" }, { "docid": "9f1acbd886cdf792fcaeafad9bfdfed3", "text": "In technical support scams, cybercriminals attempt to convince users that their machines are infected with malware and are in need of their technical support. In this process, the victims are asked to provide scammers with remote access to their machines, who will then “diagnose the problem”, before offering their support services which typically cost hundreds of dollars. Despite their conceptual simplicity, technical support scams are responsible for yearly losses of tens of millions of dollars from everyday users of the web. In this paper, we report on the first systematic study of technical support scams and the call centers hidden behind them. We identify malvertising as a major culprit for exposing users to technical support scams and use it to build an automated system capable of discovering, on a weekly basis, hundreds of phone numbers and domains operated by scammers. 
By allowing our system to run for more than 8 months we collect a large corpus of technical support scams and use it to provide insights on their prevalence, the abused infrastructure, the illicit profits, and the current evasion attempts of scammers. Finally, by setting up a controlled, IRB-approved, experiment where we interact with 60 different scammers, we experience first-hand their social engineering tactics, while collecting detailed statistics of the entire process. We explain how our findings can be used by law-enforcing agencies and propose technical and educational countermeasures for helping users avoid being victimized by technical support scams.", "title": "" }, { "docid": "c6d1ad31d52ed40d2fdba3c5840cbb63", "text": "Classification is one of the most active research and application areas of neural networks. The literature is vast and growing. This paper summarizes the some of the most important developments in neural network classification research. Specifically, the issues of posterior probability estimation, the link between neural and conventional classifiers, learning and generalization tradeoff in classification, the feature variable selection, as well as the effect of misclassification costs are examined. Our purpose is to provide a synthesis of the published research in this area and stimulate further research interests and efforts in the identified topics.", "title": "" }, { "docid": "f3c0479308b50a66646a99f55d19b310", "text": "In the course of the More Electric Aircraft program frequently active three-phase rectifiers in the power range of several kilowatts are required. It is shown that the three-phase -switch rectifier (comprising three -connected bidirectional switches) is well suited for this application. The system is analyzed using space vector calculus and a novel PWM current controller modulation concept is presented, where all three phases are controlled simultaneously; the analysis shows that the proposed concept yields optimal switching sequences. Analytical relationships for calculating the power components average and rms current ratings are derived to facilitate the rectifier design. A laboratory prototype with an output power of 5 kW is built and measurements taken from this prototype confirm the operation of the proposed current controller. Finally, initial EMI-measurements of the system are also presented.", "title": "" }, { "docid": "bbfdc30b412df84861e242d4305ca20d", "text": "OBJECTIVES\nLocal anesthetic injection into the interspace between the popliteal artery and the posterior capsule of the knee (IPACK) has the potential to provide motor-sparing analgesia to the posterior knee after total knee arthroplasty. The primary objective of this cadaveric study was to evaluate injectate spread to relevant anatomic structures with IPACK injection.\n\n\nMETHODS\nAfter receipt of Institutional Review Board Biospecimen Subcommittee approval, IPACK injection was performed on fresh-frozen cadavers. The popliteal fossa in each specimen was dissected and examined for injectate spread.\n\n\nRESULTS\nTen fresh-frozen cadaver knees were included in the study. Injectate was observed to spread in the popliteal fossa at a mean ± SD of 6.1 ± 0.7 cm in the medial-lateral dimension and 10.1 ± 3.2 cm in the proximal-distal dimension. No injectate was noted to be in contact with the proximal segment of the sciatic nerve, but 3 specimens showed injectate spread to the tibial nerve. In 3 specimens, the injectate showed possible contact with the common peroneal nerve. 
The middle genicular artery was consistently surrounded by injectate.\n\n\nCONCLUSIONS\nThis cadaver study of IPACK injection demonstrated spread throughout the popliteal fossa without proximal sciatic involvement. However, the potential for injectate to spread to the tibial or common peroneal nerve was demonstrated. Consistent surrounding of the middle genicular artery with injectate suggests a potential mechanism of analgesia for the IPACK block, due to the predictable relationship between articular sensory nerves and this artery. Further study is needed to determine the ideal site of IPACK injection.", "title": "" }, { "docid": "daa7773486701deab7b0c69e1205a1d9", "text": "Age progression is defined as aesthetically re-rendering the aging face at any future age for an individual face. In this work, we aim to automatically render aging faces in a personalized way. Basically, for each age group, we learn an aging dictionary to reveal its aging characteristics (e.g., wrinkles), where the dictionary bases corresponding to the same index yet from two neighboring aging dictionaries form a particular aging pattern cross these two age groups, and a linear combination of all these patterns expresses a particular personalized aging process. Moreover, two factors are taken into consideration in the dictionary learning process. First, beyond the aging dictionaries, each person may have extra personalized facial characteristics, e.g., mole, which are invariant in the aging process. Second, it is challenging or even impossible to collect faces of all age groups for a particular person, yet much easier and more practical to get face pairs from neighboring age groups. To this end, we propose a novel Bi-level Dictionary Learning based Personalized Age Progression (BDL-PAP) method. Here, bi-level dictionary learning is formulated to learn the aging dictionaries based on face pairs from neighboring age groups. Extensive experiments well demonstrate the advantages of the proposed BDL-PAP over other state-of-the-arts in term of personalized age progression, as well as the performance gain for cross-age face verification by synthesizing aging faces.", "title": "" } ]
scidocsrr
1aa1736b8bed1b6c5a1f950ddd3b2365
Towards consistent visual-inertial navigation
[ { "docid": "cc63fa999bed5abf05a465ae7313c053", "text": "In this paper, we consider the development of a rotorcraft micro aerial vehicle (MAV) system capable of vision-based state estimation in complex environments. We pursue a systems solution for the hardware and software to enable autonomous flight with a small rotorcraft in complex indoor and outdoor environments using only onboard vision and inertial sensors. As rotorcrafts frequently operate in hover or nearhover conditions, we propose a vision-based state estimation approach that does not drift when the vehicle remains stationary. The vision-based estimation approach combines the advantages of monocular vision (range, faster processing) with that of stereo vision (availability of scale and depth information), while overcoming several disadvantages of both. Specifically, our system relies on fisheye camera images at 25 Hz and imagery from a second camera at a much lower frequency for metric scale initialization and failure recovery. This estimate is fused with IMU information to yield state estimates at 100 Hz for feedback control. We show indoor experimental results with performance benchmarking and illustrate the autonomous operation of the system in challenging indoor and outdoor environments.", "title": "" } ]
[ { "docid": "7aec5d9476ed1bd9452a348f5e2a9147", "text": "Emerging nonvolatile memory (NVM) technologies, such as resistive random access memories (RRAM) and phase-change memories (PCM), are an attractive option for future memory architectures due to their nonvolatility, high density, and low-power operation. Notwithstanding these advantages, they are prone to high defect densities due to the nondeterministic nature of the nanoscale fabrication. We examine the fault models and propose an efficient testing technique to test crossbar-based NVMs. The typical approach to testing memories entails testing one memory element at a time. This is time consuming and does not scale for the dense, RRAM or PCM-based memories. We propose a testing scheme based on “sneak-path sensing” to efficiently detect faults in the memory. The testing scheme uses sneak paths inherent in crossbar memories, to test multiple memory elements at the same time, thereby reducing testing time. We designed the design-for-test support necessary to control the number of sneak paths that are concurrently enabled; this helps control the power consumed during test. The proposed scheme enables and leverages sneak paths during test mode, while still maintaining a sneak path free crossbar during normal operation.", "title": "" }, { "docid": "203ae6dee1000e83dbce325c14539365", "text": "In this paper, the usefulness of several topologies of DC-DC converters for measuring the characteristic curves of photovoltaic (PV) modules is theoretically analyzed. Eight topologies of DC-DC converters with step-down/step-up conversion relation (buck-boost single inductor, CSC (canonical switching cell), Cuk, SEPIC (single-ended primary inductance converter), zeta, flyback, boost-buck-cascaded, and buck-boost-cascaded converters) are compared and evaluated. This application is based on the property of these converters for emulating a resistor when operating in continuous conduction mode. Therefore, they are suitable to implement a system capable of measuring the I-V curve of PV modules. Other properties have been taken into account: input ripple, devices stress, size of magnetic components and input-output isolation. The study determines that SEPIC and Cuk converters are the most suitable for this application mainly due to the low input current ripple, allow input-output insulation and can be connected in parallel in order to measure PV modules o arrays with greater power. CSC topology is also suitable because it uses fewer components but of a larger size. Experimental results validate the comparative analysis.", "title": "" }, { "docid": "8acd410ff0757423d09928093e7e8f63", "text": "We present a simple log-linear reparameterization of IBM Model 2 that overcomes problems arising from Model 1’s strong assumptions and Model 2’s overparameterization. Efficient inference, likelihood evaluation, and parameter estimation algorithms are provided. Training the model is consistently ten times faster than Model 4. On three large-scale translation tasks, systems built using our alignment model outperform IBM Model 4. An open-source implementation of the alignment model described in this paper is available from http://github.com/clab/fast align .", "title": "" }, { "docid": "9dac90ed6c1a89fc1f12d7ba581d4889", "text": "BACKGROUND\nAccurate measurement of core temperature is a standard component of perioperative and intensive care patient management. However, core temperature measurements are difficult to obtain in awake patients. 
A new non-invasive thermometer has been developed, combining two sensors separated by a known thermal resistance ('double-sensor' thermometer). We thus evaluated the accuracy of the double-sensor thermometer compared with a distal oesophageal thermometer to determine if the double-sensor thermometer is a suitable substitute.\n\n\nMETHODS\nIn perioperative and intensive care patient populations (n=68 total), double-sensor measurements were compared with measurements from a distal oesophageal thermometer using Bland-Altman analysis and Lin's concordance correlation coefficient (CCC).\n\n\nRESULTS\nOverall, 1287 measurement pairs were obtained at 5 min intervals. Ninety-eight per cent of all double-sensor values were within +/-0.5 degrees C of oesophageal temperature. The mean bias between the methods was -0.08 degrees C; the limits of agreement were -0.66 degrees C to 0.50 degrees C. Sensitivity and specificity for detection of fever were 0.86 and 0.97, respectively. Sensitivity and specificity for detection of hypothermia were 0.77 and 0.93, respectively. Lin's CCC was 0.93.\n\n\nCONCLUSIONS\nThe new double-sensor thermometer is sufficiently accurate to be considered an alternative to distal oesophageal core temperature measurement, and may be particularly useful in patients undergoing regional anaesthesia.", "title": "" }, { "docid": "ff0d818dfd07033fb5eef453ba933914", "text": "Hyperplastic placentas have been reported in several experimental mouse models, including animals produced by somatic cell nuclear transfer, by inter(sub)species hybridization, and by somatic cytoplasm introduction to oocytes followed by intracytoplasmic sperm injection. Of great interest are the gross and histological features common to these placental phenotypes--despite their quite different etiologies--such as the enlargement of the spongiotrophoblast layers. To find morphological clues to the pathways leading to these similar placental phenotypes, we analyzed the ultrastructure of the three different types of hyperplastic placenta. Most cells affected were of trophoblast origin and their subcellular ultrastructural lesions were common to the three groups, e.g., a heavy accumulation of cytoplasmic vacuoles in the trophoblastic cells composing the labyrinthine wall and an increased volume of spongiotrophoblastic cells with extraordinarily dilatated rough endoplasmic reticulum. Although the numbers of trophoblastic glycogen cells were greatly increased, they maintained their normal ultrastructural morphology, including a heavy glycogen deposition throughout the cytoplasm. The fetal endothelium and small vessels were nearly intact. Our ultrastructural study suggests that these three types of placental hyperplasias, with different etiologies, may have common pathological pathways, which probably exclusively affect the development of certain cell types of the trophoblastic lineage during mouse placentation.", "title": "" }, { "docid": "b8dbd71ff09f2e07a523532a65f690c7", "text": "OBJECTIVE\nTo assess whether adolescent obesity is associated with risk for development of major depressive disorder (MDD) or anxiety disorder. Obesity has been linked to psychosocial difficulties among youth.\n\n\nMETHODS\nAnalysis of a prospective community-based cohort originally from upstate New York, assessed four times over 20 years. Participants (n = 776) were 9 to 18 years old in 1983; subsequent assessments took place in 1985 to 1986 (n = 775), 1991 to 1994 (n = 776), and 2001 to 2003 (n = 661). 
Using Cox proportional hazards analysis, we evaluated the association of adolescent (age range, 12-17.99 years) weight status with risk for subsequent MDD or anxiety disorder (assessed at each wave by structured diagnostic interviews) in males and females. A total of 701 participants were not missing data on adolescent weight status and had > or = 1 subsequent assessments. MDD and anxiety disorder analyses included 674 and 559 participants (free of current or previous MDD or anxiety disorder), respectively. Adolescent obesity was defined as body mass index above the age- and gender-specific 95th percentile of the Centers for Disease Control and Prevention growth reference.\n\n\nRESULTS\nAdolescent obesity in females predicted an increased risk for subsequent MDD (adjusted hazard ratio (HR) = 3.9; 95% confidence interval (CI) = 1.3, 11.8) and for anxiety disorder (HR = 3.8; CI = 1.3, 11.3). Adolescent obesity in males was not statistically significantly associated with risk for MDD (HR = 1.5; CI = 0.5, 3.5) or anxiety disorder (HR = 0.7; CI = 0.2, 2.9).\n\n\nCONCLUSION\nFemales obese as adolescents may be at increased risk for development of depression or anxiety disorders.", "title": "" }, { "docid": "5100ef5ffa501eb7193510179039cd82", "text": "The interplay between caching and HTTP Adaptive Streaming (HAS) is known to be intricate, and possibly detrimental to QoE. In this paper, we make the case for caching-aware rate decision algorithms at the client side which do not require any collaboration with cache or server. To this goal, we introduce the optimization model which allows to compute the optimal rate decisions in the presence of cache, and compare the current main representatives of HAS algorithms (RBA and BBA) to this optimal. This allows us to assess how far from the optimal these versions are, and on which to build a caching-aware rate decision algorithm.", "title": "" }, { "docid": "33b0347afbf3c15d713c0c8b1ffab1ca", "text": "Modern models of event extraction for tasks like ACE are based on supervised learning of events from small hand-labeled data. However, hand-labeled training data is expensive to produce, in low coverage of event types, and limited in size, which makes supervised methods hard to extract large scale of events for knowledge base population. To solve the data labeling problem, we propose to automatically label training data for event extraction via world knowledge and linguistic knowledge, which can detect key arguments and trigger words for each event type and employ them to label events in texts automatically. The experimental results show that the quality of our large scale automatically labeled data is competitive with elaborately human-labeled data. And our automatically labeled data can incorporate with human-labeled data, then improve the performance of models learned from these data.", "title": "" }, { "docid": "b68680f47f1d9b45e30262ab45f0027b", "text": "Brain-computer interface (BCI) systems create a novel communication channel from the brain to an output device by bypassing conventional motor output pathways of nerves and muscles. Therefore they could provide a new communication and control option for paralyzed patients. Modern BCI technology is essentially based on techniques for the classification of single-trial brain signals. Here we present a novel technique that allows the simultaneous optimization of a spatial and a spectral filter enhancing discriminability rates of multichannel EEG single-trials. 
The evaluation of 60 experiments involving 22 different subjects demonstrates the significant superiority of the proposed algorithm over to its classical counterpart: the median classification error rate was decreased by 11%. Apart from the enhanced classification, the spatial and/or the spectral filter that are determined by the algorithm can also be used for further analysis of the data, e.g., for source localization of the respective brain rhythms", "title": "" }, { "docid": "74ef26e332b12329d8d83f80169de5c0", "text": "It has been claimed that the discovery of association rules is well-suited for applications of market basket analysis to reveal regularities in the purchase behaviour of customers. Moreover, recent work indicates that the discovery of interesting rules can in fact only be addressed within a microeconomic framework. This study integrates the discovery of frequent itemsets with a (microeconomic) model for product selection (PROFSET). The model enables the integration of both quantitative and qualitative (domain knowledge) criteria. Sales transaction data from a fullyautomated convenience store is used to demonstrate the effectiveness of the model against a heuristic for product selection based on product-specific profitability. We show that with the use of frequent itemsets we are able to identify the cross-sales potential of product items and use this information for better product selection. Furthermore, we demonstrate that the impact of product assortment decisions on overall assortment profitability can easily be evaluated by means of sensitivity analysis.", "title": "" }, { "docid": "444f26f3c1ae4b574d8007f93fc80d3d", "text": "User experience (UX) research has expanded our notion of what makes interactive technology good, often putting hedonic aspects of use such as fun, affect, and stimulation at the center. Outside of UX, the hedonic is often contrasted to the eudaimonic, the notion of striving towards one's personal best. It remains unclear, however, what this distinction offers to UX research conceptually and empirically. We investigate a possible role for eudaimonia in UX research by empirically examining 266 reports of positive experiences with technology and analyzing its relation to established UX concepts. Compared to hedonic experiences, eudaimonic experiences were about striving towards and accomplishing personal goals through technology use. They were also characterized by increased need fulfillment, positive affect, meaning, and long-term importance. Taken together, our findings suggest that while hedonic UX is about momentary pleasures directly derived from technology use, eudaimonic UX is about meaning from need fulfilment.", "title": "" }, { "docid": "4f296caa2ee4621a8e0858bfba701a3b", "text": "This paper considers the problem of assessing visual aesthetic quality with semantic information. We cast the assessment problem as the main task among a multi-task deep model, and argue that semantic recognition offers the key to addressing this problem. Based on convolutional neural networks, we propose a general multi-task framework with four different structures. In each structure, aesthetic quality assessment task and semantic recognition task are leveraged, and different features are explored to improve the quality assessment. Moreover, an effective strategy of keeping a balanced effect between the semantic task and aesthetic task is developed to optimize the parameters of our framework. 
The correlation analysis among the tasks validates the importance of the semantic recognition in aesthetic quality assessment. Extensive experiments verify the effectiveness of the proposed multi-task framework, and further corroborate the", "title": "" }, { "docid": "2253d4fcef5289578595d6c72db3a905", "text": "Estimation of efficiency of firms in a non-competit ive market characterized by heterogeneous inputs and outputs along with their varying prices is questionable when factor-based technology sets are used in data envelopment analysis (DEA). In thi s scenario, a value-based technology becomes an appropriate reference technology against which e fficiency can be assessed. In this contribution, the value-based models of Tone (2002) are extended in a directional DEA set up to develop new directional costand revenue-based measures of eff iciency, which are then decomposed into their respective directional value-based technical and al loc tive efficiencies. These new directional value-based measures are more general, and include the xisting value-based measures as special cases. These measures satisfy several desirable pro p rties of an ideal efficiency measure. These new measures are advantageous over the existing ones in t rms of 1) their ability to satisfy the most important property of translation invariance; 2) ch oi es over the use of suitable direction vectors in handling negative data; and 3) flexibility in provi ding the decision makers with the option of specifying preferable direction vectors to incorpor ate their preferences. Finally, under the condition of no prior unit price information, a directional v alue-based measure of profit inefficiency is developed for firms whose underlying objectives are p ofit maximization. For an illustrative empirical application, our new measures are applied to a real-life data set of 50 US banks to draw inferences about the production correspondence of b anking industry.", "title": "" }, { "docid": "4ca04d9a84555894f8cf2834ffafd310", "text": "T he Economist recently reported that infrastructure spending is the largest it is ever been as a share of world GDP. With $22 trillion in projected investments over the next ten years in emerging economies alone, the magazine calls it the “biggest investment boom in history.” The efficiency of infrastructure planning and execution is therefore particularly important at present. Unfortunately, the private sector, the public sector, and private/public sector partnerships have a dismal record of delivering on large infrastructure cost and performance promises. Consider the following typical examples.", "title": "" }, { "docid": "2419e2750787b1ba2f00d1629e3bbdad", "text": "Resilient transportation systems enable quick evacuation, rescue, distribution of relief supplies, and other activities for reducing the impact of natural disasters and for accelerating the recovery from them. The resilience of a transportation system largely relies on the decisions made during a natural disaster. We developed an agent-based traffic simulator for predicting the results of potential actions taken with respect to the transportation system to quickly make appropriate decisions. For realistic simulation, we govern the behavior of individual drivers of vehicles with foundational principles learned from probe-car data. 
For example, we used the probe-car data to estimate the personality of individual drivers of vehicles in selecting their routes, taking into account various metrics of routes such as travel time, travel distance, and the number of turns. This behavioral model, which was constructed from actual data, constitutes a special feature of our simulator. We built this simulator using the X10 language, which enables massively parallel execution for simulating traffic in a large metropolitan area. We report the use cases of the simulator in three major cities in the context of disaster recovery and resilient transportation.", "title": "" }, { "docid": "00ac09dab67200f6b9df78a480d6dbd8", "text": "In this paper, a new three-phase current-fed push-pull DC-DC converter is proposed. This converter uses a high-frequency three-phase transformer that provides galvanic isolation between the power source and the load. The three active switches are connected to the same reference, which simplifies the gate drive circuitry. Reduction of the input current ripple and the output voltage ripple is achieved by means of an inductor and a capacitor, whose volumes are smaller than in equivalent single-phase topologies. The three-phase DC-DC conversion also helps in loss distribution, allowing the use of lower cost switches. These characteristics make this converter suitable for applications where low-voltage power sources are used and the associated currents are high, such as in fuel cells, photovoltaic arrays, and batteries. The theoretical analysis, a simplified design example, and the experimental results for a 1-kW prototype will be presented for two operation regions. The prototype was designed for a switching frequency of 40 kHz, an input voltage of 120 V, and an output voltage of 400 V.", "title": "" }, { "docid": "815098e9ed06dfa5335f0c2c595f4059", "text": "Effectively managing risk is an essential element of successful project management. It is imperative that project management team consider all possible risks to establish corrective actions in the right time. So far, several techniques have been proposed for project risk analysis. Failure Mode and Effect Analysis (FMEA) is recognized as one of the most useful techniques in this field. The main goal is identifying all failure modes within a system, assessing their impact, and planning for corrective actions. In traditional FMEA, the risk priorities of failure modes are determined by using Risk Priority Numbers (RPN), which can be obtained by multiplying the scores of risk factors like occurrence (O), severity (S), and detection (D). This technique has some limitations, though in this paper, Fuzzy logic and Analytical Hierarchy Process (AHP) are used to address the limitations of traditional FMEA. Linguistic variables, expressed in fuzzy numbers, are used to assess the ratings of risk factors O, S, and D. Each factor consists of seven membership functions and on the whole there are 343 rules for fuzzy system. The analytic hierarchy process (AHP) is applied to determine the relative weightings of risk impacts on time, cost, quality and safety. A case study is presented to validate the concept. The feedbacks are showing the advantages of the proposed approach in project risk management.", "title": "" }, { "docid": "240d47115c8bbf98e15ca4acae13ee62", "text": "A trusted and active community aided and supported by the Internet of Things (IoT) is a key factor in food waste reduction and management. 
This paper proposes an IoT based context aware framework which can capture real-time dynamic requirements of both vendors and consumers and perform real-time match-making based on captured data. We describe our proposed reference framework and the notion of smart food sharing containers as enabling technology in our framework. A prototype system demonstrates the feasibility of a proposed approach using a smart container with embedded sensors.", "title": "" }, { "docid": "4b1a02a1921a33a8c2f4d01670174f77", "text": "In this paper we propose an approach for articulated tracking of multiple people in unconstrained videos. Our starting point is a model that resembles existing architectures for single-frame pose estimation but is several orders of magnitude faster. We achieve this in two ways: (1) by simplifying and sparsifying the body-part relationship graph and leveraging recent methods for faster inference, and (2) by offloading a substantial share of computation onto a feed-forward convolutional architecture that is able to detect and associate body joints of the same person even in clutter. We use this model to generate proposals for body joint locations and formulate articulated tracking as spatio-temporal grouping of such proposals. This allows to jointly solve the association problem for all people in the scene by propagating evidence from strong detections through time and enforcing constraints that each proposal can be assigned to one person only. We report results on a public MPII Human Pose benchmark and on a new dataset of videos with multiple people. We demonstrate that our model achieves state-of-the-art results while using only a fraction of time and is able to leverage temporal information to improve state-of-the-art for crowded scenes1.", "title": "" }, { "docid": "095f8d5c3191d6b70b2647b562887aeb", "text": "Hardware specialization, in the form of datapath and control circuitry customized to particular algorithms or applications, promises impressive performance and energy advantages compared to traditional architectures. Current research in accelerators relies on RTL-based synthesis flows to produce accurate timing, power, and area estimates. Such techniques not only require significant effort and expertise but also are slow and tedious to use, making large design space exploration infeasible. To overcome this problem, the authors developed Aladdin, a pre-RTL, power-performance accelerator modeling framework and demonstrated its application to system-on-chip (SoC) simulation. Aladdin estimates performance, power, and area of accelerators within 0.9, 4.9, and 6.6 percent with respect to RTL implementations. Integrated with architecture-level general-purpose core and memory hierarchy simulators, Aladdin provides researchers with a fast but accurate way to model the power and performance of accelerators in an SoC environment.", "title": "" } ]
scidocsrr
1bb7bc2568ca25431e1c081234350a9d
Learning a time-dependent master saliency map from eye-tracking data in videos
[ { "docid": "825b567c1a08d769aa334b707176f607", "text": "A critical function in both machine vision and biological vision systems is attentional selection of scene regions worthy of further analysis by higher-level processes such as object recognition. Here we present the first model of spatial attention that (1) can be applied to arbitrary static and dynamic image sequences with interactive tasks and (2) combines a general computational implementation of both bottom-up (BU) saliency and dynamic top-down (TD) task relevance; the claimed novelty lies in the combination of these elements and in the fully computational nature of the model. The BU component computes a saliency map from 12 low-level multi-scale visual features. The TD component computes a low-level signature of the entire image, and learns to associate different classes of signatures with the different gaze patterns recorded from human subjects performing a task of interest. We measured the ability of this model to predict the eye movements of people playing contemporary video games. We found that the TD model alone predicts where humans look about twice as well as does the BU model alone; in addition, a combined BU*TD model performs significantly better than either individual component. Qualitatively, the combined model predicts some easy-to-describe but hard-to-compute aspects of attentional selection, such as shifting attention leftward when approaching a left turn along a racing track. Thus, our study demonstrates the advantages of integrating BU factors derived from a saliency map and TD factors learned from image and task contexts in predicting where humans look while performing complex visually-guided behavior.", "title": "" }, { "docid": "de1165d7ca962c5bbd141d571e50dbd3", "text": "A model of bottom-up overt attention is proposed based on the principle of maximizing information sampled from a scene. The proposed operation is based on Shannon's self-information measure and is achieved in a neural circuit, which is demonstrated as having close ties with the circuitry existent in the primate visual cortex. It is further shown that the proposed saliency measure may be extended to address issues that currently elude explanation in the domain of saliency based models. Results on natural images are compared with experimental eye tracking data revealing the efficacy of the model in predicting the deployment of overt attention as compared with existing efforts.", "title": "" }, { "docid": "97c5b202cdc1f7d8220bf83663a0668f", "text": "Despite significant recent progress, the best available visual saliency models still lag behind human performance in predicting eye fixations in free-viewing of natural scenes. Majority of models are based on low-level visual features and the importance of top-down factors has not yet been fully explored or modeled. Here, we combine low-level features such as orientation, color, intensity, saliency maps of previous best bottom-up models with top-down cognitive visual features (e.g., faces, humans, cars, etc.) and learn a direct mapping from those features to eye fixations using Regression, SVM, and AdaBoost classifiers. By extensive experimenting over three benchmark eye-tracking datasets using three popular evaluation scores, we show that our boosting model outperforms 27 state-of-the-art models and is so far the closest model to the accuracy of human model for fixation prediction. 
Furthermore, our model successfully detects the most salient object in a scene without sophisticated image processing such as region segmentation.", "title": "" }, { "docid": "dd2267e380de2bc5ef71ee7ffd2eb00a", "text": "We propose a formal Bayesian definition of surprise to capture subjective aspects of sensory information. Surprise measures how data affects an observer, in terms of differences between posterior and prior beliefs about the world. Only data observations which substantially affect the observer's beliefs yield surprise, irrespectively of how rare or informative in Shannon's sense these observations are. We test the framework by quantifying the extent to which humans may orient attention and gaze towards surprising events or items while watching television. To this end, we implement a simple computational model where a low-level, sensory form of surprise is computed by simple simulated early visual neurons. Bayesian surprise is a strong attractor of human attention, with 72% of all gaze shifts directed towards locations more surprising than the average, a figure rising to 84% when focusing the analysis onto regions simultaneously selected by all observers. The proposed theory of surprise is applicable across different spatio-temporal scales, modalities, and levels of abstraction.", "title": "" } ]
[ { "docid": "d16693b6b6f95105321508c114154edc", "text": "Classification of hyperspectral image (HSI) is an important research topic in the remote sensing community. Significant efforts (e.g., deep learning) have been concentrated on this task. However, it is still an open issue to classify the high-dimensional HSI with a limited number of training samples. In this paper, we propose a semi-supervised HSI classification method inspired by the generative adversarial networks (GANs). Unlike the supervised methods, the proposed HSI classification method is semi-supervised, which can make full use of the limited labeled samples as well as the sufficient unlabeled samples. Core ideas of the proposed method are twofold. First, the three-dimensional bilateral filter (3DBF) is adopted to extract the spectral-spatial features by naturally treating the HSI as a volumetric dataset. The spatial information is integrated into the extracted features by 3DBF, which is propitious to the subsequent classification step. Second, GANs are trained on the spectral-spatial features for semi-supervised learning. A GAN contains two neural networks (i.e., generator and discriminator) trained in opposition to one another. The semi-supervised learning is achieved by adding samples from the generator to the features and increasing the dimension of the classifier output. Experimental results obtained on three benchmark HSI datasets have confirmed the effectiveness of the proposed method, especially with a limited number of labeled samples.", "title": "" }, { "docid": "bd3a2546d9f91f224e76759c087a7a1e", "text": "In this paper, we present a practical relay attack that can be mounted on RFID systems found in many applications nowadays. The described attack uses a self-designed proxy device to forward the RF communication from a reader to a modern NFC-enabled smart phone (Google Nexus S). The phone acts as a mole to inquire a victim’s card in the vicinity of the system. As a practical demonstration of our attack, we target a widely used accesscontrol application that usually grants access to office buildings using a strong AES authentication feature. Our attack successfully relays this authentication process via a Bluetooth channel (> 50 meters) within several hundred milliseconds. As a result, we were able to impersonate an authorized user and to enter the building without being detected.", "title": "" }, { "docid": "6ba7f7390490da05cca6c4ab4d9d9fab", "text": "Object detection and localization is a challenging task. Among several approaches, more recently hierarchical methods of feature-based object recognition have been developed and demonstrated high-end performance measures. Inspired by the knowledge about the architecture and function of the primate visual system, the computational HMAX model has been proposed. At the same time robust visual object recognition was proposed using feature distributions, e.g. histograms of oriented gradients (HOGs). Since both models build upon an edge representation of the input image, the question arises, whether one kind of approach might be superior to the other. Introducing a new biologically inspired attention steered processing framework, we demonstrate that the combination of both approaches gains the best results.", "title": "" }, { "docid": "6f7c81d869b4389d5b84e80b4c306381", "text": "Environmental, genetic, and immune factors are at play in the development of the variable clinical manifestations of Graves' ophthalmopathy (GO). 
Among the environmental contributions, smoking is the risk factor most consistently linked to the development or worsening of the disease. The close temporal relationship between the diagnoses of Graves' hyperthyroidism and GO have long suggested that these 2 autoimmune conditions may share pathophysiologic features. The finding that the thyrotropin receptor (TSHR) is expressed in orbital fibroblasts, the target cells in GO, supported the notion of a common autoantigen. Both cellular and humeral immunity directed against TSHR expressed on orbital fibroblasts likely initiate the disease process. Activation of helper T cells recognizing TSHR peptides and ligation of TSHR by TRAb lead to the secretion of inflammatory cytokines and chemokines, and enhanced hyaluronic acid (HA) production and adipogenesis. The resulting connective tissue remodeling results in varying degrees extraocular muscle enlargement and orbital fat expansion. A subset of orbital fibroblasts express CD34, are bone-marrow derived, and circulate as fibrocytes that infiltrate connective tissues at sites of injury or inflammation. As these express high levels of TSHR and are capable of producing copious cytokines and chemokines, they may represent an orbital fibroblast population that plays a central role in GO development. In addition to TSHR, orbital fibroblasts from patients with GO express high levels of IGF-1R. Recent studies suggest that these receptors engage in cross-talk induced by TSHR ligation to synergistically enhance TSHR signaling, HA production, and the secretion of inflammatory mediators.", "title": "" }, { "docid": "1e1355e7fbe185c2e69083fe8df2d875", "text": "The problem of reproducing high dynamic range images on devices with restricted dynamic range has gained a lot of interest in the computer graphics community. There exist various approaches to this issue, which span several research areas including computer graphics, image processing, color vision, physiological aspects, etc. These approaches assume a thorough knowledge of both the objective and subjective attributes of an image. However, no comprehensive overview and analysis of such attributes has been published so far. In this contribution, we present an overview about the effects of basic image attributes in HDR tone mapping. Furthermore, we propose a scheme of relationships between these attributes, leading to the definition of an overall image quality measure. We present results of subjective psychophysical experiments that we have performed to prove the proposed relationship scheme. Moreover, we also present an evaluation of existing tone mapping methods (operators) with regard to these attributes. Finally, the execution of with-reference and without a real reference perceptual experiments gave us the opportunity to relate the obtained subjective results. Our effort is not just useful to get into the tone mapping field or when implementing a tone mapping method, but it also sets the stage for well-founded quality comparisons between tone mapping methods. By providing good definitions of the different attributes, user-driven or fully automatic comparisons are made possible.", "title": "" }, { "docid": "7150d210ad78110897c3b3f5078c935b", "text": "Resolution in Magnetic Resonance (MR) is limited by diverse physical, technological and economical considerations. 
In conventional medical practice, resolution enhancement is usually performed with bicubic or B-spline interpolations, strongly affecting the accuracy of subsequent processing steps such as segmentation or registration. This paper presents a sparse-based super-resolution method, adapted for easily including prior knowledge, which couples up high and low frequency information so that a high-resolution version of a low-resolution brain MR image is generated. The proposed approach includes a whole-image multi-scale edge analysis and a dimensionality reduction scheme, which results in a remarkable improvement of the computational speed and accuracy, taking nearly 26 min to generate a complete 3D high-resolution reconstruction. The method was validated by comparing interpolated and reconstructed versions of 29 MR brain volumes with the original images, acquired in a 3T scanner, obtaining a reduction of 70% in the root mean squared error, an increment of 10.3 dB in the peak signal-to-noise ratio, and an agreement of 85% in the binary gray matter segmentations. The proposed method is shown to outperform a recent state-of-the-art algorithm, suggesting a substantial impact in voxel-based morphometry studies.", "title": "" }, { "docid": "d7ac0414b269202015d29ddaaa4bd436", "text": "Mobile manipulation tasks in shopfloor logistics require robots to grasp objects from various transport containers such as boxes and pallets. In this paper, we present an efficient processing pipeline that detects and localizes boxes and pallets in RGB-D images. Our method is based on edges in both the color image and the depth image and uses a RANSAC approach for reliably localizing the detected containers. Experiments show that the proposed method reliably detects and localizes both container types while guaranteeing low processing times.", "title": "" }, { "docid": "b72f4554f2d7ac6c5a8000d36a099e67", "text": "Sign Language Recognition (SLR) has been an active research field for the last two decades. However, most research to date has considered SLR as a naive gesture recognition problem. SLR seeks to recognize a sequence of continuous signs but neglects the underlying rich grammatical and linguistic structures of sign language that differ from spoken language. In contrast, we introduce the Sign Language Translation (SLT) problem. Here, the objective is to generate spoken language translations from sign language videos, taking into account the different word orders and grammar. We formalize SLT in the framework of Neural Machine Translation (NMT) for both end-to-end and pretrained settings (using expert knowledge). This allows us to jointly learn the spatial representations, the underlying language model, and the mapping between sign and spoken language. To evaluate the performance of Neural SLT, we collected the first publicly available Continuous SLT dataset, RWTH-PHOENIX-Weather 2014T1. It provides spoken language translations and gloss level annotations for German Sign Language videos of weather broadcasts. Our dataset contains over .95M frames with >67K signs from a sign vocabulary of >1K and >99K words from a German vocabulary of >2.8K. We report quantitative and qualitative results for various SLT setups to underpin future research in this newly established field. 
The upper bound for translation performance is calculated at 19.26 BLEU-4, while our end-to-end frame-level and gloss-level tokenization networks were able to achieve 9.58 and 18.13 respectively.", "title": "" }, { "docid": "6574a8b000e2f08f8b8da323e992d559", "text": "The rapid advances of transportation infrastructure have led to a dramatic increase in the demand for smart systems capable of monitoring traffic and street safety. Fundamental to these applications are a community-based evaluation platform and benchmark for object detection and multiobject tracking. To this end, we organize the AVSS2017 Challenge on Advance Traffic Monitoring, in conjunction with the International Workshop on Traffic and Street Surveillance for Safety and Security (IWT4S), to evaluate the state-of-the-art object detection and multi-object tracking algorithms in the relevance of traffic surveillance. Submitted algorithms are evaluated using the large-scale UA-DETRAC benchmark and evaluation protocol. The benchmark, the evaluation toolkit and the algorithm performance are publicly available from the website http: //detrac-db.rit.albany.edu.", "title": "" }, { "docid": "a2bb54cd5df70c68441da823f90bece1", "text": "This paper describes the development of innovative low-cost home dedicated fire alert detection system (FADS) using ZigBee wireless network. Our home FADS system are consists of an Arduino Uno Microcontroller, Xbee wireless module (Zigbee wireless), Arduino digital temperature sensor, buzzer alarm and X-CTU software. Arduino and wireless ZigBee has advantages in terms of its long battery life and much cheaper compared to the others wireless sensor network. There are several objectives that need to be accomplished in developing this project which are to develop fire alert detection system (FADS) for home user using ZigBee wireless network and to evaluate the effectiveness of the home FADS by testing it in different distances and the functionality of heat sensor. Based from the experiments, the results show that the home FADS could function as expected. It also could detect heat and alarm triggered when temperature is above particular value. 
Furthermore, this project provides a guideline for implementing and applying home FADS at home and recommendation on future studies for home FADS in monitoring the temperature on the web server.", "title": "" }, { "docid": "7f47a4b5152acf7e38d5c39add680f9d", "text": "unit of computation and a processor a piece of physical hardware In addition to reading to and writing from local memory a process can send and receive messages by making calls to a library of message passing routines The coordinated exchange of messages has the e ect of synchronizing processes This can be achieved by the synchronous exchange of messages in which the sending operation does not terminate until the receive operation has begun A di erent form of synchronization occurs when a message is sent asynchronously but the receiving process must wait or block until the data arrives Processes can be mapped to physical processors in various ways the mapping employed does not a ect the semantics of a program In particular multiple processes may be mapped to a single processor The message passing model provides a mechanism for talking about locality data contained in the local memory of a process are close and other data are remote We now examine some other properties of the message passing programming model performance mapping independence and modularity", "title": "" }, { "docid": "87ae6c0b8bd90bde0cb4876352e222b4", "text": "This study examined the developmental trajectories of three frequently postulated executive function (EF) components, Working Memory, Shifting, and Inhibition of responses, and their relation to performance on standard, but complex, neuropsychological EF tasks, the Wisconsin Card Sorting Task (WCST), and the Tower of London (ToL). Participants in four age groups (7-, 11-, 15-, and 21-year olds) carried out nine basic experimental tasks (three tasks for each EF), the WCST, and the ToL. Analyses were done in two steps: (1) analyses of (co)variance to examine developmental trends in individual EF tasks while correcting for basic processing speed, (2) confirmatory factor analysis to extract latent variables from the nine basic EF tasks, and to explain variance in the performance on WCST and ToL, using these latent variables. Analyses of (co)variance revealed a continuation of EF development into adolescence. Confirmatory factor analysis yielded two common factors: Working Memory and Shifting. However, the variables assumed to tap Inhibition proved unrelated. At a latent level, again correcting for basic processing speed, the development of Shifting was seen to continue into adolescence, while Working Memory continued to develop into young-adulthood. Regression analyses revealed that Working Memory contributed most strongly to WCST performance in all age groups. These results suggest that EF component processes develop at different rates, and that it is important to recognize both the unity and diversity of EF component processes in studying the development of EF.", "title": "" }, { "docid": "2802d66dfa1956bf83649614b76d470e", "text": "Given a classification task, what is the best way to teach the resulting boundary to a human? While machine learning techniques can provide excellent methods for finding the boundary, including the selection of examples in an online setting, they tell us little about how we would teach a human the same task. 
We propose to investigate the problem of example selection and presentation in the context of teaching humans, and explore a variety of mechanisms in the interests of finding what may work best. In particular, we begin with the baseline of random presentation and then examine combinations of several mechanisms: the indication of an example’s relative difficulty, the use of the shaping heuristic from the cognitive science literature (moving from easier examples to harder ones), and a novel kernel-based “coverage model” of the subject’s mastery of the task. From our experiments on 54 human subjects learning and performing a pair of synthetic classification tasks via our teaching system, we found that we can achieve the greatest gains with a combination of shaping and the coverage model.", "title": "" }, { "docid": "de73e8e382dddfba867068f1099b86fb", "text": "Endophytes are fungi which infect plants without causing symptoms. Fungi belonging to this group are ubiquitous, and plant species not associated to fungal endophytes are not known. In addition, there is a large biological diversity among endophytes, and it is not rare for some plant species to be hosts of more than one hundred different endophytic species. Different mechanisms of transmission, as well as symbiotic lifestyles occur among endophytic species. Latent pathogens seem to represent a relatively small proportion of endophytic assemblages, also composed by latent saprophytes and mutualistic species. Some endophytes are generalists, being able to infect a wide range of hosts, while others are specialists, limited to one or a few hosts. Endophytes are gaining attention as a subject for research and applications in Plant Pathology. This is because in some cases plants associated to endophytes have shown increased resistance to plant pathogens, particularly fungi and nematodes. Several possible mechanisms by which endophytes may interact with pathogens are discussed in this review. Additional key words: biocontrol, biodiversity, symbiosis.", "title": "" }, { "docid": "3c812cad23bffaf36ad485dbd530e040", "text": "Social tags are user-generated keywords associated with some resource on the Web. In the case of music, social tags have become an important component of “Web2.0” recommender systems, allowing users to generate playlists based on use-dependent terms such as chill or jogging that have been applied to particular songs. In this paper, we propose a method for predicting these social tags directly from MP3 files. Using a set of boosted classifiers, we map audio features onto social tags collected from the Web. The resulting automatic tags (or autotags) furnish information about music that is otherwise untagged or poorly tagged, allowing for insertion of previously unheard music into a social recommender. This avoids the ”cold-start problem” common in such systems. Autotags can also be used to smooth the tag space from which similarities and recommendations are made by providing a set of comparable baseline tags for all tracks in a recommender system.", "title": "" }, { "docid": "1ad06e5eee4d4f29dd2f0e8f0dd62370", "text": "Recent research on map matching algorithms for land vehicle navigation has been based on either a conventional topological analysis or a probabilistic approach. The input to these algorithms normally comes from the global positioning system and digital map data. 
Although the performance of some of these algorithms is good in relatively sparse road networks, they are not always reliable for complex roundabouts, merging or diverging sections of motorways and complex urban road networks. In high road density areas where the average distance between roads is less than 100 metres, there may be many road patterns matching the trajectory of the vehicle reported by the positioning system at any given moment. Consequently, it may be difficult to precisely identify the road on which the vehicle is travelling. Therefore, techniques for dealing with qualitative terms such as likeliness are essential for map matching algorithms to identify a correct link. Fuzzy logic is one technique that is an effective way to deal with qualitative terms, linguistic vagueness, and human intervention. This paper develops a map matching algorithm based on fuzzy logic theory. The inputs to the proposed algorithm are from the global positioning system augmented with data from deduced reckoning sensors to provide continuous navigation. The algorithm is tested on different road networks of varying complexity. The validation of this algorithm is carried out using high precision positioning data obtained from GPS carrier phase observables. The performance of the developed map matching algorithm is evaluated against the performance of several well-accepted existing map matching algorithms. The results show that the fuzzy logic-based map matching algorithm provides a significant improvement over existing map matching algorithms both in terms of identifying correct links and estimating the vehicle position on the links.", "title": "" }, { "docid": "cbf32934e275e8d95a584762b270a5c2", "text": "Online telemedicine systems are useful due to the possibility of timely and efficient healthcare services. These systems are based on advanced wireless and wearable sensor technologies. The rapid growth in technology has remarkably enhanced the scope of remote health monitoring systems. In this paper, a real-time heart monitoring system is developed considering the cost, ease of application, accuracy, and data security. The system is conceptualized to provide an interface between the doctor and the patients for two-way communication. The main purpose of this study is to facilitate the remote cardiac patients in getting latest healthcare services which might not be possible otherwise due to low doctor-to-patient ratio. The developed monitoring system is then evaluated for 40 individuals (aged between 18 and 66 years) using wearable sensors while holding an Android device (i.e., smartphone under supervision of the experts). The performance analysis shows that the proposed system is reliable and helpful due to high speed. The analyses showed that the proposed system is convenient and reliable and ensures data security at low cost. In addition, the developed system is equipped to generate warning messages to the doctor and patient under critical circumstances.", "title": "" }, { "docid": "fc904f979f7b00941852ac9db66f7129", "text": "The Orchidaceae are one of the most species-rich plant families and their floral diversity and pollination biology have long intrigued evolutionary biologists. About one-third of the estimated 18,500 species are thought to be pollinated by deceit. To date, the focus has been on how such pollination evolved, how the different types of deception work, and how it is maintained, but little progress has been made in understanding its evolutionary consequences. 
To address this issue, we discuss here how deception affects orchid mating systems, the evolution of reproductive isolation, speciation processes and neutral genetic divergence among species. We argue that pollination by deceit is one of the keys to orchid floral and species diversity. A better understanding of its evolutionary consequences could help evolutionary biologists to unravel the reasons for the evolutionary success of orchids.", "title": "" }, { "docid": "89fff85bba64d7411948c2a09345093a", "text": "Classification procedures are some of the most widely used statistical methods in ecology. Random forests (RF) is a new and powerful statistical classifier that is well established in other disciplines but is relatively unknown in ecology. Advantages of RF compared to other statistical classifiers include (1) very high classification accuracy; (2) a novel method of determining variable importance; (3) ability to model complex interactions among predictor variables; (4) flexibility to perform several types of statistical data analysis, including regression, classification, survival analysis, and unsupervised learning; and (5) an algorithm for imputing missing values. We compared the accuracies of RF and four other commonly used statistical classifiers using data on invasive plant species presence in Lava Beds National Monument, California, USA, rare lichen species presence in the Pacific Northwest, USA, and nest sites for cavity nesting birds in the Uinta Mountains, Utah, USA. We observed high classification accuracy in all applications as measured by cross-validation and, in the case of the lichen data, by independent test data, when comparing RF to other common classification methods. We also observed that the variables that RF identified as most important for classifying invasive plant species coincided with expectations based on the literature.", "title": "" }, { "docid": "327450c9470de1254ecc209afcd8addb", "text": "Intra-individual performance variability may be an important index of the efficiency with which executive control processes are implemented, Lesion studies suggest that damage to the frontal lobes is accompanied by an increase in such variability. Here we sought for the first time to investigate how the functional neuroanatomy of executive control is modulated by performance variability in healthy subjects by using an event-related functional magnetic resonance imaging (ER-fMRI) design and a Go/No-go response inhibition paradigm. Behavioural results revealed that individual differences in Go response time variability were a strong predictor of inhibitory success and that differences in mean Go response time could not account for this effect. Task-related brain activation was positively correlated with intra-individual variability within a distributed inhibitory network consisting of bilateral middle frontal areas and right inferior parietal and thalamic regions. Both the behavioural and fMRI data are consistent with the interpretation that those subjects with relatively higher intra-individual variability activate inhibitory regions to a greater extent, perhaps reflecting a greater requirement for top-down executive control in this group, a finding that may be relevant to disorders of executive/attentional control.", "title": "" } ]
scidocsrr
888c8c24d4760426b1cad758776d0c47
Learning an Invariant Hilbert Space for Domain Adaptation
[ { "docid": "cb2dd47932aa4709e2497fdb16b5e5f2", "text": "In this paper, we propose an approach to the domain adaptation, dubbed Second-or Higher-order Transfer of Knowledge (So-HoT), based on the mixture of alignments of second-or higher-order scatter statistics between the source and target domains. The human ability to learn from few labeled samples is a recurring motivation in the literature for domain adaptation. Towards this end, we investigate the supervised target scenario for which few labeled target training samples per category exist. Specifically, we utilize two CNN streams: the source and target networks fused at the classifier level. Features from the fully connected layers fc7 of each network are used to compute second-or even higher-order scatter tensors, one per network stream per class. As the source and target distributions are somewhat different despite being related, we align the scatters of the two network streams of the same class (within-class scatters) to a desired degree with our bespoke loss while maintaining good separation of the between-class scatters. We train the entire network in end-to-end fashion. We provide evaluations on the standard Office benchmark (visual domains) and RGB-D combined with Caltech256 (depth-to-rgb transfer). We attain state-of-the-art results.", "title": "" } ]
[ { "docid": "d7310e830f85541aa1d4b94606c1be0c", "text": "We present a practical framework to automatically detect shadows in real world scenes from a single photograph. Previous works on shadow detection put a lot of effort in designing shadow variant and invariant hand-crafted features. In contrast, our framework automatically learns the most relevant features in a supervised manner using multiple convolutional deep neural networks (ConvNets). The 7-layer network architecture of each ConvNet consists of alternating convolution and sub-sampling layers. The proposed framework learns features at the super-pixel level and along the object boundaries. In both cases, features are extracted using a context aware window centered at interest points. The predicted posteriors based on the learned features are fed to a conditional random field model to generate smooth shadow contours. Our proposed framework consistently performed better than the state-of-the-art on all major shadow databases collected under a variety of conditions.", "title": "" }, { "docid": "d7a143bdb62e4aaeaf18b0aabe35588e", "text": "BACKGROUND\nShort-acting insulin analogue use for people with diabetes is still controversial, as reflected in many scientific debates.\n\n\nOBJECTIVES\nTo assess the effects of short-acting insulin analogues versus regular human insulin in adults with type 1 diabetes.\n\n\nSEARCH METHODS\nWe carried out the electronic searches through Ovid simultaneously searching the following databases: Ovid MEDLINE(R), Ovid MEDLINE(R) In-Process & Other Non-Indexed Citations, Ovid MEDLINE(R) Daily and Ovid OLDMEDLINE(R) (1946 to 14 April 2015), EMBASE (1988 to 2015, week 15), the Cochrane Central Register of Controlled Trials (CENTRAL; March 2015), ClinicalTrials.gov and the European (EU) Clinical Trials register (both March 2015).\n\n\nSELECTION CRITERIA\nWe included all randomised controlled trials with an intervention duration of at least 24 weeks that compared short-acting insulin analogues with regular human insulins in the treatment of adults with type 1 diabetes who were not pregnant.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo review authors independently extracted data and assessed trials for risk of bias, and resolved differences by consensus. We graded overall study quality using the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) instrument. We used random-effects models for the main analyses and presented the results as odds ratios (OR) with 95% confidence intervals (CI) for dichotomous outcomes.\n\n\nMAIN RESULTS\nWe identified nine trials that fulfilled the inclusion criteria including 2693 participants. The duration of interventions ranged from 24 to 52 weeks with a mean of about 37 weeks. The participants showed some diversity, mainly with regard to diabetes duration and inclusion/exclusion criteria. The majority of the trials were carried out in the 1990s and participants were recruited from Europe, North America, Africa and Asia. None of the trials was carried out in a blinded manner so that the risk of performance bias, especially for subjective outcomes such as hypoglycaemia, was present in all of the trials. Furthermore, several trials showed inconsistencies in the reporting of methods and results.The mean difference (MD) in glycosylated haemoglobin A1c (HbA1c) was -0.15% (95% CI -0.2% to -0.1%; P value < 0.00001; 2608 participants; 9 trials; low quality evidence) in favour of insulin analogues. 
The comparison of the risk of severe hypoglycaemia between the two treatment groups showed an OR of 0.89 (95% CI 0.71 to 1.12; P value = 0.31; 2459 participants; 7 trials; very low quality evidence). For overall hypoglycaemia, also taking into account mild forms of hypoglycaemia, the data were generally of low quality, but also did not indicate substantial group differences. Regarding nocturnal severe hypoglycaemic episodes, two trials reported statistically significant effects in favour of the insulin analogue, insulin aspart. However, due to inconsistent reporting in publications and trial reports, the validity of the result remains questionable.We also found no clear evidence for a substantial effect of insulin analogues on health-related quality of life. However, there were few results only based on subgroups of the trial populations. None of the trials reported substantial effects regarding weight gain or any other adverse events. No trial was designed to investigate possible long-term effects (such as all-cause mortality, diabetic complications), in particular in people with diabetes related complications.\n\n\nAUTHORS' CONCLUSIONS\nOur analysis suggests only a minor benefit of short-acting insulin analogues on blood glucose control in people with type 1 diabetes. To make conclusions about the effect of short acting insulin analogues on long-term patient-relevant outcomes, long-term efficacy and safety data are needed.", "title": "" }, { "docid": "49fed572de904ac3bb9aab9cdc874cc6", "text": "Factorized Hidden Layer (FHL) adaptation has been proposed for speaker adaptation of deep neural network (DNN) based acoustic models. In FHL adaptation, a speaker-dependent (SD) transformation matrix and an SD bias are included in addition to the standard affine transformation. The SD transformation is a linear combination of rank-1 matrices whereas the SD bias is a linear combination of vectors. Recently, the Long ShortTerm Memory (LSTM) Recurrent Neural Networks (RNNs) have shown to outperform DNN acoustic models in many Automatic Speech Recognition (ASR) tasks. In this work, we investigate the effectiveness of SD transformations for LSTM-RNN acoustic models. Experimental results show that when combined with scaling of LSTM cell states’ outputs, SD transformations achieve 2.3% and 2.1% absolute improvements over the baseline LSTM systems for the AMI IHM and AMI SDM tasks respectively.", "title": "" }, { "docid": "72ce1e7b2f5f4b7131e121630e86a5c7", "text": "Schizophrenia is a chronic and severe mental illness that poses significant challenges. While many pharmacological and psychosocial interventions are available, many treatment-resistant schizophrenia patients continue to suffer from persistent psychotic symptoms, notably auditory verbal hallucinations (AVH), which are highly disabling. This unmet clinical need requires new innovative treatment options. Recently, a psychological therapy using computerized technology has shown large therapeutic effects on AVH severity by enabling patients to engage in a dialogue with a computerized representation of their voices. These very promising results have been extended by our team using immersive virtual reality (VR). Our study was a 7-week phase-II, randomized, partial cross-over trial. Nineteen schizophrenia patients with refractory AVH were recruited and randomly allocated to either VR-assisted therapy (VRT) or treatment-as-usual (TAU). The group allocated to TAU consisted of antipsychotic treatment and usual meetings with clinicians. 
The TAU group then received a delayed 7 weeks of VRT. A follow-up was ensured 3 months after the last VRT therapy session. Changes in psychiatric symptoms, before and after TAU or VRT, were assessed using a linear mixed-effects model. Our findings showed that VRT produced significant improvements in AVH severity, depressive symptoms and quality of life that lasted at the 3-month follow-up period. Consistent with previous research, our results suggest that VRT might be efficacious in reducing AVH related distress. The therapeutic effects of VRT on the distress associated with the voices were particularly prominent (d=1.2). VRT is a highly novel and promising intervention for refractory AVH in schizophrenia.", "title": "" }, { "docid": "b18f98cfad913ebf3ce1780b666277cb", "text": "Deep convolutional neural network (DCNN) has achieved remarkable performance on object detection and speech recognition in recent years. However, the excellent performance of a DCNN incurs high computational complexity and large memory requirement. In this paper, an equal distance nonuniform quantization (ENQ) scheme and a K-means clustering nonuniform quantization (KNQ) scheme are proposed to reduce the required memory storage when low complexity hardware or software implementations are considered. For the VGG-16 and the AlexNet, the proposed nonuniform quantization schemes reduce the number of required memory storage by approximately 50% while achieving almost the same or even better classification accuracy compared to the state-of-the-art quantization method. Compared to the ENQ scheme, the proposed KNQ scheme provides a better tradeoff when higher accuracy is required.", "title": "" }, { "docid": "2176518448c89ba977d849f71c86e6a6", "text": "Object-oriented programming languages confer many benefits, including abstraction, which lets the programmer hide the details of an object's implementation from the object's clients. Unfortunately, crossing abstraction boundaries often incurs a substantial run-time overhead in the form of frequent procedure calls. Thus, pervasive use of abstraction, while desirable from a design standpoint, may be impractical when it leads to inefficient programs. Aggressive compiler optimizations can reduce the overhead of abstraction. However, the long compilation times introduced by optimizing compilers delay the programming environment's responses to changes in the program. Furthermore, optimization also conflicts with source-level debugging. Thus, programmers are caught on the horns of two dilemmas: they have to choose between abstraction and efficiency, and between responsive programming environments and efficiency. This dissertation shows how to reconcile these seemingly contradictory goals by performing optimizations lazily.
Four new techniques work together to achieve high performance and high responsiveness: • Type feedback achieves high performance by allowing the compiler to inline message sends based on information extracted from the runtime system. On average, programs run 1.5 times faster than the previous SELF system; compared to a commercial Smalltalk implementation, two medium-sized benchmarks run about three times faster. This level of performance is obtained with a compiler that is both simpler and faster than previous SELF compilers. • Adaptive optimization achieves high responsiveness without sacrificing performance by using a fast non-optimizing compiler to generate initial code while automatically recompiling heavily used parts of the program with an optimizing compiler. On a previous-generation workstation like the SPARCstation-2, fewer than 200 pauses exceeded 200 ms during a 50-minute interaction, and 21 pauses exceeded one second. …", "title": "" }, { "docid": "95045efce8527a68485915d8f9e2c6cf", "text": "OBJECTIVES\nTo update the normal stretched penile length values for children younger than 5 years of age. We also evaluated the association between penile length and anthropometric measures such as body weight, height, and body mass index.\n\n\nMETHODS\nThe study was performed as a cross-section study. The stretched penile lengths of 1040 white uncircumcised male infants and children 0 to 5 years of age were measured, and the mean length for each age group and the rate of increase in penile length were calculated. The correlation between penile length and weight, height, and body mass index of the children was determined by Pearson analysis.\n\n\nRESULTS\nThe stretched penile length was 3.65 +/- 0.27 cm in full-term newborns (n = 165) and 3.95 +/- 0.35 cm in children 1 to 3 months old (n = 112), 4.26 +/- 0.40 cm in those 3.1 to 6 months old (n = 130), 4.65 +/- 0.47 cm in those 6.1 to 12 months old (n = 148), 4.82 +/- 0.44 cm in those 12.1 to 24 months old (n = 135), 5.15 +/- 0.46 cm in those 24.1 to 36 months old (n = 120), 5.58 +/- 0.47 cm in those 36.1 to 48 months old (n = 117), and 6.02 +/- 0.50 cm in those 48.1 to 60 months old (n = 113). The fastest rate of increase in penile length was seen in the first 6 months of age, with a value of 1 mm/mo. A significant correlation was found between penile length and the weight, height, and body mass index of the boys (r = 0.881, r = 0.864, and r = 0.173, respectively; P = 0.001).\n\n\nCONCLUSIONS\nThe age-related values of penile length must be known to be able to determine abnormal penile sizes and to monitor treatment of underlying diseases. Our study has provided updated reference values for penile lengths for Turkish and other white boys aged 0 to 5 years.", "title": "" }, { "docid": "58164220c13b39eb5d2ca48139d45401", "text": "There is general agreement that structural similarity — a match in relational structure — is crucial in analogical processing. However, theories differ in their definitions of structural similarity: in particular, in whether there must be conceptual similarity between the relations in the two domains or whether parallel graph structure is sufficient. 
In two studies, we demonstrate, first, that people draw analogical correspondences based on matches in conceptual relations, rather than on purely structural graph matches; and, second, that people draw analogical inferences between passages that have matching conceptual relations, but not between passages with purely structural graph matches.", "title": "" }, { "docid": "a979b0a02f2ade809c825b256b3c69d8", "text": "The objective of this review is to analyze in detail the microscopic structure and relations among muscular fibers, endomysium, perimysium, epimysium and deep fasciae. In particular, the multilayer organization and the collagen fiber orientation of these elements are reported. The endomysium, perimysium, epimysium and deep fasciae have not just a role of containment, limiting the expansion of the muscle with the disposition in concentric layers of the collagen tissue, but are fundamental elements for the transmission of muscular force, each one with a specific role. From this review it appears that the muscular fibers should not be studied as isolated elements, but as a complex inseparable from their fibrous components. The force expressed by a muscle depends not only on its anatomical structure, but also the angle at which its fibers are attached to the intramuscular connective tissue and the relation with the epimysium and deep fasciae.", "title": "" }, { "docid": "af4d150e993258124ba0af211fa26841", "text": "Table of contents: RESUMÉ; PREFACE; TABLE OF CONTENTS; INTRODUCTION; PROBLEM ANALYSIS; EXISTING METHODS; THE VIOLA-JONES FACE DETECTOR; INTRODUCTION TO CHAPTER; METHODS; The scale invariant detector; The modified AdaBoost algorithm; The cascaded classifier; IMPLEMENTATION & RESULTS; Generating positive examples; Generating negative examples; Training a stage in the cascade; Training the cascade; The final face detector; A simple comparison; Discussion; FUTURE WORK; CONCLUSION; APPENDIX 1 LITERATURE LIST AND REFERENCES; APPENDIX 2 CONTENTS OF THE ENCLOSED DVD; APPENDIX 3 IMAGE 2, 3 AND 4 AFTER DETECTION.", "title": "" }, { "docid": "58061318f47a2b96367fe3e8f3cd1fce", "text": "The growth of lymphatic vessels (lymphangiogenesis) is actively involved in a number of pathological processes including tissue inflammation and tumor dissemination but is insufficient in patients suffering from lymphedema, a debilitating condition characterized by chronic tissue edema and impaired immunity. The recent explosion of knowledge on the molecular mechanisms governing lymphangiogenesis provides new possibilities to treat these diseases.", "title": "" }, { "docid": "aa98236ba9b9468b4780a3c8be27b62c", "text": "The final goal of Interpretable Semantic Textual Similarity (iSTS) is to build systems that explain which are the differences and commonalities between two sentences. The task adds an explanatory level on top of STS, formalized as an alignment between the chunks in the two input sentences, indicating the relation and similarity score of each alignment. The task provides train and test data on three datasets: news headlines, image captions and student answers. It attracted nine teams, totaling 20 runs.
All datasets and the annotation guideline are freely available1", "title": "" }, { "docid": "4723129771fb19967d6e55c5e2bcf3e1", "text": "The semantic interpretation of images can benefit from representations of useful concepts and the links between them as ontologies. In this paper, we propose an ontology of spatial relations, in order to guide image interpretation and the recognition of the structures it contains using structural information on the spatial arrangement of these structures. As an original theoretical contribution, this ontology is then enriched by fuzzy representations of concepts, which define their semantics, and allow establishing the link between these concepts (which are often expressed in linguistic terms) and the information that can be extracted from images. This contributes to reducing the semantic gap and it constitutes a new methodological approach to guide semantic image interpretation. This methodological approach is illustrated on a medical example, dealing with knowledge-based recognition of brain structures in 3D magnetic resonance images using the proposed fuzzy spatial relation ontology. © 2008 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "18c56e9d096ba4ea48a0579626f83edc", "text": "PURPOSE\nThe purpose of this study was to provide an overview of platelet-rich plasma (PRP) injected into the scalp for the management of androgenic alopecia.\n\n\nMATERIALS AND METHODS\nA literature review was performed to evaluate the benefits of PRP in androgenic alopecia.\n\n\nRESULTS\nHair restoration has been increasing. PRP's main components of platelet-derived growth factor, transforming growth factor, and vascular endothelial growth factor have the potential to stimulate hard and soft tissue wound healing. In general, PRP showed a benefit on patients with androgenic alopecia, including increased hair density and quality. Currently, different PRP preparations are being used with no standard technique.\n\n\nCONCLUSION\nThis review found beneficial effects of PRP on androgenic alopecia. However, more rigorous study designs, including larger samples, quantitative measurements of effect, and longer follow-up periods, are needed to solidify the utility of PRP for treating patients with androgenic alopecia.", "title": "" }, { "docid": "64e2b73e8a2d12a1f0bbd7d07fccba72", "text": "Point-of-interest (POI) recommendation is an important service to Location-Based Social Networks (LBSNs) that can benefit both users and businesses. In recent years, a number of POI recommender systems have been proposed, but there is still a lack of systematical comparison thereof. In this paper, we provide an allaround evaluation of 12 state-of-the-art POI recommendation models. From the evaluation, we obtain several important findings, based on which we can better understand and utilize POI recommendation models in various scenarios. We anticipate this work to provide readers with an overall picture of the cutting-edge research on POI recommendation.", "title": "" }, { "docid": "c06bfd970592c62f952fa98289f9e3b9", "text": "This paper proposes a new inequality-based criterion/constraint with its algorithmic and computational details for obstacle avoidance of redundant robot manipulators. By incorporating such a dynamically updated inequality constraint and the joint physical constraints (such as joint-angle limits and joint-velocity limits), a novel minimum-velocity-norm (MVN) scheme is presented and investigated for robotic redundancy resolution. 
The resultant obstacle-avoidance MVN scheme resolved at the joint-velocity level is further reformulated as a general quadratic program (QP). Two QP solvers, i.e., a simplified primal-dual neural network based on linear variational inequalities (LVI) and an LVI-based numerical algorithm, are developed and applied for online solution of the QP problem as well as the inequality-based obstacle-avoidance MVN scheme. Simulative results that are based on PA10 robot manipulator and a six-link planar robot manipulator in the presence of window-shaped and point obstacles demonstrate the efficacy and superiority of the proposed obstacle-avoidance MVN scheme. Moreover, experimental results of the proposed MVN scheme implemented on the practical six-link planar robot manipulator substantiate the physical realizability and effectiveness of such a scheme for obstacle avoidance of redundant robot manipulator.", "title": "" }, { "docid": "d9daeb451c69b7eeab8ef00a8ea6af05", "text": "This paper describes the effectiveness of knowledge distillation using teacher student training for building accurate and compact neural networks. We show that with knowledge distillation, information from multiple acoustic models like very deep VGG networks and Long Short-Term Memory (LSTM) models can be used to train standard convolutional neural network (CNN) acoustic models for a variety of systems requiring a quick turnaround. We examine two strategies to leverage multiple teacher labels for training student models. In the first technique, the weights of the student model are updated by switching teacher labels at the minibatch level. In the second method, student models are trained on multiple streams of information from various teacher distributions via data augmentation. We show that standard CNN acoustic models can achieve comparable recognition accuracy with much smaller number of model parameters compared to teacher VGG and LSTM acoustic models. Additionally we also investigate the effectiveness of using broadband teacher labels as privileged knowledge for training better narrowband acoustic models within this framework. We show the benefit of this simple technique by training narrowband student models with broadband teacher soft labels on the Aurora 4 task.", "title": "" }, { "docid": "2579cb11b9d451d6017ebb642d6a35cb", "text": "The presence of bots has been felt in many aspects of social media. Twitter, one example of social media, has especially felt the impact, with bots accounting for a large portion of its users. These bots have been used for malicious tasks such as spreading false information about political candidates and inflating the perceived popularity of celebrities. Furthermore, these bots can change the results of common analyses performed on social media. It is important that researchers and practitioners have tools in their arsenal to remove them. Approaches exist to remove bots, however they focus on precision to evaluate their model at the cost of recall. This means that while these approaches are almost always correct in the bots they delete, they ultimately delete very few, thus many bots remain. We propose a model which increases the recall in detecting bots, allowing a researcher to delete more bots. 
We evaluate our model on two real-world social media datasets and show that our detection algorithm removes more bots from a dataset than current approaches.", "title": "" }, { "docid": "95ff1a86eedad42b0d869cca0d7d6e33", "text": "360° videos give viewers a spherical view and immersive experience of surroundings. However, one challenge of watching 360° videos is continuously focusing and re-focusing intended targets. To address this challenge, we developed two Focus Assistance techniques: Auto Pilot (directly bringing viewers to the target), and Visual Guidance (indicating the direction of the target). We conducted an experiment to measure viewers' video-watching experience and discomfort using these techniques and obtained their qualitative feedback. We showed that: 1) Focus Assistance improved ease of focus. 2) Focus Assistance techniques have specificity to video content. 3) Participants' preference of and experience with Focus Assistance depended not only on individual difference but also on their goal of watching the video. 4) Factors such as view-moving-distance, salience of the intended target and guidance, and language comprehension affected participants' video-watching experience. Based on these findings, we provide design implications for better 360° video focus assistance.", "title": "" }, { "docid": "0885f805c8a5226642c28904b5df6818", "text": "Blind people need some aid to feel safe while moving. Smart stick comes as a proposed solution to improve the mobility of both blind and visually impaired people. Stick solution use different technologies like ultrasonic, infrared and laser but they still have drawbacks. In this paper we propose, light weight, cheap, user friendly, fast response and low power consumption, smart stick based on infrared technology. A pair of infrared sensors can detect stair-cases and other obstacles presence in the user path, within a range of two meters. The experimental results achieve good accuracy and the stick is able to detect all of obstacles.", "title": "" } ]
scidocsrr
48bca357490b39bf6df44ebe16bb7579
RETracer: Triaging Crashes by Reverse Execution from Partial Memory Dumps
[ { "docid": "09aa131819a67f8569ca4dba27ce207d", "text": "A widely shared belief in the software engineering community is that stack traces are much sought after by developers to support them in debugging. But limited empirical evidence is available to confirm the value of stack traces to developers. In this paper, we seek to provide such evidence by conducting an empirical study on the usage of stack traces by developers from the ECLIPSE project. Our results provide strong evidence to this effect and also throws light on some of the patterns in bug fixing using stack traces. We expect the findings of our study to further emphasize the importance of adding stack traces to bug reports and that in the future, software vendors will provide more support in their products to help general users make such information available when filing bug reports.", "title": "" } ]
[ { "docid": "fbb6c8566fbe79bf8f78af0dc2dedc7b", "text": "Automatic essay evaluation (AEE) systems are designed to assist a teacher in the task of classroom assessment in order to alleviate the demands of manual subject evaluation. However, although numerous AEE systems are available, most of these systems do not use elaborate domain knowledge for evaluation, which limits their ability to give informative feedback to students and also their ability to constructively grade a student based on a particular domain of study. This paper is aimed at improving on the achievements of previous studies by providing a subject-focussed evaluation system that considers the domain knowledge while scoring and provides informative feedback to its user. The study employs a combination of techniques such as system design and modelling using Unified Modelling Language (UML), information extraction, ontology development, data management, and semantic matching in order to develop a prototype subject-focussed AEE system. The developed system was evaluated to determine its level of performance and usability. The result of the usability evaluation showed that the system has an overall mean rating of 4.17 out of maximum of 5, which indicates ‘good usability’. In terms of performance, the assessment done by the system was also found to have sufficiently high correlation with those done by domain experts, in addition to providing appropriate feedback to the user.", "title": "" }, { "docid": "5e7a87078f92b7ce145e24a2e7340f1b", "text": "Unsupervised artificial neural networks are now considered as a likely alternative to classical computing models in many application domains. For example, recent neural models defined by neuro-scientists exhibit interesting properties for an execution in embedded and autonomous systems: distributed computing, unsupervised learning, self-adaptation, self-organisation, tolerance. But these properties only emerge from large scale and fully connected neural maps that result in intensive computation coupled with high synaptic communications. We are interested in deploying these powerful models in the embedded context of an autonomous bio-inspired robot learning its environment in realtime. So we study in this paper in what extent these complex models can be simplified and deployed in hardware accelerators compatible with an embedded integration. Thus we propose a Neural Processing Unit designed as a programmable accelerator implementing recent equations close to self-organizing maps and neural fields. The proposed architecture is validated on FPGA devices and compared to state of the art solutions. The trade-off proposed by this dedicated but programmable neural processing unit allows to achieve significant improvements and makes our architecture adapted to many embedded systems.", "title": "" }, { "docid": "e022d5b292d391e201d15e8b2317bc30", "text": "This article describes the most prominent approaches to apply artificial intelligence technologies to information retrieval (IR). Information retrieval is a key technology for knowledge management. It deals with the search for information and the representation, storage and organization of knowledge. Information retrieval is concerned with search processes in which a user needs to identify a subset of information which is relevant for his information need within a large amount of knowledge. The information seeker formulates a query trying to describe his information need. 
The query is compared to document representations which were extracted during an indexing phase. The representations of documents and queries are typically matched by a similarity function such as the Cosine. The most similar documents are presented to the users who can evaluate the relevance with respect to their problem (Belkin, 2000). The problem to properly represent documents and to match imprecise representations has soon led to the application of techniques developed within Artificial Intelligence to information retrieval.", "title": "" }, { "docid": "21511302800cd18d21dbc410bec3cbb2", "text": "We investigate theoretical and practical aspects of the design of far-field RF power extraction systems consisting of antennas, impedance matching networks and rectifiers. Fundamental physical relationships that link the operating bandwidth and range are related to technology dependent quantities like threshold voltage and parasitic capacitances. This allows us to design efficient planar antennas, coupled resonator impedance matching networks and low-power rectifiers in standard CMOS technologies (0.5-mum and 0.18-mum) and accurately predict their performance. Experimental results from a prototype power extraction system that operates around 950 MHz and integrates these components together are presented. Our measured RF power-up threshold (in 0.18-mum, at 1 muW load) was 6 muWplusmn10%, closely matching the predicted value of 5.2 muW.", "title": "" }, { "docid": "f7121b434ae326469780f300256367a8", "text": "Aerial Manipulators (AMs) are a special class of underactuated mechanical systems formed by the join of Unmanned Aerial Vehicles (UAVs) and manipulators. A thorough analysis of the dynamics and a fully constructive controller design for a quadrotor plus n-link manipulator in a free-motion on an arbitrary plane is provided, via the lDA-PBC methodology. A controller is designed with the manipulator locked at any position ensuring global asymptotic stability in an open set and avoiding the AM goes upside down (autonomous). The major result of stability/robustness arises when it is proved that, additionally, the controller guarantees the boundedness of the trajectories for bounded movements of the manipulator, i.e. the robot manipulator executing planned tasks, giving rise to a non-autonomous port-controlled Hamiltonian system in closed loop. Moreover, all trajectories converge to a positive limit set, a strong result for matching-type controllers.", "title": "" }, { "docid": "c72940e6154fa31f6bedca17336f8a94", "text": "Following on from ecological theories of perception, such as the one proposed by [Gibson, J. J. (1966). The senses considered as perceptual systems. Boston: Houghton Mifflin] this paper reviews the literature on the multisensory interactions underlying the perception of flavor in order to determine the extent to which it is really appropriate to consider flavor perception as a distinct perceptual system. We propose that the multisensory perception of flavor may be indicative of the fact that the taxonomy currently used to define our senses is simply not appropriate. 
According to the view outlined here, the act of eating allows the different qualities of foodstuffs to be combined into unified percepts; and flavor can be used as a term to describe the combination of tastes, smells, trigeminal, and tactile sensations as well as the visual and auditory cues, that we perceive when tasting food.", "title": "" }, { "docid": "99a874fd9545649f517eb2a949a9b934", "text": "Sensor miniaturisation, improved battery technology and the availability of low-cost yet advanced Unmanned Aerial Vehicles (UAV) have provided new opportunities for environmental remote sensing. The UAV provides a platform for close-range aerial photography. Detailed imagery captured from micro-UAV can produce dense point clouds using multi-view stereopsis (MVS) techniques combining photogrammetry and computer vision. This study applies MVS techniques to imagery acquired from a multi-rotor micro-UAV of a natural coastal site in southeastern Tasmania, Australia. A very dense point cloud (<1–3 cm point spacing) is produced in an arbitrary coordinate system using full resolution imagery, whereas other studies usually downsample the original imagery. The point cloud is sparse in areas of complex vegetation and where surfaces have a homogeneous texture. Ground control points collected with Differential Global Positioning System (DGPS) are identified and used for georeferencing via a Helmert transformation. This study compared georeferenced point clouds to a Total Station survey in order to assess and quantify their geometric accuracy. The results indicate that a georeferenced point cloud accurate to 25–40 mm can be obtained from imagery acquired from ∼50 m. UAV-based image capture provides the spatial and temporal resolution required to map and monitor natural landscapes. This paper assesses the accuracy of the generated point clouds based on field survey points. Based on our key findings we conclude that sub-decimetre terrain change (in this case coastal erosion) can be monitored. Remote Sens. 2012, 4 1574", "title": "" }, { "docid": "d159ddace8c8d33963a304e04484aeff", "text": "This work addresses the problem of semantic scene understanding under fog. Although marked progress has been made in semantic scene understanding, it is mainly concentrated on clear-weather scenes. Extending semantic segmentation methods to adverse weather conditions such as fog is crucial for outdoor applications. In this paper, we propose a novel method, named Curriculum Model Adaptation (CMAda), which gradually adapts a semantic segmentation model from light synthetic fog to dense real fog in multiple steps, using both labeled synthetic foggy data and unlabeled real foggy data. The method is based on the fact that the results of semantic segmentation in moderately adverse conditions (light fog) can be bootstrapped to solve the same problem in highly adverse conditions (dense fog). CMAda is extensible to other adverse conditions and provides a new paradigm for learning with synthetic data and unlabeled real data. In addition, we present three other main stand-alone contributions: 1) a novel method to add synthetic fog to real, clear-weather scenes using semantic input; 2) a new fog density estimator; 3) a novel fog densification method to densify the fog in real foggy scenes without known depth; and 4) the Foggy Zurich dataset comprising 3808 real foggy images, with pixel-level semantic annotations for 40 images under dense fog. 
Our experiments show that 1) our fog simulation and fog density estimator outperform their state-of-the-art counterparts with respect to the task of semantic foggy scene understanding (SFSU); 2) CMAda improves the performance of state-of-the-art models for SFSU significantly, benefiting both from our synthetic and real foggy data. The datasets and code are available at the project website.", "title": "" }, { "docid": "9027d974a3bb5c48c1d8f3103e6035d6", "text": "The creation of memories about real-life episodes requires rapid neuronal changes that may appear after a single occurrence of an event. How is such demand met by neurons in the medial temporal lobe (MTL), which plays a fundamental role in episodic memory formation? We recorded the activity of MTL neurons in neurosurgical patients while they learned new associations. Pairs of unrelated pictures, one of a person and another of a place, were used to construct a meaningful association modeling the episodic memory of meeting a person in a particular place. We found that a large proportion of responsive MTL neurons expanded their selectivity to encode these specific associations within a few trials: cells initially responsive to one picture started firing to the associated one but not to others. Our results provide a plausible neural substrate for the inception of associations, which are crucial for the formation of episodic memories.", "title": "" }, { "docid": "c0db65ce1428099d5bfb00071d820096", "text": "With the rise of soft robotics technology and applications, there has been increasing interest in the development of controllers appropriate for their particular design. Being fundamentally different from traditional rigid robots, there is still not a unified framework for the design, analysis, and control of these high-dimensional robots. This review article attempts to provide an insight into various controllers developed for continuum/soft robots as a guideline for future applications in the soft robotics field. A comprehensive assessment of various control strategies and an insight into the future areas of research in this field are presented.", "title": "" }, { "docid": "700b1a3fd913d2980f87def5540938f1", "text": "Foursquare is an online social network and can be represented with a bipartite network of users and venues. A user-venue pair is connected if a user has checked in at that venue. In the case of Foursquare, network analysis techniques can be used to enhance the user experience. One such technique is link prediction, which can be used to build a personalized recommendation system of venues. Recommendation systems in bipartite networks are very often designed using the global ranking method and collaborative filtering. A less known method, network-based inference, is also a feasible choice for link prediction in bipartite networks and sometimes performs better than the previous two. In this paper we test these techniques on the Foursquare network. The best technique proves to be network-based inference. We also show that taking into account the available metadata can be beneficial.", "title": "" }, { "docid": "ea0b94e3ad27603d45f56de039c39388", "text": "Recent work on generative text modeling has found that variational autoencoders (VAE) with LSTM decoders perform worse than simpler LSTM language models (Bowman et al., 2015).
This negative result is so far poorly understood, but has been attributed to the propensity of LSTM decoders to ignore conditioning information from the encoder. In this paper, we experiment with a new type of decoder for VAE: a dilated CNN. By changing the decoder's dilation architecture, we control the size of context from previously generated words. In experiments, we find that there is a trade-off between contextual capacity of the decoder and effective use of encoding information. We show that when carefully managed, VAEs can outperform LSTM language models. We demonstrate perplexity gains on two datasets, representing the first positive language modeling result with VAE. Further, we conduct an in-depth investigation of the use of VAE (with our new decoding architecture) for semi-supervised and unsupervised labeling tasks, demonstrating gains over several strong baselines.", "title": "" }, { "docid": "4cc4a6644e367afacee006fdb9f5e68a", "text": "A lifetime optimization methodology for planning the inspection and repair of structures that deteriorate over time is introduced and illustrated through numerical examples. The optimization is based on minimizing the expected total life-cycle cost while maintaining an allowable lifetime reliability for the structure. This method incorporates: (a) the quality of inspection techniques with different detection capabilities; (b) all repair possibilities based on an event tree; (c) the effects of aging, deterioration, and subsequent repair on structural reliability; and (d) the time value of money. The overall cost to be minimized includes the initial cost and the costs of preventive maintenance, inspection, repair, and failure. The methodology is illustrated using the reinforced concrete T-girders from a highway bridge. An optimum inspection/repair strategy is developed for these girders that are deteriorating due to corrosion in an aggressive environment. The effect of critical parameters such as rate of corrosion, quality of the inspection technique, and the expected cost of structural failure are all investigated, along with the effects of both uniform and nonuniform inspection time intervals. Ultimately, the reliability-based lifetime approach to developing an optimum inspection/repair strategy demonstrates the potential for cost savings and improved efficiency. INTRODUCTION The management of the nation's infrastructure is a vitally important function of government. The inspection and repair of the transportation network is needed for uninterrupted commerce and a functioning economy. With about 600,000 highway bridges in the national inventory, the maintenance of these structures alone represents a commitment of billions of dollars annually. In fact, the nation spends at least $5,000,000,000 per year for highway bridge design, construction, replacement, and rehabilitation (Status 1993). Given this huge investment along with an increasing scarcity of resources, it is essential that the funds be used as efficiently as possible. Highway bridges deteriorate over time and need maintenance/inspection programs that detect damage, deterioration, loss of effective strength in members, missing fasteners, fractures, and cracks. Bridge serviceability is highly dependent on the frequency and quality of these maintenance programs. Because the welfare of many people depends on the health of the highway system, it is important that these bridges be maintained and inspected routinely.
An efficient bridge maintenance program requires careful planning based on potential modes of failure of the structural elements, the history of major structural repairs done to the bridge, and, of course, the frequency and intensity of the applied loads. Effective maintenance/inspection can extend the life expectancy of a system while reducing the possibility of costly failures in the future. In any bridge, there are many defects that may appear during a projected service period, such as potholes in the deck, scour on the piers, or the deterioration of joints or bearings. Corrosion of steel reinforcement, initiated by high chloride concentrations in the concrete, is a serious cause of degradation in concrete structures (Ting 1989). The corrosion damage is revealed by the initiation and propagation of cracks, which can be detected and repaired by scheduled maintenance and inspection procedures. As a result, the reliability of corrosion-critical structures depends not only on the structural design, but also on the inspection and repair procedures. This paper proposes a method to optimize the lifetime inspection/repair strategy of corrosion-critical concrete structures based on the reliability of the structure and cost-effectiveness. The method is applicable for any type of damage whose evolution can be modeled over time. The reliability-based analysis of structures, with or without maintenance/inspection procedures, is attracting the increased attention of researchers (Thoft-Christensen and Sørensen 1987; Mori and Ellingwood 1994a). The optimal lifetime inspection/repair strategy is obtained by minimizing the expected total life-cycle cost while satisfying the constraints on the allowable level of structural lifetime reliability in service. The expected total life-cycle cost includes the initial cost and the costs of preventive maintenance, inspection, repair, and failure. MAINTENANCE/INSPECTION For many bridges, both preventive and repair maintenance are typically performed. Preventive or routine maintenance includes replacing small parts, patching concrete, repairing cracks, changing lubricants, and cleaning and painting exposed parts. The structure is kept in working condition by delaying and mitigating the aging effects of wear, fatigue, and related phenomena. In contrast, repair maintenance might include replacing a bearing, resurfacing a deck, or modifying a girder. Repair maintenance tends to be less frequent, requires more effort, is usually more costly, and results in a measurable increase in reliability. A sample maintenance strategy is shown in Fig. 1, where T1, T2, T3, and T4 represent the times of repair maintenance, and effort is a generic quantity that reflects cost, amount of work performed, and benefit derived from the maintenance. While guidance for routine maintenance exists, many repair maintenance strategies are based on experience and local practice rather than on sound theoretical investigations.
Mainte-", "title": "" }, { "docid": "b0382aa0f8c8171b78dba1c179554450", "text": "This paper is concerned with the hard thresholding operator which sets all but the k largest absolute elements of a vector to zero. We establish a tight bound to quantitatively characterize the deviation of the thresholded solution from a given signal. Our theoretical result is universal in the sense that it holds for all choices of parameters, and the underlying analysis depends only on fundamental arguments in mathematical optimization. We discuss the implications for two domains: Compressed Sensing. On account of the crucial estimate, we bridge the connection between the restricted isometry property (RIP) and the sparsity parameter for a vast volume of hard thresholding based algorithms, which renders an improvement on the RIP condition especially when the true sparsity is unknown. This suggests that in essence, many more kinds of sensing matrices or fewer measurements are admissible for the data acquisition procedure. Machine Learning. In terms of large-scale machine learning, a significant yet challenging problem is learning accurate sparse models in an efficient manner. In stark contrast to prior work that attempted the ℓ1-relaxation for promoting sparsity, we present a novel stochastic algorithm which performs hard thresholding in each iteration, hence ensuring such parsimonious solutions. Equipped with the developed bound, we prove the global linear convergence for a number of prevalent statistical models under mild assumptions, even though the problem turns out to be non-convex.", "title": "" }, { "docid": "a7c37a5ee66fb2db6288a6314bdea78f", "text": "Radial, space-filling visualizations can be useful for depicting information hierarchies, but they suffer from one major problem. As the hierarchy grows in size, many items become small, peripheral slices that are difficult to distinguish. We have developed three visualization/interaction techniques that provide flexible browsing of the display. The techniques allow viewers to examine the small items in detail while providing context within the entire information hierarchy. Additionally, smooth transitions between views help users maintain orientation within the complete information space.", "title": "" }, { "docid": "10512cddabf509100205cb241f2f206a", "text": "Due to the increasing growth of Internet usage, cybercrime has been increasing at an alarming rate and has become a highly profitable criminal activity. A botnet is an emerging threat to cyber security, and the existence of a command and control server (C&C server) makes it a very dangerous attack compared to all other malware attacks. A botnet is a network of compromised machines that are remotely controlled by a bot master to carry out various malicious activities with the help of a command and control server and a number of slave machines called bots. The main motives behind botnets are identity theft, denial-of-service attacks, click fraud, phishing, and many other malware activities. Botnets rely on different protocols such as IRC, HTTP and P2P for transmission. Different botnet detection techniques have been proposed in recent years.
The paper highlights recent research work on botnets in the cyber realm and proposes directions for future research in this area.", "title": "" }, { "docid": "83ac82ef100fdf648a5214a50d163fe3", "text": "We consider the problem of multi-robot task allocation when robots have to deal with uncertain utility estimates. Typically an allocation is performed to maximize expected utility; we consider a means for measuring the robustness of a given optimal allocation when robots have some measure of the uncertainty (e.g., a probability distribution, or moments of such distributions). We introduce a new O(n) algorithm, the Interval Hungarian algorithm, that extends the classic Kuhn-Munkres Hungarian algorithm to compute the maximum interval of deviation (for each entry in the assignment matrix) which will retain the same optimal assignment. This provides an efficient measurement of the tolerance of the allocation to the uncertainties, for both a specific interval and a set of interrelated intervals. We conduct experiments both in simulation and with physical robots to validate the approach and to gain insight into the effect of location uncertainty on allocations for multi-robot multi-target navigation tasks.", "title": "" }, { "docid": "cc05dca89bf1e3f53cf7995e547ac238", "text": "Ensembles of randomized decision trees, known as Random Forests, have become a valuable machine learning tool for addressing many computer vision problems. Despite their popularity, few works have tried to exploit contextual and structural information in random forests in order to improve their performance. In this paper, we propose a simple and effective way to integrate contextual information in random forests, which is typically reflected in the structured output space of complex problems like semantic image labelling. Our paper has several contributions: We show how random forests can be augmented with structured label information and be used to deliver structured low-level predictions. The learning task is carried out by employing a novel split function evaluation criterion that exploits the joint distribution observed in the structured label space. This allows the forest to learn typical label transitions between object classes and avoid locally implausible label configurations. We provide two approaches for integrating the structured output predictions obtained at a local level from the forest into a concise, global, semantic labelling. We integrate our new ideas also in the Hough-forest framework with the view of exploiting contextual information at the classification level to improve the performance on the task of object detection. Finally, we provide experimental evidence for the effectiveness of our approach on different tasks: Semantic image labelling on the challenging MSRCv2 and CamVid databases, reconstruction of occluded handwritten Chinese characters on the Kaist database and pedestrian detection on the TU Darmstadt databases.", "title": "" }, { "docid": "fb89fd2d9bf526b8bc7f1433274859a6", "text": "In multidimensional image analysis, there are, and will continue to be, situations wherein automatic image segmentation methods fail, calling for considerable user assistance in the process. The main goals of segmentation research for such situations ought to be (i) to provide effective control to the user on the segmentation process while it is being executed, and (ii) to minimize the total user's time required in the process.
With these goals in mind, we present in this paper two paradigms, referred to as live wire and live lane, for practical image segmentation in large applications. For both approaches, we think of the pixel vertices and oriented edges as forming a graph, assign a set of features to each oriented edge to characterize its "boundariness," and transform feature values to costs. We provide training facilities and automatic optimal feature and transform selection methods so that these assignments can be made with consistent effectiveness in any application. In live wire, the user first selects an initial point on the boundary. For any subsequent point indicated by the cursor, an optimal path from the initial point to the current point is found and displayed in real time. The user thus has a live wire on hand which is moved by moving the cursor. If the cursor goes close to the boundary, the live wire snaps onto the boundary. At this point, if the live wire describes the boundary appropriately, the user deposits the cursor which now becomes the new starting point and the process continues. A few points (live-wire segments) are usually adequate to segment the whole 2D boundary. In live lane, the user selects only the initial point. Subsequent points are selected automatically as the cursor is moved within a lane surrounding the boundary whose width changes", "title": "" }, { "docid": "c973dc425e0af0f5253b71ae4ebd40f9", "text": "A growing body of research on Bitcoin and other permissionless cryptocurrencies that utilize Nakamoto's blockchain has shown that they do not easily scale to process a high throughput of transactions, or to quickly approve individual transactions; blocks must be kept small, and their creation rates must be kept low in order to allow nodes to reach consensus securely. As of today, Bitcoin processes a mere 3-7 transactions per second, and transaction confirmation takes at least several minutes. We present SPECTRE, a new protocol for the consensus core of crypto-currencies that remains secure even under high throughput and fast confirmation times. At any throughput, SPECTRE is resilient to attackers with up to 50% of the computational power (up until the limit defined by network congestion and bandwidth constraints). SPECTRE can operate at high block creation rates, which implies that its transactions confirm in mere seconds (limited mostly by the round-trip-time in the network). Key to SPECTRE's achievements is the fact that it satisfies weaker properties than classic consensus requires. In the conventional paradigm, the order between any two transactions must be decided and agreed upon by all non-corrupt nodes. In contrast, SPECTRE only satisfies this with respect to transactions performed by honest users. We observe that in the context of money, two conflicting payments that are published concurrently could only have been created by a dishonest user, hence we can afford to delay the acceptance of such transactions without harming the usability of the system. Our framework formalizes this weaker set of requirements for a crypto-currency's distributed ledger. We then provide a formal proof that SPECTRE satisfies these requirements.", "title": "" } ]
scidocsrr
f733c12163b7ad9cafd560d8fe668e72
Extraction of Salient Sentences from Labelled Documents
[ { "docid": "6eeeb343309fc24326ed42b62d5524b1", "text": "We introduce a novel machine learning framework based on recursive autoencoders for sentence-level prediction of sentiment label distributions. Our method learns vector space representations for multi-word phrases. In sentiment prediction tasks these representations outperform other state-of-the-art approaches on commonly used datasets, such as movie reviews, without using any pre-defined sentiment lexica or polarity shifting rules. We also evaluate the model’s ability to predict sentiment distributions on a new dataset based on confessions from the experience project. The dataset consists of personal user stories annotated with multiple labels which, when aggregated, form a multinomial distribution that captures emotional reactions. Our algorithm can more accurately predict distributions over such labels compared to several competitive baselines.", "title": "" }, { "docid": "64330f538b3d8914cbfe37565ab0d648", "text": "The compositionality of meaning extends beyond the single sentence. Just as words combine to form the meaning of sentences, so do sentences combine to form the meaning of paragraphs, dialogues and general discourse. We introduce both a sentence model and a discourse model corresponding to the two levels of compositionality. The sentence model adopts convolution as the central operation for composing semantic vectors and is based on a novel hierarchical convolutional neural network. The discourse model extends the sentence model and is based on a recurrent neural network that is conditioned in a novel way both on the current sentence and on the current speaker. The discourse model is able to capture both the sequentiality of sentences and the interaction between different speakers. Without feature engineering or pretraining and with simple greedy decoding, the discourse model coupled to the sentence model obtains state of the art performance on a dialogue act classification experiment.", "title": "" }, { "docid": "062c970a14ac0715ccf96cee464a4fec", "text": "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. 
We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.", "title": "" }, { "docid": "7b908fa217f75f75254ccbb433818416", "text": "This paper presents a new approach to perform the estimation of the translation model probabilities of a phrase-based statistical machine translation system. We use neural networks to directly learn the translation probability of phrase pairs using continuous representations. The system can be easily trained on the same data used to build standard phrase-based systems. We provide experimental evidence that the approach seems to be able to infer meaningful translation probabilities for phrase pairs not seen in the training data, or even predict a list of the most likely translations given a source phrase. The approach can be used to rescore n-best lists, but we also discuss an integration into the Moses decoder. A preliminary evaluation on the English/French IWSLT task achieved improvements in the BLEU score and a human analysis showed that the new model often chooses semantically better translations. Several extensions of this work are discussed.", "title": "" }, { "docid": "55b9284f9997b18d3b1fad9952cd4caa", "text": "This paper presents a system which learns to answer questions on a broad range of topics from a knowledge base using few handcrafted features. Our model learns low-dimensional embeddings of words and knowledge base constituents; these representations are used to score natural language questions against candidate answers. Training our system using pairs of questions and structured representations of their answers, and pairs of question paraphrases, yields competitive results on a recent benchmark of the literature.", "title": "" } ]
[ { "docid": "480fe848464a80774e3b7963e53c09d8", "text": "We are witnessing daily acquisition of large amounts of aerial and satellite imagery. Analysis of such large quantities of data can be helpful for many practical applications. In this letter, we present an automatic content-based analysis of aerial imagery in order to detect and mark arbitrary objects or regions in high-resolution images. For that purpose, we proposed a method for automatic object detection based on a convolutional neural network. A novel two-stage approach for network training is implemented and verified in the tasks of aerial image classification and object detection. First, we tested the proposed training approach using UCMerced data set of aerial images and achieved accuracy of approximately 98.6%. Second, the method for automatic object detection was implemented and verified. For implementation on GPGPU, a required processing time for one aerial image of size 5000 × 5000 pixels was around 30 s.", "title": "" }, { "docid": "605b95e3c0448b5ce9755ce6289894d7", "text": "Website success hinges on how credible the consumers consider the information on the website. Unless consumers believe the website's information is credible, they are not likely to be willing to act on the advice and will not develop loyalty to the website. This paper reports on how individual differences and initial website impressions affect perceptions of information credibility of an unfamiliar advice website. Results confirm that several individual difference variables and initial impression variables (perceived reputation, perceived website quality, and willingness to explore the website) play an important role in developing information credibility of an unfamiliar website, with first impressions and individual differences playing equivalent roles. The study also confirms the import of information credibility by demonstrating it positively influences perceived usefulness, perceived site risk, willingness to act on website advice, and perceived consumer loyalty toward the website.", "title": "" }, { "docid": "24bd9a2f85b33b93609e03fc67e9e3a9", "text": "With the rapid development of high-throughput technologies, researchers can sequence the whole metagenome of a microbial community sampled directly from the environment. The assignment of these metagenomic reads into different species or taxonomical classes is a vital step for metagenomic analysis, which is referred to as binning of metagenomic data. In this paper, we propose a new method TM-MCluster for binning metagenomic reads. First, we represent each metagenomic read as a set of \"k-mers\" with their frequencies occurring in the read. Then, we employ a probabilistic topic model -- the Latent Dirichlet Allocation (LDA) model to the reads, which generates a number of hidden \"topics\" such that each read can be represented by a distribution vector of the generated topics. Finally, as in the MCluster method, we apply SKWIC -- a variant of the classical K-means algorithm with automatic feature weighting mechanism to cluster these reads represented by topic distributions. Experiments show that the new method TM-MCluster outperforms major existing methods, including AbundanceBin, MetaCluster 3.0/5.0 and MCluster. 
This result indicates that the exploitation of topic modeling can effectively improve the binning performance of metagenomic reads.", "title": "" }, { "docid": "318daea2ef9b0d7afe2cb08edcfe6025", "text": "Stock market prediction has become an attractive investigation topic due to its important role in economy and beneficial offers. There is an imminent need to uncover the stock market future behavior in order to avoid investment risks. The large amount of data generated by the stock market is considered a treasure of knowledge for investors. This study aims at constructing an effective model to predict stock market future trends with small error ratio and improve the accuracy of prediction. This prediction model is based on sentiment analysis of financial news and historical stock market prices. This model provides better accuracy results than all previous studies by considering multiple types of news related to market and company with historical stock prices. A dataset containing stock prices from three companies is used. The first step is to analyze news sentiment to get the text polarity using naïve Bayes algorithm. This step achieved prediction accuracy results ranging from 72.73% to 86.21%. The second step combines news polarities and historical stock prices together to predict future stock prices. This improved the prediction accuracy up to 89.80%.", "title": "" }, { "docid": "fc6726bddf3d70b7cb3745137f4583c1", "text": "Maximum power point tracking (MPPT) is a very important necessity in a system of energy conversion from a renewable energy source. Many research papers have been produced with various schemes over past decades for the MPPT in photovoltaic (PV) system. This research paper inspires its motivation from the fact that the keen study of these existing techniques reveals that there is still quite a need for an absolutely generic and yet very simple MPPT controller which should have all the following traits: total independence from system's parameters, ability to reach the global maxima in minimal possible steps, the correct sense of tracking direction despite the abrupt atmospheric or parametrical changes, and finally having a very cost-effective and energy efficient hardware with the complexity no more than that of a minimal MPPT algorithm like Perturb and Observe (P&O). The MPPT controller presented in this paper is a successful attempt to fulfil all these requirements. It extends the MPPT techniques found in the recent research papers with some innovations in the control algorithm and a simplistic hardware. The simulation results confirm that the proposed MPPT controller is very fast, very efficient, very simple and low cost as compared to the contemporary ones.", "title": "" }, { "docid": "39340461bb4e7352ab6af3ce10460bd7", "text": "This paper presents an 8 bit 1.8 V 500 MSPS digital- to analog converter using 0.18mum double poly five metal CMOS technology for frequency domain applications. The proposed DAC is composed of four unit cell matrix. A novel decoding logic is used to remove the inter block code transition (IBT) glitch. The proposed DAC shows less number of switching for a monotonic input and the product of number of switching and the current value associated with switching is also less than the segmented DAC. The SPICE simulated DNL and INL is 0.1373 LSB and 0.331 LSB respectively and are better than the segmented DAC. The proposed DAC also shows better SNDR and THD than the segmented DAC. 
The MATLAB simulated THD, SFDR and SNDR is more than 45 dB, 35 dB and 44 dB respectively at 500MS/s with a 10 MHz input sine wave with incoherent timing response between current switches.", "title": "" }, { "docid": "3d11b4b645a32ff0d269fc299e7cf646", "text": "The static one-to-one binding of hosts to IP addresses allows adversaries to conduct thorough reconnaissance in order to discover and enumerate network assets. Specifically, this fixed address mapping allows distributed network scanners to aggregate information gathered at multiple locations over different times in order to construct an accurate and persistent view of the network. The unvarying nature of this view enables adversaries to collaboratively share and reuse their collected reconnaissance information in various stages of attack planning and execution. This paper presents a novel moving target defense (MTD) technique which enables host-to-IP binding of each destination host to vary randomly across the network based on the source identity (spatial randomization) as well as time (temporal randomization). This spatio-temporal randomization will distort attackers' view of the network by causing the collected reconnaissance information to expire as adversaries transition from one host to another or if they stay long enough in one location. Consequently, adversaries are forced to re-scan the network frequently at each location or over different time intervals. These recurring probings significantly raises the bar for the adversaries by slowing down the attack progress, while improving its detectability. We introduce three novel metrics for quantifying the effectiveness of MTD defense techniques: deterrence, deception, and detectability. Using these metrics, we perform rigorous theoretical and experimental analysis to evaluate the efficacy of this approach. These analyses show that our approach is effective in countering a significant number of sophisticated threat models including collaborative reconnaissance, worm propagation, and advanced persistent threat (APT), in an evasion-free manner.", "title": "" }, { "docid": "050ca96de473a83108b5ac26f4ac4349", "text": "The concept of graphene-based two-dimensional leaky-wave antenna (LWA), allowing both frequency tuning and beam steering in the terahertz band, is proposed in this paper. In its design, a graphene sheet is used as a tuning part of the high-impedance surface (HIS) that acts as the ground plane of such 2-D LWA. It is shown that, by adjusting the graphene conductivity, the reflection phase of the HIS can be altered effectively, thus controlling the resonant frequency of the 2-D LWA over a broad band. In addition, a flexible adjustment of its pointing direction can be achieved over a wide range, while keeping the operating frequency fixed. Transmission-line methods are used to accurately predict the antenna reconfigurable characteristics, which are further verified by means of commercial full-wave analysis tools.", "title": "" }, { "docid": "4535a5961d6628f2f4bafb1d99821bbb", "text": "The prevalence of diabetes has dramatically increased worldwide due to the vast increase in the obesity rate. Diabetic nephropathy is one of the major complications of type 1 and type 2 diabetes and it is currently the leading cause of end-stage renal disease. Hyperglycemia is the driving force for the development of diabetic nephropathy. It is well known that hyperglycemia increases the production of free radicals resulting in oxidative stress. 
While increases in oxidative stress have been shown to contribute to the development and progression of diabetic nephropathy, the mechanisms by which this occurs are still being investigated. Historically, diabetes was not thought to be an immune disease; however, there is increasing evidence supporting a role for inflammation in type 1 and type 2 diabetes. Inflammatory cells, cytokines, and profibrotic growth factors including transforming growth factor-β (TGF-β), monocyte chemoattractant protein-1 (MCP-1), connective tissue growth factor (CTGF), tumor necrosis factor-α (TNF-α), interleukin-1 (IL-1), interleukin-6 (IL-6), interleukin-18 (IL-18), and cell adhesion molecules (CAMs) have all been implicated in the pathogenesis of diabetic nephropathy via increased vascular inflammation and fibrosis. The stimulus for the increase in inflammation in diabetes is still under investigation; however, reactive oxygen species are a primary candidate. Thus, targeting oxidative stress-inflammatory cytokine signaling could improve therapeutic options for diabetic nephropathy. The current review will focus on understanding the relationship between oxidative stress and inflammatory cytokines in diabetic nephropathy to help elucidate the question of which comes first in the progression of diabetic nephropathy, oxidative stress, or inflammation.", "title": "" }, { "docid": "ef4ea289a20a833df9495f7bbe8d337f", "text": "Plant growth and development are adversely affected by salinity – a major environmental stress that limits agricultural production. This chapter provides an overview of the physiological mechanisms by which growth and development of crop plants are affected by salinity. The initial phase of growth reduction is due to an osmotic effect, is similar to the initial response to water stress and shows little genotypic differences. The second, slower effect is the result of salt toxicity in leaves. In the second phase a salt sensitive species or genotype differs from a more salt tolerant one by its inability to prevent salt accumulation in leaves to toxic levels. Most crop plants are salt tolerant at germination but salt sensitive during emergence and vegetative development. Root and shoot growth is inhibited by salinity; however, supplemental Ca partly alleviates the growth inhibition. The Ca effect appears related to the maintenance of plasma membrane selectivity for K over Na. Reproductive development is considered less sensitive to salt stress than vegetative growth, although in wheat salt stress can hasten reproductive growth, inhibit spike development and decrease the yield potential, whereas in the more salt sensitive rice, low yield is primarily associated with reduction in tillers, and by sterile spikelets in some cultivars. Plants with improved salt tolerance must thrive under saline field conditions with numerous additional stresses. Salinity shows interactions with several stresses, among others with boron toxicity, but the mechanisms of salinity-boron interactions are still poorly known. To better understand crop tolerance under saline field conditions, future research should focus on tolerance of crops to a combination of stresses", "title": "" }, { "docid": "45c917e024842ff7e087e4c46a05be25", "text": "A centrifugal pump that employs a bearingless motor with 5-axis active control has been developed. In this paper, a novel bearingless canned motor pump is proposed, and differences from the conventional structure are explained. 
A key difference between the proposed and conventional bearingless canned motor pumps is the use of passive magnetic bearings; in the proposed pump, the amount of permanent magnets (PMs) is reduced by 30% and the length of the rotor is shortened. Despite the decrease in the total volume of PMs, the proposed structure can generate large suspension forces and high torque compared with the conventional design by the use of the passive magnetic bearings. In addition, levitation and rotation experiments demonstrated that the proposed motor is suitable for use as a bearingless canned motor pump.", "title": "" }, { "docid": "7368671d20b4f4b30a231d364eb501bc", "text": "In this article, we study the problem of Web user profiling, which is aimed at finding, extracting, and fusing the “semantic”-based user profile from the Web. Previously, Web user profiling was often undertaken by creating a list of keywords for the user, which is (sometimes even highly) insufficient for main applications. This article formalizes the profiling problem as several subtasks: profile extraction, profile integration, and user interest discovery. We propose a combination approach to deal with the profiling tasks. Specifically, we employ a classification model to identify relevant documents for a user from the Web and propose a Tree-Structured Conditional Random Fields (TCRF) to extract the profile information from the identified documents; we propose a unified probabilistic model to deal with the name ambiguity problem (several users with the same name) when integrating the profile information extracted from different sources; finally, we use a probabilistic topic model to model the extracted user profiles, and construct the user interest model. Experimental results on an online system show that the combination approach to different profiling tasks clearly outperforms several baseline methods. The extracted profiles have been applied to expert finding, an important application on the Web. Experiments show that the accuracy of expert finding can be improved (ranging from +6% to +26% in terms of MAP) by taking advantage of the profiles.", "title": "" }, { "docid": "f174469e907b60cd481da6b42bafa5f9", "text": "A static program checker that performs modular checking can check one program module for errors without needing to analyze the entire program. Modular checking requires that each module be accompanied by annotations that specify the module. To help reduce the cost of writing specifications, this paper presents Houdini, an annotation assistant for the modular checker ESC/Java. To infer suitable ESC/Java annotations for a given program, Houdini generates a large number of candidate annotations and uses ESC/Java to verify or refute each of these annotations. The paper describes the design, implementation, and preliminary evaluation of Houdini.", "title": "" }, { "docid": "aa32c46e8d2c5daf2f126b8c5d8b9223", "text": "We demonstrate the application of advanced 3D visualization techniques to determine the optimal implant design and position in hip joint replacement planning. Our methods take as input the physiological stress distribution inside a patient's bone under load and the stress distribution inside this bone under the same load after a simulated replacement surgery. The visualization aims at showing principal stress directions and magnitudes, as well as differences in both distributions. 
By visualizing changes of normal and shear stresses with respect to the principal stress directions of the physiological state, a comparative analysis of the physiological stress distribution and the stress distribution with implant is provided, and the implant parameters that most closely replicate the physiological stress state in order to avoid stress shielding can be determined. Our method combines volume rendering for the visualization of stress magnitudes with the tracing of short line segments for the visualization of stress directions. To improve depth perception, transparent, shaded, and antialiased lines are rendered in correct visibility order, and they are attenuated by the volume rendering. We use a focus+context approach to visually guide the user to relevant regions in the data, and to support a detailed stress analysis in these regions while preserving spatial context information. Since all of our techniques have been realized on the GPU, they can immediately react to changes in the simulated stress tensor field and thus provide an effective means for optimal implant selection and positioning in a computational steering environment.", "title": "" }, { "docid": "f8093849e9157475149d00782c60ae60", "text": "Social media use, potential and challenges in innovation have received little attention in literature, especially from the standpoint of the business-to-business sector. Therefore, this paper focuses on bridging this gap with a survey of social media use, potential and challenges, combined with a social media - focused innovation literature review of state-of-the-art. The study also studies the essential differences between business-to-consumer and business-to-business in the above respects. The paper starts by defining of social media and web 2.0, and then characterizes social media in business, social media in business-to-business sector and social media in business-to-business innovation. Finally we present and analyze the results of our empirical survey of 122 Finnish companies. This paper suggests that there is a significant gap between perceived potential of social media and social media use in innovation activity in business-to-business companies, recognizes potentially effective ways to reduce the gap, and clarifies the found differences between B2B's and B2C's.", "title": "" }, { "docid": "78c477aeb6a27cf5b4de028c0ecd7b43", "text": "This paper addresses the problem of speaker clustering in telephone conversations. Recently, a new clustering algorithm named affinity propagation (AP) is proposed. It exhibits fast execution speed and finds clusters with low error. However, AP is an unsupervised approach which may make the resulting number of clusters different from the actual one. This deteriorates the speaker purity dramatically. This paper proposes a modified method named supervised affinity propagation (SAP), which automatically reruns the AP procedure to make the final number of clusters converge to the specified number. Experiments are carried out to compare SAP with traditional k-means and agglomerative hierarchical clustering on 4-hour summed channel conversations in the NIST 2004 Speaker Recognition Evaluation. Experiment results show that the SAP method leads to a noticeable speaker purity improvement with slight cluster purity decrease compared with AP.", "title": "" }, { "docid": "edfaa4259def05daba17f71ffafac407", "text": "Access control is one of the most important security mechanisms in cloud computing. 
Attributed based encryption provides an approach that allows data owners to integrate data access policies within the encrypted data. However, little work has been done to explore flexible authorization in specifying the data user's privileges and enforcing the data owner's policy in cloud based environments. In this paper, we propose a hierarchical attribute based access control scheme by extending ciphertext-policy attribute-based encryption (CP-ABE) with a hierarchical structure of multiauthorities and exploiting attribute-based signature (ABS). The proposed scheme not only achieves scalability due to its hierarchical structure, but also inherits fine-grained access control with authentication in supporting write privilege on outsourced data in cloud computing. In addition, we decouple the task of policy management from security enforcement by using the extensible access control markup language (XACML) framework. Extensive analysis shows that our scheme is both efficient and scalable in dealing with access control for outsourced data in cloud computing.", "title": "" }, { "docid": "af9768101a634ab57eb2554953ef63ec", "text": "Very recently, there has been a perfect storm of technical advances that has culminated in the emergence of a new interaction modality: on-body interfaces. Such systems enable the wearer to use their body as an input and output platform with interactive graphics. Projects such as PALMbit and Skinput sought to answer the initial and fundamental question: whether or not on-body interfaces were technologically possible. Although considerable technical work remains, we believe it is important to begin shifting the question away from how and what, and towards where, and ultimately why. These are the class of questions that inform the design of next generation systems. To better understand and explore this expansive space, we employed a mixed-methods research process involving more than two thousand individuals. This started with high-resolution, but low-detail crowdsourced data. We then combined this with rich, expert interviews, exploring aspects ranging from aesthetics to kinesthetics. The results of this complimentary, structured exploration, point the way towards more comfortable, efficacious, and enjoyable on-body user experiences.", "title": "" }, { "docid": "3b6cef052cd7a7acc765b44292af51cc", "text": "Minimizing travel time is critical for the successful operation of emergency vehicles. Preemption can significantly help emergency vehicles reach the intended destination faster. Majority of the current studies focus on minimizing and/or eliminating delays for EVs and do not consider the negative impacts of preemption on urban traffic. One primary negative impact is extended delays for non-EV traffic due to preemption that is addressed in this paper. We propose an Adaptive Preemption of Traffic (APT) system for Emergency Vehicles in an Intelligent Transportation System. We utilize the knowledge of current traffic conditions in the transportation system to adaptively preempt traffic at signals along the path of EVs so as to minimize, if not eliminate stopped delays for EVs while simultaneously minimizing the delays for non-emergency vehicles in the system. Through extensive simulation results, we show substantial reduction in delays for both EVs.", "title": "" } ]
scidocsrr
e0551968ae38bf34b3fdc11cc6ee79e9
TCGA Expedition: A Data Acquisition and Management System for TCGA Data
[ { "docid": "cc3788c4690446efe9a0a3eea38ee832", "text": "Papillary thyroid carcinoma (PTC) is the most common type of thyroid cancer. Here, we describe the genomic landscape of 496 PTCs. We observed a low frequency of somatic alterations (relative to other carcinomas) and extended the set of known PTC driver alterations to include EIF1AX, PPM1D, and CHEK2 and diverse gene fusions. These discoveries reduced the fraction of PTC cases with unknown oncogenic driver from 25% to 3.5%. Combined analyses of genomic variants, gene expression, and methylation demonstrated that different driver groups lead to different pathologies with distinct signaling and differentiation characteristics. Similarly, we identified distinct molecular subgroups of BRAF-mutant tumors, and multidimensional analyses highlighted a potential involvement of oncomiRs in less-differentiated subgroups. Our results propose a reclassification of thyroid cancers into molecular subtypes that better reflect their underlying signaling and differentiation properties, which has the potential to improve their pathological classification and better inform the management of the disease.", "title": "" } ]
[ { "docid": "f9eed4f99d70c51dc626a61724540d3c", "text": "A soft-start circuit with soft-recovery function for DC-DC converters is presented in this paper. The soft-start strategy is based on a linearly ramped-up reference and an error amplifier with minimum selector implemented with a three-limb differential pair skillfully. The soft-recovery strategy is based on a compact clamp circuit. The ramp voltage would be clamped once the feedback voltage is detected lower than a threshold, which could control the output to be recovered slowly and linearly. A monolithic DC-DC buck converter with proposed circuit has been fabricated with a 0.5μm CMOS process for validation. The measurement result shows that the ramp-based soft-start and soft-recovery circuit have good performance and agree well with the theoretical analysis.", "title": "" }, { "docid": "0da9197d2f6839d01560b46cbb1fbc8d", "text": "Estimating the traversability of rough terrain is a critical task for an outdoor mobile robot. While classifying structured environment can be learned from large number of training data, it is an extremely difficult task to learn and estimate traversability of unstructured rough terrain. Moreover, in many cases information from a single sensor may not be sufficient for estimating traversability reliably in the absence of artificial landmarks such as lane markings or curbs. Our approach estimates traversability of the terrain and build a 2D probabilistic grid map online using 3D-LIDAR and camera. The combination of LIDAR and camera is favoured in many robotic application because they provide complementary information. Our approach assumes the data captured by these two sensors are independent and build separate traversability maps, each with information captured from one sensor. Traversability estimation with vision sensor autonomously collects training data and update classifier without human intervention as the vehicle traverse the terrain. Traversability estimation with 3D-LIDAR measures the slopes of the ground to predict the traversability. Two independently built probabilistic maps are fused using Bayes' rule to improve the detection performance. This is in contrast with other methods in which each sensor performs different tasks. We have implemented the algorithm on a UGV(Unmanned Ground Vehicle) and tested our approach on a rough terrain to evaluate the detection performance.", "title": "" }, { "docid": "9ebdf3493d6a80d12c97348a2d203d3e", "text": "Agile software development methodologies have been greeted with enthusiasm by many software developers, yet their widespread adoption has also resulted in closer examination of their strengths and weaknesses. While analyses and evaluations abound, the need still remains for an objective and systematic appraisal of Agile processes specifically aimed at defining strategies for their improvement. We provide a review of the strengths and weaknesses identified in Agile processes, based on which a strengths- weaknesses-opportunities-threats (SWOT) analysis of the processes is performed. We suggest this type of analysis as a useful tool for highlighting and addressing the problem issues in Agile processes, since the results can be used as improvement strategies.", "title": "" }, { "docid": "4476e4616e727c9c0f003acebb1a4933", "text": "We argue that the optimization plays a crucial role in generalization of deep learning models through implicit regularization. 
We do this by demonstrating that generalization ability is not controlled by network size but rather by some other implicit control. We then demonstrate how changing the empirical optimization procedure can improve generalization, even if actual optimization quality is not affected. We do so by studying the geometry of the parameter space of deep networks, and devising an optimization algorithm attuned to this geometry.", "title": "" }, { "docid": "f7c92b4342944a1f937f19b144a61d8a", "text": "Randomization in randomized controlled trials involves more than generation of a random sequence by which to assign subjects. For randomization to be successfully implemented, the randomization sequence must be adequately protected (concealed) so that investigators, involved health care providers, and subjects are not aware of the upcoming assignment. The absence of adequate allocation concealment can lead to selection bias, one of the very problems that randomization was supposed to eliminate. Authors of reports of randomized trials should provide enough details on how allocation concealment was achieved so the reader can determine the likelihood of success. Fortunately, a plan of allocation concealment can always be incorporated into the design of a randomized trial. Certain methods minimize the risk of concealment failing more than others. Keeping knowledge of subjects' assignment after allocation from subjects, investigators/health care providers, or those assessing outcomes is referred to as masking (also known as blinding). The goal of masking is to prevent ascertainment bias. In contrast to allocation concealment, masking cannot always be incorporated into a randomized controlled trial. Both allocation concealment and masking add to the elimination of bias in randomized controlled trials.", "title": "" }, { "docid": "58917e3cbb1542185ac1af9edcf950eb", "text": "The Energy Committee of the Royal Swedish Academy of Sciences has in a series of projects gathered information and knowledge on renewable energy from various sources, both within and outside the academic world. In this article, we synthesize and summarize some of the main points on renewable energy from the various Energy Committee projects and the Committee’s Energy 2050 symposium, regarding energy from water and wind, bioenergy, and solar energy. We further summarize the Energy Committee’s scenario estimates of future renewable energy contributions to the global energy system, and other presentations given at the Energy 2050 symposium. In general, international coordination and investment in energy research and development is crucial to enable future reliance on renewable energy sources with minimal fossil fuel use.", "title": "" }, { "docid": "4d5820e9e137c96d4d63e25772c577c6", "text": "facial topography clinical anatomy of the face upsky facial topography: clinical anatomy of the face by joel e facial topography clinical anatomy of the face [c796.ebook] free ebook facial topography: clinical the anatomy of the aging face: volume loss and changes in facial topographyclinical anatomy of the face ebook facial anatomy mccc dmca / copyrighted works removal title anatomy for plastic surgery thieme medical publishers the face sample quintessence publishing! 
facial anatomy 3aface academy facial topography clinical anatomy of the face ebook download the face der medizinverlag facial topography clinical anatomy of the face liive facial topography clinical anatomy of the face user clinical anatomy of the head univerzita karlova pdf download the face: pictorial atlas of clinical anatomy clinical anatomy anatomic landmarks for localisation of j m perry co v commissioner internal bouga international journal of anatomy and research, case report anatomy and physiology of the aging neck the clinics topographical anatomy of the head eng nikolaizarovo crc title list: change of ownership a guide to childrens books about asian americans fractography: observing, measuring and interpreting nystce students with disabilities study guide tibca army ranger survival guide compax sharp grill 2 convection manual iwsun nursing diagnosis handbook 9th edition apa citation the surgical management of facial nerve injury lipteh the outermost house alongz cosmetic voted best plastic surgeon in dallas texas c tait a dachau 1933 1945 teleip select your ebook amazon s3 quotation of books all india institute of medical latest ten anatomy acquisitions british dental association lindens complete auto repair reviews mires department of topographic anatomy and operative surgery", "title": "" }, { "docid": "b2895d35c6ffddfb9adc7c1d88cef793", "text": "We develop algorithms for a stochastic appointment sequencing and scheduling problem with waiting time, idle time, and overtime costs. Scheduling surgeries in an operating room motivates the work. The problem is formulated as an integer stochastic program using sample average approximation. A heuristic solution approach based on Benders’ decomposition is developed and compared to exact methods and to previously proposed approaches. Extensive computational testing based on real data shows that the proposed methods produce good results compared to previous approaches. In addition we prove that the finite scenario sample average approximation problem is NP-complete.", "title": "" }, { "docid": "2545a267cedac5924ecfceeddc01a4dc", "text": "The Transport Layer Security (TLS) protocol is a de facto standard of secure client-server communication on the Internet. Its security can be diminished by a variety of attacks that leverage on weaknesses in its design and implementations. An example of a major weakness is the public-key infrastructure (PKI) that TLS deploys, which is a weakest-link system and introduces hundreds of links (i.e., trusted entities). Consequently, an adversary compromising a single trusted entity can impersonate any website. Notary systems, based on multi-path probing, were early and promising proposals to detect and prevent such attacks. Unfortunately, despite their benefits, they are not widely deployed, mainly due to their long-standing unresolved problems. In this paper, we present Persistent and Accountable Domain Validation (PADVA), which is a next-generation TLS notary service. PADVA combines the advantages of previous proposals, enhancing them, introducing novel mechanisms, and leveraging a blockchain platform which provides new features. PADVA keeps notaries auditable and accountable, introduces service-level agreements and mechanisms to enforce them, relaxes availability requirements for notaries, and works with the legacy TLS ecosystem. 
We implemented and evaluated PADVA, and our experiments indicate its efficiency and deployability.", "title": "" }, { "docid": "355fca41993ea19b08d2a9fc19e25722", "text": "People and companies selling goods or providing services have always desired to know what people think about their products. The number of opinions on the Web has significantly increased with the emergence of microblogs. In this paper we present a novel method for sentiment analysis of a text that allows the recognition of opinions in microblogs which are connected to a particular target or an entity. This method differs from other approaches in utilizing appraisal theory, which we employ for the analysis of microblog posts. The results of the experiments we performed on Twitter showed that our method improves sentiment classification and is feasible even for such specific content as presented on microblogs.", "title": "" }, { "docid": "6e4f0a770fe2a34f99957f252110b6bd", "text": "Universal Dependencies (UD) provides a cross-linguistically uniform syntactic representation, with the aim of advancing multilingual applications of parsing and natural language understanding. Reddy et al. (2016) recently developed a semantic interface for (English) Stanford Dependencies, based on the lambda calculus. In this work, we introduce UDEPLAMBDA, a similar semantic interface for UD, which allows mapping natural language to logical forms in an almost language-independent framework. We evaluate our approach on semantic parsing for the task of question answering against Freebase. To facilitate multilingual evaluation, we provide German and Spanish translations of the WebQuestions and GraphQuestions datasets. Results show that UDEPLAMBDA outperforms strong baselines across languages and datasets. For English, it achieves the strongest result to date on GraphQuestions, with competitive results on WebQuestions.", "title": "" }, { "docid": "95d767d1b9a2ba2aecdf26443b3dd4af", "text": "Advanced sensing and measurement techniques are key technologies to realize a smart grid. The giant magnetoresistance (GMR) effect has revolutionized the fields of data storage and magnetic measurement. In this work, a design of a GMR current sensor based on a commercial analog GMR chip for applications in a smart grid is presented and discussed. Static, dynamic and thermal properties of the sensor were characterized. The characterizations showed that in the operation range from 0 to ±5 A, the sensor had a sensitivity of 28 mV·A(-1), linearity of 99.97%, maximum deviation of 2.717%, frequency response of −1.5 dB at 10 kHz current measurement, and maximum change of the amplitude response of 0.0335%·°C(-1) with thermal compensation. In the distributed real-time measurement and monitoring of a smart grid system, the GMR current sensor shows excellent performance and is cost effective, making it suitable for applications such as steady-state and transient-state monitoring. With the advantages of having a high sensitivity, high linearity, small volume, low cost, and simple structure, the GMR current sensor is promising for the measurement and monitoring of smart grids.", "title": "" }, { "docid": "830240e9425b93c354cb9a2be0378961", "text": "Systems for structured knowledge extraction and inference have made giant strides in the last decade. 
Starting from shallow linguistic tagging and coarse-grained recognition of named entities at the resolution of people, places, organizations, and times, modern systems link billions of pages of unstructured text with knowledge graphs having hundreds of millions of entities belonging to tens of thousands of types, and related by tens of thousands of relations. Via deep learning, systems build continuous representations of words, entities, types, and relations, and use these to continually discover new facts to add to the knowledge graph, and support search systems that go far beyond page-level \"ten blue links''. We will present a comprehensive catalog of the best practices in traditional and deep knowledge extraction, inference and search. We will trace the development of diverse families of techniques, explore their interrelationships, and point out various loose ends.", "title": "" }, { "docid": "4720a84220e37eca1d0c75697f247b23", "text": "We describe a form of nonlinear decomposition that is well-suited for efficient encoding of natural signals. Signals are initially decomposed using a bank of linear filters. Each filter response is then rectified and divided by a weighted sum of rectified responses of neighboring filters. We show that this decomposition, with parameters optimized for the statistics of a generic ensemble of natural images or sounds, provides a good characterization of the nonlinear response properties of typical neurons in primary visual cortex or auditory nerve, respectively. These results suggest that nonlinear response properties of sensory neurons are not an accident of biological implementation, but have an important functional role.", "title": "" }, { "docid": "26cc16cfb31222c7f800ac75a9cbbd13", "text": "In the WZ factorization the outermost parallel loop decreases the number of iterations executed at each step and this changes the amount of parallelism in each step. The aim of the paper is to present four strategies of parallelizing nested loops on multicore architectures on the example of the WZ factorization.", "title": "" }, { "docid": "227a6e820b101073d5621b2f399883a5", "text": "Studying the quality requirements (aka Non-Functional Requirements (NFR)) of a system is crucial in Requirements Engineering. Many software projects fail because of neglecting or failing to incorporate the NFR during the software life development cycle. This paper focuses on analyzing the importance of the quality requirements attributes in software effort estimation models based on the Desharnais dataset. The Desharnais dataset is a collection of eighty one software projects of twelve attributes developed by a Canadian software house. The analysis includes studying the influence of each of the quality requirements attributes, as well as the influence of all quality requirements attributes combined when calculating software effort using regression and Artificial Neural Network (ANN) models. The evaluation criteria used in this investigation include the Mean of the Magnitude of Relative Error (MMRE), the Prediction Level (PRED), Root Mean Squared Error (RMSE), Mean Error and the Coefficient of determination (R). Results show that the quality attribute “Language” is the most statistically significant when calculating software effort. Moreover, if all quality requirements attributes are eliminated in the training stage and software effort is predicted based on software size only, the value of the error (MMRE) is doubled. 
KeywordsNon-Functional Requirements, Quality Attributes, Software Effort Estimation, Desharnais Dataset", "title": "" }, { "docid": "3d04155f68912f84b02788f93e9da74c", "text": "Data partitioning significantly improves the query performance in distributed database systems. A large number of techniques have been proposed to efficiently partition a dataset for a given query workload. However, many modern analytic applications involve ad-hoc or exploratory analysis where users do not have a representative query workload upfront. Furthermore, workloads change over time as businesses evolve or as analysts gain better understanding of their data. Static workload-based data partitioning techniques are therefore not suitable for such settings. In this paper, we describe the demonstration of Amoeba, a distributed storage system which uses adaptive multi-attribute data partitioning to efficiently support ad-hoc as well as recurring queries. Amoeba applies a robust partitioning algorithm such that ad-hoc queries on all attributes have similar performance gains. Thereafter, Amoeba adaptively repartitions the data based on the observed query sequence, i.e., the system improves over time. All along Amoeba offers both adaptivity (i.e., adjustments according to workload changes) as well as robustness (i.e., avoiding performance spikes due to workload changes). We propose to demonstrate Amoeba on scenarios from an internet-ofthings startup that tracks user driving patterns. We invite the audience to interactively fire fast ad-hoc queries, observe multi-dimensional adaptivity, and play with a robust/reactive knob in Amoeba. The web front end displays the layout changes, runtime costs, and compares it to Spark with both default and workload-aware partitioning.", "title": "" }, { "docid": "66fce3b6c516a4fa4281d19d6055b338", "text": "This paper presents the mechatronic design and experimental validation of a novel powered knee-ankle orthosis for testing torque-driven rehabilitation control strategies. The modular actuator of the orthosis is designed with a torque dense motor and a custom low-ratio transmission (24:1) to provide mechanical transparency to the user, allowing them to actively contribute to their joint kinematics during gait training. The 4.88 kg orthosis utilizes frameless components and light materials, such as aluminum alloy and carbon fiber, to reduce its mass. A human subject experiment demonstrates accurate torque control with high output torque during stance and low backdrive torque during swing at fast walking speeds. This work shows that backdrivability, precise torque control, high torque output, and light weight can be achieved in a powered orthosis without the high cost and complexity of variable transmissions, clutches, and/or series elastic components.", "title": "" }, { "docid": "be5b0dd659434e77ce47034a51fd2767", "text": "Current obstacles in the study of social media marketing include dealing with massive data and real-time updates have motivated to contribute solutions that can be adopted for viral marketing. Since information diffusion and social networks are the core of viral marketing, this article aims to investigate the constellation of diffusion methods for viral marketing. Studies on diffusion methods for viral marketing have applied different computational methods, but a systematic investigation of these methods has limited. Most of the literature have focused on achieving objectives such as influence maximization or community detection. 
Therefore, this article aims to conduct an in-depth review of works related to diffusion for viral marketing. Viral marketing has applied to business-to-consumer transactions but has seen limited adoption in business-to-business transactions. The literature review reveals a lack of new diffusion methods, especially in dynamic and large-scale networks. It also offers insights into applying various mining methods for viral marketing. It discusses some of the challenges, limitations, and future research directions of information diffusion for viral marketing. The article also introduces a viral marketing information diffusion model. The proposed model attempts to solve the dynamicity and large-scale data of social networks by adopting incremental clustering and a stochastic differential equation for business-to-business transactions. Keywords—information diffusion; viral marketing; social media marketing; social networks", "title": "" }, { "docid": "5d704992b738a084215f520ed8074d6b", "text": "Recognizing and generating paraphrases is an important component in many natural language processing applications. A wellestablished technique for automatically extracting paraphrases leverages bilingual corpora to find meaning-equivalent phrases in a single language by “pivoting” over a shared translation in another language. In this paper we revisit bilingual pivoting in the context of neural machine translation and present a paraphrasing model based purely on neural networks. Our model represents paraphrases in a continuous space, estimates the degree of semantic relatedness between text segments of arbitrary length, or generates candidate paraphrases for any source input. Experimental results across tasks and datasets show that neural paraphrases outperform those obtained with conventional phrase-based pivoting approaches.", "title": "" } ]
scidocsrr
ebfd1d12f8f1dc683b8f95c46cb5881d
PyMT: a post-WIMP multi-touch user interface toolkit
[ { "docid": "b992e02ee3366d048bbb4c30a2bf822c", "text": "Structured graphics models such as Scalable Vector Graphics (SVG) enable designers to create visually rich graphics for user interfaces. Unfortunately current programming tools make it difficult to implement advanced interaction techniques for these interfaces. This paper presents the Hierarchical State Machine Toolkit (HsmTk), a toolkit targeting the development of rich interactions. The key aspect of the toolkit is to consider interactions as first-class objects and to specify them with hierarchical state machines. This approach makes the resulting behaviors self-contained, easy to reuse and easy to modify. Interactions can be attached to graphical elements without knowing their detailed structure, supporting the parallel refinement of the graphics and the interaction.", "title": "" }, { "docid": "f69ba8c401cd61057888dfa023bfee30", "text": "Since its introduction, the Nintendo Wii remote has become one of the world's most sophisticated and common input devices. Combining its impressive capability with a low cost and high degree of accessibility make it an ideal platform for exploring a variety of interaction research concepts. The author describes the technology inside the Wii remote, existing interaction techniques, what's involved in creating custom applications, and several projects ranging from multiobject tracking to spatial augmented reality that challenge the way its developers meant it to be used.", "title": "" } ]
[ { "docid": "9b9a2a9695f90a6a9a0d800192dd76f6", "text": "Due to high competition in today's business and the need for satisfactory communication with customers, companies understand the inevitable necessity to focus not only on preventing customer churn but also on predicting their needs and providing the best services for them. The purpose of this article is to predict future services needed by wireless users, with data mining techniques. For this purpose, the database of customers of an ISP in Shiraz, which logs the customer usage of wireless internet connections, is utilized. Since internet service has three main factors to define (Time, Speed, Traffics) we predict each separately. First, future service demand is predicted by implementing a simple Recency, Frequency, Monetary (RFM) as a basic model. Other factors such as duration from first use, slope of customer's usage curve, percentage of activation, Bytes In, Bytes Out and the number of retries to establish a connection and also customer lifetime value are considered and added to RFM model. Then each one of R, F, M criteria is alternately omitted and the result is evaluated. Assessment is done through analysis node which determines the accuracy of evaluated data among partitioned data. The result shows that CART and C5.0 are the best algorithms to predict future services in this case. As for the features, depending upon output of each features, duration and transfer Bytes are the most important after RFM. An ISP may use the model discussed in this article to meet customers' demands and ensure their loyalty and satisfaction.", "title": "" }, { "docid": "3b2aa97c0232857dffa971d9c040d430", "text": "This paper provides a critical analysis of Mobile Learning projects published before the end of 2007. The review uses a Mobile Learning framework to evaluate and categorize 102 Mobile Learning projects, and to briefly introduce exemplary projects for each category. All projects were analysed with the criteria: context, tools, control, communication, subject and objective. Although a significant number of projects have ventured to incorporate the physical context into the learning experience, few projects include a socializing context. Tool support ranges from pure content delivery to content construction by the learners. Although few projects explicitly discuss the Mobile Learning control issues, one can find all approaches from pure teacher control to learner control. Despite the fact that mobile phones initially started as a communication device, communication and collaboration play a surprisingly small role in Mobile Learning projects. Most Mobile Learning projects support novices, although one might argue that the largest potential is supporting advanced learners. All results show the design space and reveal gaps in Mobile Learning research.", "title": "" }, { "docid": "903d00a02846450ebd18a8ce865889b5", "text": "The ability to solve probability word problems such as those found in introductory discrete mathematics textbooks, is an important cognitive and intellectual skill. In this paper, we develop a two-step endto-end fully automated approach for solving such questions that is able to automatically provide answers to exercises about probability formulated in natural language. In the first step, a question formulated in natural language is analysed and transformed into a highlevel model specified in a declarative language. In the second step, a solution to the high-level model is computed using a probabilistic programming system. 
On a dataset of 2160 probability problems, our solver is able to correctly answer 97.5% of the questions given a correct model. On the end-toend evaluation, we are able to answer 12.5% of the questions (or 31.1% if we exclude examples not supported by design).", "title": "" }, { "docid": "36e4a38e31c7715cd8f7754076b89223", "text": "We investigate the effectiveness of semantic generalizations/classifications for capturing the regularities of the behavior of verbs in terms of their metaphoricity. Starting from orthographic word unigrams, we experiment with various ways of defining semantic classes for verbs (grammatical, resource-based, distributional) and measure the effectiveness of these classes for classifying all verbs in a running text as metaphor or non metaphor.", "title": "" }, { "docid": "450a0ffcd35400f586e766d68b75cc98", "text": "While there has been a success in 2D human pose estimation with convolutional neural networks (CNNs), 3D human pose estimation has not been thoroughly studied. In this paper, we tackle the 3D human pose estimation task with end-to-end learning using CNNs. Relative 3D positions between one joint and the other joints are learned via CNNs. The proposed method improves the performance of CNN with two novel ideas. First, we added 2D pose information to estimate a 3D pose from an image by concatenating 2D pose estimation result with the features from an image. Second, we have found that more accurate 3D poses are obtained by combining information on relative positions with respect to multiple joints, instead of just one root joint. Experimental results show that the proposed method achieves comparable performance to the state-of-the-art methods on Human 3.6m dataset.", "title": "" }, { "docid": "be95384cb710593dd7c620becff334be", "text": "1College of Mathematics and Informatics, Fujian Normal University, Fuzhou 350117, Fujian, China 2College of Computer and Information, Hohai University, Nanjing 211100, Jiangsu, China 3Jiangsu Key Laboratory of Big Data Security & Intelligent Processing, Nanjing University of Posts and Telecommunications, China 4Mathematics and Computer Science Department, Gannan Normal University, Ganzhou 341000, Jiangxi, China", "title": "" }, { "docid": "23bf81699add38814461d5ac3e6e33db", "text": "This paper examined a steering behavior based fatigue monitoring system. The advantages of using steering behavior for detecting fatigue are that these systems measure continuously, cheaply, non-intrusively, and robustly even under extremely demanding environmental conditions. The expected fatigue induced changes in steering behavior are a pattern of slow drifting and fast corrective counter steering. Using advanced signal processing procedures for feature extraction, we computed 3 feature set in the time, frequency and state space domain (a total number of 1251 features) to capture fatigue impaired steering patterns. Each feature set was separately fed into 5 machine learning methods (e.g. Support Vector Machine, K-Nearest Neighbor). The outputs of each single classifier were combined to an ensemble classification value. Finally we combined the ensemble values of 3 feature subsets to a of meta-ensemble classification value. To validate the steering behavior analysis, driving samples are taken from a driving simulator during a sleep deprivation study (N=12). 
We yielded a recognition rate of 86.1% in classifying slight from strong fatigue.", "title": "" }, { "docid": "49b0ba019f6f968804608aeacec2a959", "text": "In this paper, we introduce a novel problem of audio-visual event localization in unconstrained videos. We define an audio-visual event as an event that is both visible and audible in a video segment. We collect an Audio-Visual Event (AVE) dataset to systemically investigate three temporal localization tasks: supervised and weakly-supervised audio-visual event localization, and cross-modality localization. We develop an audio-guided visual attention mechanism to explore audio-visual correlations, propose a dual multimodal residual network (DMRN) to fuse information over the two modalities, and introduce an audio-visual distance learning network to handle the cross-modality localization. Our experiments support the following findings: joint modeling of auditory and visual modalities outperforms independent modeling, the learned attention can capture semantics of sounding objects, temporal alignment is important for audio-visual fusion, the proposed DMRN is effective in fusing audio-visual features, and strong correlations between the two modalities enable cross-modality localization.", "title": "" }, { "docid": "a81b08428081cd15e7c705d5a6e79a6f", "text": "Attributes act as intermediate representations that enable parameter sharing between classes, a must when training data is scarce. We propose to view attribute-based image classification as a label-embedding problem: each class is embedded in the space of attribute vectors. We introduce a function that measures the compatibility between an image and a label embedding. The parameters of this function are learned on a training set of labeled samples to ensure that, given an image, the correct classes rank higher than the incorrect ones. Results on the Animals With Attributes and Caltech-UCSD-Birds datasets show that the proposed framework outperforms the standard Direct Attribute Prediction baseline in a zero-shot learning scenario. Label embedding enjoys a built-in ability to leverage alternative sources of information instead of or in addition to attributes, such as, e.g., class hierarchies or textual descriptions. Moreover, label embedding encompasses the whole range of learning settings from zero-shot learning to regular learning with a large number of labeled examples.", "title": "" }, { "docid": "376c9736ccd7823441fd62c46eee0242", "text": "Description: Infrastructure for Homeland Security Environments Wireless Sensor Networks helps readers discover the emerging field of low-cost standards-based sensors that promise a high order of spatial and temporal resolution and accuracy in an ever-increasing universe of applications. It shares the latest advances in science and engineering paving the way towards a large plethora of new applications in such areas as infrastructure protection and security, healthcare, energy, food safety, RFID, ZigBee, and processing. Unlike other books on wireless sensor networks that focus on limited topics in the field, this book is a broad introduction that covers all the major technology, standards, and application topics. It contains everything readers need to know to enter this burgeoning field, including current applications and promising research and development; communication and networking protocols; middleware architecture for wireless sensor networks; and security and management. 
The straightforward and engaging writing style of this book makes even complex concepts and processes easy to follow and understand. In addition, it offers several features that help readers grasp the material and then apply their knowledge in designing their own wireless sensor network systems: Examples illustrate how concepts are applied to the development and application of wireless sensor networks Detailed case studies set forth all the steps of design and implementation needed to solve real-world problems Chapter conclusions that serve as an excellent review by stressing the chapter's key concepts References in each chapter guide readers to in-depth discussions of individual topics This book is ideal for networking designers and engineers who want to fully exploit this new technology and for government employees who are concerned about homeland security. With its examples, it is appropriate for use as a coursebook for upper-level undergraduates and graduate students.", "title": "" }, { "docid": "2756c08346bfeafaed177a6bf1fde09e", "text": "Current implementations of Internet systems are very hard to be upgraded. The ossification of existing standards restricts the development of more advanced communication systems. New research initiatives, such as virtualization, software-defined radios, and software-defined networks, allow more flexibility for networks. However, until now, those initiatives have been developed individually. We advocate that the convergence of these overlying and complementary technologies can expand the amount of programmability on the network and support different innovative applications. Hence, this paper surveys the most recent research initiatives on programmable networks. We characterize programmable networks, where programmable devices execute specific code, and the network is separated into three planes: data, control, and management planes. We discuss the modern programmable network architectures, emphasizing their research issues, and, when possible, highlight their practical implementations. We survey the wireless and wired elements on the programmable data plane. Next, on the programmable control plane, we survey the divisor and controller elements. We conclude with final considerations, open issues and future challenges.", "title": "" }, { "docid": "3c1c9644df655b2a96fc593bd2982da2", "text": "We present the IIT Bombay English-Hindi Parallel Corpus. The corpus is a compilation of parallel corpora previously available in the public domain as well as new parallel corpora we collected. The corpus contains 1.49 million parallel segments, of which 694k segments were not previously available in the public domain. The corpus has been pre-processed for machine translation, and we report baseline phrase-based SMT and NMT translation results on this corpus. This corpus has been used in two editions of shared tasks at the Workshop on Asian Language Translation (2016 and 2017). The corpus is freely available for non-commercial research. To the best of our knowledge, this is the largest publicly available English-Hindi parallel corpus.", "title": "" }, { "docid": "688fde854293b0902911d967c5e0a906", "text": "As Internet users increasingly rely on social media sites like Facebook and Twitter to receive news, they are faced with a bewildering number of news media choices. For example, thousands of Facebook pages today are registered and categorized as some form of news media outlets. 
Inferring the bias (or slant) of these media pages poses a difficult challenge for media watchdog organizations that traditionally rely on con-", "title": "" }, { "docid": "cf1431a2f97fae07128ebac0c727941c", "text": "Laser microscopy has generally poor temporal resolution, caused by the serial scanning of each pixel. This is a significant problem for imaging or optically manipulating neural circuits, since neuronal activity is fast. To help surmount this limitation, we have developed a \"scanless\" microscope that does not contain mechanically moving parts. This microscope uses a diffractive spatial light modulator (SLM) to shape an incoming two-photon laser beam into any arbitrary light pattern. This allows the simultaneous imaging or photostimulation of different regions of a sample with three-dimensional precision. To demonstrate the usefulness of this microscope, we perform two-photon uncaging of glutamate to activate dendritic spines and cortical neurons in brain slices. We also use it to carry out fast (60 Hz) two-photon calcium imaging of action potentials in neuronal populations. Thus, SLM microscopy appears to be a powerful tool for imaging and optically manipulating neurons and neuronal circuits. Moreover, the use of SLMs expands the flexibility of laser microscopy, as it can substitute traditional simple fixed lenses with any calculated lens function.", "title": "" }, { "docid": "a56efa3471bb9e3091fffc6b1585f689", "text": "Rogowski current transducers combine a high bandwidth, an easy to use thin flexible coil, and low insertion impedance making them an ideal device for measuring pulsed currents in power electronic applications. Practical verification of a Rogowski transducer's ability to measure current transients due to the fastest MOSFET and IGBT switching requires a calibrated test facility capable of generating a pulse with a rise time of the order of a few 10's ns. A flexible 8-module system has been built which gives a 2000A peak current with a rise time of 40ns. The modular approach enables verification for a range of transducer coil sizes and ratings.", "title": "" }, { "docid": "40fe24e70fd1be847e9f89b82ff75b28", "text": "Multi-task learning in Convolutional Networks has displayed remarkable success in the field of recognition. This success can be largely attributed to learning shared representations from multiple supervisory tasks. However, existing multi-task approaches rely on enumerating multiple network architectures specific to the tasks at hand, that do not generalize. In this paper, we propose a principled approach to learn shared representations in ConvNets using multitask learning. Specifically, we propose a new sharing unit: \"cross-stitch\" unit. These units combine the activations from multiple networks and can be trained end-to-end. A network with cross-stitch units can learn an optimal combination of shared and task-specific representations. Our proposed method generalizes across multiple tasks and shows dramatically improved performance over baseline methods for categories with few training examples.", "title": "" }, { "docid": "ae961e9267b1571ec606347f56b0d4ca", "text": "A benchmark turbulent Backward Facing Step (BFS) airflow was studied in detail through a program of tightly coupled experimental and CFD analysis. The theoretical and experimental approaches were developed simultaneously in a “building block” approach and the results used to verify each “block”. 
Information from both CFD and experiment was used to develop confidence in the accuracy of each technique and to increase our understanding of the BFS flow.", "title": "" }, { "docid": "0fdd7f5c5cd1225567e89b456ef25ea0", "text": "In this work we propose a hierarchical approach for labeling semantic objects and regions in scenes. Our approach is reminiscent of early vision literature in that we use a decomposition of the image in order to encode relational and spatial information. In contrast to much existing work on structured prediction for scene understanding, we bypass a global probabilistic model and instead directly train a hierarchical inference procedure inspired by the message passing mechanics of some approximate inference procedures in graphical models. This approach mitigates both the theoretical and empirical difficulties of learning probabilistic models when exact inference is intractable. In particular, we draw from recent work in machine learning and break the complex inference process into a hierarchical series of simple machine learning subproblems. Each subproblem in the hierarchy is designed to capture the image and contextual statistics in the scene. This hierarchy spans coarse-to-fine regions and explicitly models the mixtures of semantic labels that may be present due to imperfect segmentation. To avoid cascading of errors and overfitting, we train the learning problems in sequence to ensure robustness to likely errors earlier in the inference sequence and leverage the stacking approach developed by Cohen et al.", "title": "" }, { "docid": "109644763e3a5ee5f59ec8e83719cc8d", "text": "The field of Natural Language Processing (NLP) is growing rapidly, with new research published daily along with an abundance of tutorials, codebases and other online resources. In order to learn this dynamic field or stay up-to-date on the latest research, students as well as educators and researchers must constantly sift through multiple sources to find valuable, relevant information. To address this situation, we introduce TutorialBank, a new, publicly available dataset which aims to facilitate NLP education and research. We have manually collected and categorized over 6,300 resources on NLP as well as the related fields of Artificial Intelligence (AI), Machine Learning (ML) and Information Retrieval (IR). Our dataset is notably the largest manually-picked corpus of resources intended for NLP education which does not include only academic papers. Additionally, we have created both a search engine 1 and a command-line tool for the resources and have annotated the corpus to include lists of research topics, relevant resources for each topic, prerequisite relations among topics, relevant subparts of individual resources, among other annotations. We are releasing the dataset and present several avenues for further research.", "title": "" }, { "docid": "851a966bbfee843e5ae1eaf21482ef87", "text": "The Pittsburgh Sleep Quality Index (PSQI) is a widely used measure of sleep quality in adolescents, but information regarding its psychometric strengths and weaknesses in this population is limited. In particular, questions remain regarding whether it measures one or two sleep quality domains. The aims of the present study were to (a) adapt the PSQI for use in adolescents and young adults, and (b) evaluate the psychometric properties of the adapted measure in this population. 
The PSQI was slightly modified to make it more appropriate for use in youth populations and was translated into Spanish for administration to the sample population available to the study investigators. It was then administered with validity criterion measures to a community-based sample of Spanish adolescents and young adults (AYA) between 14 and 24 years old (N = 216). The results indicated that the questionnaire (AYA-PSQI-S) assesses a single factor. The total score evidenced good convergent and divergent validity and moderate reliability (Cronbach's alpha = .72). The AYA-PSQI-S demonstrates adequate psychometric properties for use in clinical trials involving adolescents and young adults. Additional research to further evaluate the reliability and validity of the measure for use in clinical settings is warranted.", "title": "" } ]
scidocsrr
c5635211f4b70ed2d9f4e5c7e90d6f99
What Do Different Evaluation Metrics Tell Us About Saliency Models?
[ { "docid": "289694f2395a6a2afc7d86d475b9c02d", "text": "Recently, large breakthroughs have been observed in saliency modeling. The top scores on saliency benchmarks have become dominated by neural network models of saliency, and some evaluation scores have begun to saturate. Large jumps in performance relative to previous models can be found across datasets, image types, and evaluation metrics. Have saliency models begun to converge on human performance? In this paper, we re-examine the current state-of-the-art using a finegrained analysis on image types, individual images, and image regions. Using experiments to gather annotations for high-density regions of human eye fixations on images in two established saliency datasets, MIT300 and CAT2000, we quantify up to 60% of the remaining errors of saliency models. We argue that to continue to approach human-level performance, saliency models will need to discover higher-level concepts in images: text, objects of gaze and action, locations of motion, and expected locations of people in images. Moreover, they will need to reason about the relative importance of image regions, such as focusing on the most important person in the room or the most informative sign on the road. More accurately tracking performance will require finer-grained evaluations and metrics. Pushing performance further will require higher-level image understanding.", "title": "" }, { "docid": "37a8fe29046ec94d54e62f202a961129", "text": "Detection of salient image regions is useful for applications like image segmentation, adaptive compression, and region-based image retrieval. In this paper we present a novel method to determine salient regions in images using low-level features of luminance and color. The method is fast, easy to implement and generates high quality saliency maps of the same size and resolution as the input image. We demonstrate the use of the algorithm in the segmentation of semantically meaningful whole objects from digital images.", "title": "" } ]
[ { "docid": "a066ff1b4dfa65a67b79200366021542", "text": "OBJECTIVES\nWe sought to assess the shave biopsy technique, which is a new surgical procedure for complete removal of longitudinal melanonychia. We evaluated the quality of the specimen submitted for pathological examination, assessed the postoperative outcome, and ascertained its indication between the other types of matrix biopsies.\n\n\nDESIGN\nThis was a retrospective study performed at the dermatologic departments of the Universities of Liège and Brussels, Belgium, of 30 patients with longitudinal or total melanonychia.\n\n\nRESULTS\nPathological diagnosis was made in all cases; 23 patients were followed up during a period of 6 to 40 months. Seventeen patients had no postoperative nail plate dystrophy (74%) but 16 patients had recurrence of pigmentation (70%).\n\n\nLIMITATIONS\nThis was a retrospective study.\n\n\nCONCLUSIONS\nShave biopsy is an effective technique for dealing with nail matrix lesions that cause longitudinal melanonychia over 4 mm wide. Recurrence of pigmentation is the main drawback of the procedure.", "title": "" }, { "docid": "72d38fa8fc9ff402b3ee422a9967e537", "text": "With the continuing growth of modern communications technology, demand for image transmission and storage is increasing rapidly. Advances in computer technology for mass storage and digital processing have paved the way for implementing advanced data compression techniques to improve the efficiency of transmission and storage of images. In this paper a large variety of algorithms for image data compression are considered. Starting with simple techniques of sampling and pulse code modulation (PCM), state of the art algorithms for two-dimensional data transmission are reviewed. Topics covered include differential PCM (DPCM) and predictive coding, transform coding, hybrid coding, interframe coding, adaptive techniques, and applications. Effects of channel errors and other miscellaneous related topics are also considered. While most of the examples and image models have been specialized for visual images, the techniques discussed here could be easily adapted more generally for multidimensional data compression. Our emphasis here is on fundamentals of the various techniques. A comprehensive bibliography with comments is included for a reader interested in further details of the theoretical and experimental results discussed here.", "title": "" }, { "docid": "632fd895e8920cd9b25b79c9d4bd4ef4", "text": "In minimally invasive surgery, instruments are inserted from the exterior of the patient’s body into the surgical field inside the body through the minimum incision, resulting in limited visibility, accessibility, and dexterity. To address this problem, surgical instruments with articulated joints and multiple degrees of freedom have been developed. The articulations in currently available surgical instruments use mainly wire or link mechanisms. These mechanisms are generally robust and reliable, but the miniaturization of the mechanical parts required often results in problems with size, weight, durability, mechanical play, sterilization, and assembly costs. We thus introduced a compliant mechanism to a laparoscopic surgical instrument with multiple degrees of freedom at the tip. To show the feasibility of the concept, we developed a prototype with two degrees of freedom articulated surgical instruments that can perform the grasping and bending movements. 
The developed prototype is roughly the same size of the conventional laparoscopic instrument, within the diameter of 4 mm. The elastic parts were fabricated by Ni-Ti alloy and SK-85M, rigid parts ware fabricated by stainless steel, covered by 3D- printed ABS resin. The prototype was designed using iterative finite element method analysis, and has a minimal number of mechanical parts. The prototype showed hysteresis in grasping movement presumably due to the friction; however, the prototype showed promising mechanical characteristics and was fully functional in two degrees of freedom. In addition, the prototype was capable to exert over 15 N grasping that is sufficient for the general laparoscopic procedure. The evaluation tests thus positively showed the concept of the proposed mechanism. The prototype showed promising characteristics in the given mechanical evaluation experiments. Use of a compliant mechanism such as in our prototype may contribute to the advancement of surgical instruments in terms of simplicity, size, weight, dexterity, and affordability.", "title": "" }, { "docid": "84569374aa1adb152aee714d053b082d", "text": "PURPOSE\nTo describe the insertions of the superficial medial collateral ligament (sMCL) and posterior oblique ligament (POL) and their related osseous landmarks.\n\n\nMETHODS\nInsertions of the sMCL and POL were identified and marked in 22 unpaired human cadaveric knees. The surface area, location, positional relations, and morphology of the sMCL and POL insertions and related osseous structures were analyzed on 3-dimensional images.\n\n\nRESULTS\nThe femoral insertion of the POL was located 18.3 mm distal to the apex of the adductor tubercle (AT). The femoral insertion of the sMCL was located 21.1 mm distal to the AT and 9.2 mm anterior to the POL. The angle between the femoral axis and femoral insertion of the sMCL was 18.6°, and that between the femoral axis and the POL insertion was 5.1°. The anterior portions of the distal fibers of the POL were attached to the fascia cruris and semimembranosus tendon, whereas the posterior fibers were attached to the posteromedial side of the tibia directly. The tibial insertion of the POL was located just proximal and medial to the superior edge of the semimembranosus groove. The tibial insertion of the sMCL was attached firmly and widely to the tibial crest. The mean linear distances between the tibial insertion of the POL or sMCL and joint line were 5.8 and 49.6 mm, respectively.\n\n\nCONCLUSIONS\nThis study used 3-dimensional images to assess the insertions of the sMCL and POL and their related osseous landmarks. The AT was identified clearly as an osseous landmark of the femoral insertions of the sMCL and POL. The tibial crest and semimembranosus groove served as osseous landmarks of the tibial insertions of the sMCL and POL.\n\n\nCLINICAL RELEVANCE\nBy showing further details of the anatomy of the knee, the described findings can assist surgeons in anatomic reconstruction of the sMCL and POL.", "title": "" }, { "docid": "152e5d8979eb1187e98ecc0424bb1fde", "text": "Face verification remains a challenging problem in very complex conditions with large variations such as pose, illumination, expression, and occlusions. This problem is exacerbated when we rely unrealistically on a single training data source, which is often insufficient to cover the intrinsically complex face variations. 
This paper proposes a principled multi-task learning approach based on Discriminative Gaussian Process Latent Variable Model (DGPLVM), named GaussianFace, for face verification. In contrast to relying unrealistically on a single training data source, our model exploits additional data from multiple source-domains to improve the generalization performance of face verification in an unknown target-domain. Importantly, our model can adapt automatically to complex data distributions, and therefore can well capture complex face variations inherent in multiple sources. To enhance discriminative power, we introduced a more efficient equivalent form of Kernel Fisher Discriminant Analysis to DGPLVM. To speed up the process of inference and prediction, we exploited the low rank approximation method. Extensive experiments demonstrated the effectiveness of the proposed model in learning from diverse data sources and generalizing to unseen domains. Specifically, the accuracy of our algorithm achieved an impressive accuracy rate of 98.52% on the well-known and challenging Labeled Faces in the Wild (LFW) benchmark. For the first time, the human-level performance in face verification (97.53%) on LFW is surpassed.", "title": "" }, { "docid": "2490ad05628f62881e16338914135d17", "text": "The authors examined the hypothesis that judgments of learning (JOL), if governed by processing fluency during encoding, should be insensitive to the anticipated retention interval. Indeed, neither item-by-item nor aggregate JOLs exhibited \"forgetting\" unless participants were asked to estimate recall rates for several different retention intervals, in which case their estimates mimicked closely actual recall rates. These results and others reported suggest that participants can access their knowledge about forgetting but only when theory-based predictions are made, and then only when the notion of forgetting is accentuated either by manipulating retention interval within individuals or by framing recall predictions in terms of forgetting rather than remembering. The authors interpret their findings in terms of the distinction between experience-based and theory-based JOLs.", "title": "" }, { "docid": "29e07bf313daaa3f6bf1d67224f6e4b6", "text": "An overview of the high-frequency reflectometer technology deployed in Anritsu’s VectorStar Vector Network Analyzer (VNA) family is given, leading to a detailed description of the architecture used to extend the frequency range of VectorStar into the high millimeter waves. It is shown that this technology results in miniature frequency-extension modules that provide unique capabilities such as direct connection to wafer probes, dense multi-port measurements, test-port power leveling, enhanced raw directivity, and reduced measurement complexity when compared with existing solutions. These capabilities, combined with the frequency-scalable nature of the reflectometers provide users with a unique and compelling solution for their current and future high-frequency measurement needs.", "title": "" }, { "docid": "c72940e6154fa31f6bedca17336f8a94", "text": "Following on from ecological theories of perception, such as the one proposed by [Gibson, J. J. (1966). The senses considered as perceptual systems. Boston: Houghton Mifflin] this paper reviews the literature on the multisensory interactions underlying the perception of flavor in order to determine the extent to which it is really appropriate to consider flavor perception as a distinct perceptual system. 
We propose that the multisensory perception of flavor may be indicative of the fact that the taxonomy currently used to define our senses is simply not appropriate. According to the view outlined here, the act of eating allows the different qualities of foodstuffs to be combined into unified percepts; and flavor can be used as a term to describe the combination of tastes, smells, trigeminal, and tactile sensations as well as the visual and auditory cues, that we perceive when tasting food.", "title": "" }, { "docid": "5e5a2edef28c24197df309b37d892b81", "text": "Systemic lupus erythematosus (SLE) is a chronic autoimmune disease and its pathogenesis is unknown. SLE is regulated by complement receptors, proteins and antibodies such as complement receptor 2 (CR2/CD21), anti-dsDNA antibodies, Cysteine p Guanidine DNA (CpG DNA), toll-like receptor 9 (TLR9), interluekin-6 (IL-6), and interferon(IFN-α). Upon activation of plasmacytoid dendritic cells by bacterial CpG DNA or synthetic CpG ODN, these ligands binds to the cell surface CR2 and TLR9 to generate pro inflammatory cytokines via through NF-kB. In this, binding of these ligands induces releases of IFN-α from the plasmacytoid dendritic cells which further binds to IFN-α 1 & 2 receptors present on B cells. This binding was not completely blocked by an anti-IFNαR1 inhibitory antibody, indicating that the released IFN-α may partially binds to the CR2 present on the surface of B cells. IFN-α and IL-6 released from B cells was partially blocked by anti-CR2 inhibitory mAb171. These studies suggested that the cell surface CR2 partially involved in binding these ligands to generate pro inflammatory cytokines. More importantly these CpG DNA or CpG ODN predominantly binds to the cell surface/cellular TLR9 on B cells in order to induce the release of IL-6 and IFN-α, and other pro-inflammatory cytokines. This review describes how the bacterial CpG DNA/CpG motif/ CpG ODN regulate the innate immune system through B cell surface CR2 and TLR9 in B cell signaling.", "title": "" }, { "docid": "da989da66f8c2019adf49eae97fc2131", "text": "Psychedelic drugs are making waves as modern trials support their therapeutic potential and various media continue to pique public interest. In this opinion piece, we draw attention to a long-recognised component of the psychedelic treatment model, namely ‘set’ and ‘setting’ – subsumed here under the umbrella term ‘context’. We highlight: (a) the pharmacological mechanisms of classic psychedelics (5-HT2A receptor agonism and associated plasticity) that we believe render their effects exceptionally sensitive to context, (b) a study design for testing assumptions regarding positive interactions between psychedelics and context, and (c) new findings from our group regarding contextual determinants of the quality of a psychedelic experience and how acute experience predicts subsequent long-term mental health outcomes. 
We hope that this article can: (a) inform on good practice in psychedelic research, (b) provide a roadmap for optimising treatment models, and (c) help tackle unhelpful stigma still surrounding these compounds, while developing an evidence base for long-held assumptions about the critical importance of context in relation to psychedelic use that can help minimise harms and maximise potential benefits.", "title": "" }, { "docid": "0d9420b97012ce445fdf39fb009e32c4", "text": "Greater numbers of young children with complicated, serious physical health, mental health, or developmental problems are entering foster care during the early years when brain growth is most active. Every effort should be made to make foster care a positive experience and a healing process for the child. Threats to a child’s development from abuse and neglect should be understood by all participants in the child welfare system. Pediatricians have an important role in assessing the child’s needs, providing comprehensive services, and advocating on the child’s behalf. The developmental issues important for young children in foster care are reviewed, including: 1) the implications and consequences of abuse, neglect, and placement in foster care on early brain development; 2) the importance and challenges of establishing a child’s attachment to caregivers; 3) the importance of considering a child’s changing sense of time in all aspects of the foster care experience; and 4) the child’s response to stress. Additional topics addressed relate to parental roles and kinship care, parent-child contact, permanency decision-making, and the components of comprehensive assessment and treatment of a child’s development and mental health needs. More than 500 000 children are in foster care in the United States.1,2 Most of these children have been the victims of repeated abuse and prolonged neglect and have not experienced a nurturing, stable environment during the early years of life. Such experiences are critical in the shortand long-term development of a child’s brain and the ability to subsequently participate fully in society.3–8 Children in foster care have disproportionately high rates of physical, developmental, and mental health problems1,9 and often have many unmet medical and mental health care needs.10 Pediatricians, as advocates for children and their families, have a special responsibility to evaluate and help address these needs. Legal responsibility for establishing where foster children live and which adults have custody rests jointly with the child welfare and judiciary systems. Decisions about assessment, care, and planning should be made with sufficient information about the particular strengths and challenges of each child. Pediatricians have an important role in helping to develop an accurate, comprehensive profile of the child. To create a useful assessment, it is imperative that complete health and developmental histories are available to the pediatrician at the time of these evaluations. Pediatricians and other professionals with expertise in child development should be proactive advisors to child protection workers and judges regarding the child’s needs and best interests, particularly regarding issues of placement, permanency planning, and medical, developmental, and mental health treatment plans. 
For example, maintaining contact between children and their birth families is generally in the best interest of the child, and such efforts require adequate support services to improve the integrity of distressed families. However, when keeping a family together may not be in the best interest of the child, alternative placement should be based on social, medical, psychological, and developmental assessments of each child and the capabilities of the caregivers to meet those needs. Health care systems, social services systems, and judicial systems are frequently overwhelmed by their responsibilities and caseloads. Pediatricians can serve as advocates to ensure each child’s conditions and needs are evaluated and treated properly and to improve the overall operation of these systems. Availability and full utilization of resources ensure comprehensive assessment, planning, and provision of health care. Adequate knowledge about each child’s development supports better placement, custody, and treatment decisions. Improved programs for all children enhance the therapeutic effects of government-sponsored protective services (eg, foster care, family maintenance). The following issues should be considered when social agencies intervene and when physicians participate in caring for children in protective services. EARLY BRAIN AND CHILD DEVELOPMENT More children are entering foster care in the early years of life when brain growth and development are most active.11–14 During the first 3 to 4 years of life, the anatomic brain structures that govern personality traits, learning processes, and coping with stress and emotions are established, strengthened, and made permanent.15,16 If unused, these structures atrophy.17 The nerve connections and neurotransmitter networks that are forming during these critical years are influenced by negative environmental conditions, including lack of stimulation, child abuse, or violence within the family.18 It is known that emotional and cognitive disruptions in the early lives of children have the potential to impair brain development.18 Paramount in the lives of these children is their need for continuity with their primary attachment figures and a sense of permanence that is enhanced The recommendations in this statement do not indicate an exclusive course of treatment or serve as a standard of medical care. Variations, taking into account individual circumstances, may be appropriate. PEDIATRICS (ISSN 0031 4005). Copyright © 2000 by the American Acad-", "title": "" }, { "docid": "871298644bc8b7187a20a4803ec7e723", "text": "Intrinsic video decomposition refers to the fundamentally ambiguous task of separating a video stream into its constituent layers, in particular reflectance and shading layers. Such a decomposition is the basis for a variety of video manipulation applications, such as realistic recoloring or retexturing of objects. We present a novel variational approach to tackle this underconstrained inverse problem at real-time frame rates, which enables on-line processing of live video footage. The problem of finding the intrinsic decomposition is formulated as a mixed variational ℓ2-ℓp-optimization problem based on an objective function that is specifically tailored for fast optimization. To this end, we propose a novel combination of sophisticated local spatial and global spatio-temporal priors resulting in temporally coherent decompositions at real-time frame rates without the need for explicit correspondence search. 
We tackle the resulting high-dimensional, non-convex optimization problem via a novel data-parallel iteratively reweighted least squares solver that runs on commodity graphics hardware. Real-time performance is obtained by combining a local-global solution strategy with hierarchical coarse-to-fine optimization. Compelling real-time augmented reality applications, such as recoloring, material editing and retexturing, are demonstrated in a live setup. Our qualitative and quantitative evaluation shows that we obtain high-quality real-time decompositions even for challenging sequences. Our method is able to outperform state-of-the-art approaches in terms of runtime and result quality -- even without user guidance such as scribbles.", "title": "" }, { "docid": "b2e1b184096433db2bbd46cf01ef99c6", "text": "This is a short overview of a totally ordered broadcast protocol used by ZooKeeper, called Zab. It is conceptually easy to understand, is easy to implement, and gives high performance. In this paper we present the requirements ZooKeeper makes on Zab, we show how the protocol is used, and we give an overview of how the protocol works.", "title": "" }, { "docid": "083d621f946cf3ec5fdead536446c23f", "text": "When deciding whether two stimuli rotated in space are identical or mirror reversed, subjects employ mental rotation to solve the task. In children mental rotation can be trained by extensive repetition of the task, but the improvement seems to rely on the retrieval of previously learned stimuli. We assumed that due to the close relation between mental and manual rotation in children a manual training should improve the mental rotation process itself. The manual training we developed indeed ameliorated mental rotation and the training effect was not limited to learned stimuli. While boys outperformed girls in the mental rotation test before the manual rotation training, we found no gender differences in the results of the manual rotation task. 2006 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "3500278940baaf6f510ad47463cbf5ed", "text": "Different word embedding models capture different aspects of linguistic properties. This inspired us to propose a model (MMaxLSTM-CNN) for employing multiple sets of word embeddings for evaluating sentence similarity/relation. Representing each word by multiple word embeddings, the MaxLSTM-CNN encoder generates a novel sentence embedding. We then learn the similarity/relation between our sentence embeddings via Multi-level comparison. Our method M-MaxLSTMCNN consistently shows strong performances in several tasks (i.e., measure textual similarity, identify paraphrase, recognize textual entailment). According to the experimental results on STS Benchmark dataset and SICK dataset from SemEval, M-MaxLSTM-CNN outperforms the state-of-the-art methods for textual similarity tasks. Our model does not use hand-crafted features (e.g., alignment features, Ngram overlaps, dependency features) as well as does not require pretrained word embeddings to have the same dimension.", "title": "" }, { "docid": "ee0d858955c3c45ac3d990d3ad9d56ed", "text": "Survival analysis is a subfield of statistics where the goal is to analyze and model data where the outcome is the time until an event of interest occurs. One of the main challenges in this context is the presence of instances whose event outcomes become unobservable after a certain time point or when some instances do not experience any event during the monitoring period. 
This so-called censoring can be handled most effectively using survival analysis techniques. Traditionally, statistical approaches have been widely developed in the literature to overcome the issue of censoring. In addition, many machine learning algorithms have been adapted to deal with such censored data and tackle other challenging problems that arise in real-world data. In this survey, we provide a comprehensive and structured review of the statistical methods typically used and the machine learning techniques developed for survival analysis, along with a detailed taxonomy of the existing methods. We also discuss several topics that are closely related to survival analysis and describe several successful applications in a variety of real-world application domains. We hope that this article will give readers a more comprehensive understanding of recent advances in survival analysis and offer some guidelines for applying these approaches to solve new problems arising in applications involving censored data.", "title": "" }, { "docid": "7de911386f69397afe76e427e7ae3997", "text": "Photonic crystal slabs are a versatile and important platform for molding the flow of light. In this thesis, we consider ways to control the emission of light from photonic crystal slab structures, specifically focusing on directional, asymmetric emission, and on emitting light with interesting topological features. First, we develop a general coupled-mode theory formalism to derive bounds on the asymmetric decay rates to top and bottom of a photonic crystal slab, for a resonance with arbitrary in-plane wavevector. We then employ this formalism to inversionsymmetric structures, and show through numerical simulations that asymmetries of top-down decay rates exceeding 104 can be achieved by tuning the resonance frequency to coincide with the perfectly transmitting Fabry-Perot frequency. The emission direction can also be rapidly switched from top to bottom by tuning the wavevector or frequency. We then consider the generation of Mobius strips of light polarization, i.e. vector beams with half-integer polarization winding, from photonic crystal slabs. We show that a quadratic degeneracy formed by symmetry considerations can be split into a pair of Dirac points, which can be further split into four exceptional points. Through calculations of an analytical two-band model and numerical simulations of two-dimensional photonic crystals and photonic crystal slabs, we demonstrate the existence of isofrequency contours encircling two exceptional points, and show the half-integer polarization winding along these isofrequency contours. We further propose a realistic photonic crystal slab structure and experimental setup to verify the existence of such Mobius strips of light polarization. Thesis Supervisor: Marin Solja-id Title: Professor of Physics and MacArthur Fellow", "title": "" }, { "docid": "c346ddfd1247d335c1a45d094ae2bb60", "text": "In this paper we introduce a novel approach for stereoscopic rendering of virtual environments with a wide Field-of-View (FoV) up to 360°. Handling such a wide FoV implies the use of non-planar projections and generates specific problems such as for rasterization and clipping of primitives. We propose a novel pre-clip stage specifically adapted to geometric approaches for which problems occur with polygons spanning across the projection discontinuities. 
Our approach integrates seamlessly with immersive virtual reality systems as it is compatible with stereoscopy, head-tracking, and multi-surface projections. The benchmarking of our approach with different hardware setups could show that it is well compliant with real-time constraint, and capable of displaying a wide range of FoVs. Thus, our geometric approach could be used in various VR applications in which the user needs to extend the FoV and apprehend more visual information.", "title": "" }, { "docid": "94ea8b56e8ade27c15e8603606003874", "text": "Mistry A, et al. Arch Dis Child Educ Pract Ed 2017;0:1–3. doi:10.1136/archdischild-2017-312905 A woman was admitted for planned induction at 39+5 weeks gestation. This was her third pregnancy. She had two previous children who were fit and well. Antenatal scans showed a fetal intra-abdominal mass measuring 6.2×5.5×7 cm in the lower abdomen, which was compressing the bladder. The mass was thought to be originating from the ovary or the bowel. On postnatal examination, the baby girl had a distended and full abdomen. There was a right-sided abdominal mass palpable above the umbilicus and 3 cm in size. It was firm, smooth and mobile in consistency. She had a normal anus and external female genitalia, with evidence of a prolapsed vagina on crying. She had passed urine and opened her bowels. The baby was kept nil by mouth and on intravenous fluids until the abdominal radiography was performed. The image is shown in figure 1.", "title": "" }, { "docid": "401aa3faf42ccdc2d63f5d76bd7092e4", "text": "We introduce a Markov-model-based framework for Moving Target Defense (MTD) analysis. The framework allows modeling of a broad range of MTD strategies, provides general theorems about how the probability of a successful adversary defeating an MTD strategy is related to the amount of time/cost spent by the adversary, and shows how a multilevel composition of MTD strategies can be analyzed by a straightforward combination of the analysis for each one of these strategies. Within the proposed framework we define the concept of security capacity which measures the strength or effectiveness of an MTD strategy: the security capacity depends on MTD specific parameters and more general system parameters. We apply our framework to two concrete MTD strategies.", "title": "" } ]
scidocsrr
41024f70f912f9cd77714a8823688ba2
An ensemble classifier system for early diagnosis of acute lymphoblastic leukemia in blood microscopic images
[ { "docid": "48b14b78512a8f63d3a9dcdf70d88182", "text": "A cute lymphocytic leukemia (ALL) is a malignant disease characterized by the accumulation of lymphoblast in the bone marrow. An improved scheme for ALL detection in blood microscopic images is presented here. In this study features i.e. hausdorff dimension and contour signature are employed to classify a lymphocytic cell in the blood image into normal lymphocyte or lymphoblast (blasts). In addition shape and texture features are also extracted for better classification. Initial segmentation is done using K-means clustering which segregates leukocytes or white blood cells (WBC) from other blood components i.e. erythrocytes and platelets. The results of K-means are used for evaluating individual cell shape, texture and other features for final detection of leukemia. Fractal features i.e. hausdorff dimension is implemented for measuring perimeter roughness and hence classifying a lymphocytic cell nucleus. A total of 108 blood smear images were considered for feature extraction and final performance evaluation is validated with the results of a hematologist.", "title": "" } ]
[ { "docid": "70e34d4ccd294d7811e344616638a3af", "text": "The rapid development of social media sharing has created a huge demand for automatic media classification and annotation techniques. Attribute learning has emerged as a promising paradigm for bridging the semantic gap and addressing data sparsity via transferring attribute knowledge in object recognition and relatively simple action classification. In this paper, we address the task of attribute learning for understanding multimedia data with sparse and incomplete labels. In particular, we focus on videos of social group activities, which are particularly challenging and topical examples of this task because of their multimodal content and complex and unstructured nature relative to the density of annotations. To solve this problem, we 1) introduce a concept of semilatent attribute space, expressing user-defined and latent attributes in a unified framework, and 2) propose a novel scalable probabilistic topic model for learning multimodal semilatent attributes, which dramatically reduces requirements for an exhaustive accurate attribute ontology and expensive annotation effort. We show that our framework is able to exploit latent attributes to outperform contemporary approaches for addressing a variety of realistic multimedia sparse data learning tasks including: multitask learning, learning with label noise, N-shot transfer learning, and importantly zero-shot learning.", "title": "" }, { "docid": "22d153c01c82117466777842724bbaca", "text": "State-of-the-art photovoltaics use high-purity, large-area, wafer-scale single-crystalline semiconductors grown by sophisticated, high-temperature crystal growth processes. We demonstrate a solution-based hot-casting technique to grow continuous, pinhole-free thin films of organometallic perovskites with millimeter-scale crystalline grains. We fabricated planar solar cells with efficiencies approaching 18%, with little cell-to-cell variability. The devices show hysteresis-free photovoltaic response, which had been a fundamental bottleneck for the stable operation of perovskite devices. Characterization and modeling attribute the improved performance to reduced bulk defects and improved charge carrier mobility in large-grain devices. We anticipate that this technique will lead the field toward synthesis of wafer-scale crystalline perovskites, necessary for the fabrication of high-efficiency solar cells, and will be applicable to several other material systems plagued by polydispersity, defects, and grain boundary recombination in solution-processed thin films.", "title": "" }, { "docid": "58703ec280887ebdcaeba826bf719b62", "text": "The management and conservation of the world's oceans require synthesis of spatial data on the distribution and intensity of human activities and the overlap of their impacts on marine ecosystems. We developed an ecosystem-specific, multiscale spatial model to synthesize 17 global data sets of anthropogenic drivers of ecological change for 20 marine ecosystems. Our analysis indicates that no area is unaffected by human influence and that a large fraction (41%) is strongly affected by multiple drivers. However, large areas of relatively little human impact remain, particularly near the poles. 
The analytical process and resulting maps provide flexible tools for regional and global efforts to allocate conservation resources; to implement ecosystem-based management; and to inform marine spatial planning, education, and basic research.", "title": "" }, { "docid": "b93983990101a9dbd363a5d0aa2e4088", "text": "BPMN is an emerging standard for process modelling and has the potential to become a process specification language to capture and exchange process models between stakeholders and tools. Ongoing research and standardisation efforts target a formal behavioural semantics and metamodel. Yet it is hardly specified how humans are embedded in the processes and how the work distribution among human resources can be defined. This paper addresses these issues by identifying the required model information based on the Workflow Resource Patterns. We evaluate BPMN and the upcoming metamodel standard (BPDM) for their capabilities and propose extensions.", "title": "" }, { "docid": "5e8f88f95910e3dbea995108450f8166", "text": "This paper summarizes ongoing research in NLP (Natural Language Processing) driven citation analysis and describes experiments and motivating examples of how this work can be used to enhance traditional scientometrics analysis that is based on simply treating citations as a “vote” from the citing paper to cited paper. In particular, we describe our dataset for citation polarity and citation purpose, present experimental results on the automatic detection of these indicators, and demonstrate the use of such annotations for studying research dynamics and scientific summarization. We also look at two complementary problems that show up in NLP driven citation analysis for a specific target paper. The first problem is extracting citation context, the implicit citation sentences that do not contain explicit anchors to the target paper. The second problem is extracting reference scope, the target relevant segment of a complicated citing sentence that cites multiple papers. We show how these tasks can be helpful in improving sentiment analysis and citation based summarization. ∗This research was conducted while the authors were at University of Michigan. 2 Rahul Jha and others", "title": "" }, { "docid": "8da42ecb961c885e7e744d15bb79c812", "text": "Danielle S. Bassett, Perry Zurn, and Joshua I. Gold Department of Bioengineering, School of Engineering and Applied Sciences, University of Pennsylvania, Philadelphia, PA, 19104 Department of Physics & Astronomy, College of Arts and Sciences, University of Pennsylvania, Philadelphia, PA, 19104 Department of Electrical & Systems Engineering, School of Engineering and Applied Sciences, University of Pennsylvania, Philadelphia, PA, 19104 Department of Neurology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104 Department of Philosophy, American University, Washington, DC, 20016 Department of Neuroscience, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104 and To whom correspondence should be addressed: dsb@seas.upenn.edu", "title": "" }, { "docid": "5bdf4585df04c00ebcf00ce94a86ab38", "text": "High-voltage pulse-generators can be used effectively for bacterial decontamination in water treatment applications. Applying a pulsed electric field to the infected water sample guarantees killing of harmful germs and bacteria. In this paper, a modular high-voltage pulse-generator with sequential charging is proposed for water treatment via underwater pulsed streamer corona discharge. 
The proposed generator consists of series-connected modules similar to an arm of a modular multilevel converter. The modules' capacitors are charged sequentially from a relatively low-voltage dc supply, then they are connected in series and discharged into the load. Two configurations are proposed in this paper, one for low repetitive pulse rate applications, and the other for high repetitive pulse rates. In the first topology, the equivalent resistance of the infected water sample is used as a charging resistance for the generator's capacitors during the charging process. While in the second topology, the water resistance is bypassed during the charging process, and an external charging resistance with proper value is used instead. In this paper, detailed designs for the proposed pulse-generators are presented and validated by simulation results using MATLAB. A scaled down experimental setup has been built to show the viability of the proposed concept.", "title": "" }, { "docid": "a7d9c920e0cd2521a8df341841c44db4", "text": "Abstract. We propose a chromatic aberration (CA) reduction technique that removes artifacts caused by lateral CA and longitudinal CA, simultaneously. In general, most visible CA-related artifacts appear locally in the neighborhoods of strong edges. Because these artifacts usually have local characteristics, they cannot be removed well by regular global warping methods. Therefore, we designed a nonlinear partial differential equation (PDE) in which the local characteristics of the CA are taken into account. The proposed algorithm estimates the regions with apparent CA artifacts and the ratios of the magnitudes between the color channels. Using this information, the proposed PDE matches the gradients of the edges in the red and blue channels to the gradient in the green channel, which results in an alignment of the positions of the edges while simultaneously performing a deblurring process on the edges. Experimental results show that the proposed method can effectively remove even significant CA artifacts, such as purple fringing as identified by the image sensor. The experimental results show that the proposed algorithm achieves better performance than existing algorithms. © 2010 SPIE and IS&T. DOI: 10.1117/1.3494278", "title": "" }, { "docid": "0dad686449811de611e9c55dbc9fc255", "text": "Neural networks with tree-based sentence encoders have shown better results on many downstream tasks. Most of existing tree-based encoders adopt syntactic parsing trees as the explicit structure prior. To study the effectiveness of different tree structures, we replace the parsing trees with trivial trees (i.e., binary balanced tree, left-branching tree and right-branching tree) in the encoders. Though trivial trees contain no syntactic information, those encoders get competitive or even better results on all of the ten downstream tasks we investigated. This surprising result indicates that explicit syntax guidance may not be the main contributor to the superior performances of tree-based neural sentence modeling. Further analysis show that tree modeling gives better results when crucial words are closer to the final representation. Additional experiments give more clues on how to design an effective tree-based encoder. Our code is opensource and available at https://github.
com/ExplorerFreda/TreeEnc.", "title": "" }, { "docid": "bb253cee8f3b8de7c90e09ef878434f3", "text": "Under most widely-used security mechanisms the programs users run possess more authority than is strictly necessary, with each process typically capable of utilising all of the user’s privileges. Consequently such security mechanisms often fail to protect against contemporary threats, such as previously unknown (‘zero-day’) malware and software vulnerabilities, as processes can misuse a user’s privileges to behave maliciously. Application restrictions and sandboxes can mitigate threats that traditional approaches to access control fail to prevent by limiting the authority granted to each process. This developing field has become an active area of research, and a variety of solutions have been proposed. However, despite the seriousness of the problem and the security advantages these schemes provide, practical obstacles have restricted their adoption. This paper describes the motivation for application restrictions and sandboxes, presenting an indepth review of the literature covering existing systems. This is the most comprehensive review of the field to date. The paper outlines the broad categories of existing application-oriented access control schemes, such as isolation and rule-based schemes, and discusses their limitations. Adoption of these schemes has arguably been impeded by workflow, policy complexity, and usability issues. The paper concludes with a discussion on areas for future work, and points a way forward within this developing field of research with recommendations for usability and abstraction to be considered to a further extent when designing application-oriented access", "title": "" }, { "docid": "8a28f3ad78a77922fd500b805139de4b", "text": "Sina Weibo is the most popular and fast growing microblogging social network in China. However, more and more spam messages are also emerging on Sina Weibo. How to detect these spam is essential for the social network security. While most previous studies attempt to detect the microblogging spam by identifying spammers, in this paper, we want to exam whether we can detect the spam by each single Weibo message, because we notice that more and more spam Weibos are posted by normal users or even popular verified users. We propose a Weibo spam detection method based on machine learning algorithm. In addition, different from most existing microblogging spam detection methods which are based on English microblogs, our method is designed to deal with the features of Chinese microblogs. Our extensive empirical study shows the effectiveness of our approach.", "title": "" }, { "docid": "716f8cadac94110c4a00bc81480a4b66", "text": "The last decade has witnessed the prevalence of sensor and GPS technologies that produce a sheer volume of trajectory data representing the motion history of moving objects. Measuring similarity between trajectories is undoubtedly one of the most important tasks in trajectory data management since it serves as the foundation of many advanced analyses such as similarity search, clustering, and classification. In this light, tremendous efforts have been spent on this topic, which results in a large number of trajectory similarity measures. Generally, each individual work introducing a new distance measure has made specific claims on the superiority of their proposal. 
However, for most works, the experimental study was focused on demonstrating the efficiency of the search algorithms, leaving the effectiveness aspect unverified empirically. In this paper, we conduct a comparative experimental study on the effectiveness of six widely used trajectory similarity measures based on a real taxi trajectory dataset. By applying a variety of transformations we designed for each original trajectory, our experimental observations demonstrate the advantages and drawbacks of these similarity measures in different circumstances.", "title": "" }, { "docid": "1e06f7e6b7b0d3f9a21a814e50af6e3c", "text": "The context-dependent nature of online aggression makes annotating large collections of data extremely difficult. Previously studied datasets in abusive language detection have been insufficient in size to efficiently train deep learning models. Recently, Hate and Abusive Speech on Twitter, a dataset much greater in size and reliability, has been released. However, this dataset has not been comprehensively studied to its potential. In this paper, we conduct the first comparative study of various learning models on Hate and Abusive Speech on Twitter, and discuss the possibility of using additional features and context data for improvements. Experimental results show that bidirectional GRU networks trained on word-level features, with Latent Topic Clustering modules, is the most accurate model scoring 0.805 F1.", "title": "" }, { "docid": "5bd713c468f48313e42b399f441bb709", "text": "Nowadays, malware is affecting not only PCs but also mobile devices, which became pervasive in everyday life. Mobile devices can access and store personal information (e.g., location, photos, and messages) and thus are appealing to malware authors. One of the most promising approach to analyze malware is by monitoring its execution in a sandbox (i.e., via dynamic analysis). In particular, most malware sandboxing solutions for Android rely on an emulator, rather than a real device. This motivates malware authors to include runtime checks in order to detect whether the malware is running in a virtualized environment. In that case, the malicious app does not trigger the malicious payload. The presence of differences between real devices and Android emulators started an arms race between security researchers and malware authors, where the former want to hide these differences and the latter try to seek them out. In this paper we present Mirage, a malware sandbox architecture for Android focused on dynamic analysis evasion attacks. We designed the components of Mirage to be extensible via software modules, in order to build specific countermeasures against such attacks. To the best of our knowledge, Mirage is the first modular sandbox architecture that is robust against sandbox detection techniques. As a representative case study, we present a proof of concept implementation of Mirage with a module that tackles evasion attacks based on sensors API return values.", "title": "" }, { "docid": "4f2ebb2640a36651fd8c01f3eeb0e13e", "text": "This paper addresses pixel-level segmentation of a human body from a single image. The problem is formulated as a multi-region segmentation where the human body is constrained to be a collection of geometrically linked regions and the background is split into a small number of distinct zones. We solve this problem in a Bayesian framework for jointly estimating articulated body pose and the pixel-level segmentation of each body part. 
Using an image likelihood function that simultaneously generates and evaluates the image segmentation corresponding to a given pose, we robustly explore the posterior body shape distribution using a data-driven, coarse-to-fine Metropolis Hastings sampling scheme that includes a strongly data-driven proposal term.", "title": "" }, { "docid": "9ba51fcf04fe9dff5bf368a55fa2a1aa", "text": "In social media, demographic inference is a critical task in order to gain a better understanding of a cohort and to facilitate interacting with one’s audience. Most previous work has made independence assumptions over topological, textual and label information on social networks. In this work, we employ recursive neural networks to break down these independence assumptions to obtain inference about demographic characteristics on Twitter. We show that our model performs better than existing models including the state-of-theart.", "title": "" }, { "docid": "1d964bb1b82e6de71a6407967a8d9fa0", "text": "Ensuring reliable access to clean and affordable water is one of the greatest global challenges of this century. As the world's population increases, water pollution becomes more complex and difficult to remove, and global climate change threatens to exacerbate water scarcity in many areas, the magnitude of this challenge is rapidly increasing. Wastewater reuse is becoming a common necessity, even as a source of potable water, but our separate wastewater collection and water supply systems are not designed to accommodate this pressing need. Furthermore, the aging centralized water and wastewater infrastructure in the developed world faces growing demands to produce higher quality water using less energy and with lower treatment costs. In addition, it is impractical to establish such massive systems in developing regions that currently lack water and wastewater infrastructure. These challenges underscore the need for technological innovation to transform the way we treat, distribute, use, and reuse water toward a distributed, differential water treatment and reuse paradigm (i.e., treat water and wastewater locally only to the required level dictated by the intended use). Nanotechnology offers opportunities to develop next-generation water supply systems. This Account reviews promising nanotechnology-enabled water treatment processes and provides a broad view on how they could transform our water supply and wastewater treatment systems. The extraordinary properties of nanomaterials, such as high surface area, photosensitivity, catalytic and antimicrobial activity, electrochemical, optical, and magnetic properties, and tunable pore size and surface chemistry, provide useful features for many applications. These applications include sensors for water quality monitoring, specialty adsorbents, solar disinfection/decontamination, and high performance membranes. More importantly, the modular, multifunctional and high-efficiency processes enabled by nanotechnology provide a promising route both to retrofit aging infrastructure and to develop high performance, low maintenance decentralized treatment systems including point-of-use devices. Broad implementation of nanotechnology in water treatment will require overcoming the relatively high costs of nanomaterials by enabling their reuse and mitigating risks to public and environmental health by minimizing potential exposure to nanoparticles and promoting their safer design. 
The development of nanotechnology must go hand in hand with environmental health and safety research to alleviate unintended consequences and contribute toward sustainable water management.", "title": "" }, { "docid": "49a778b673ea65340e2bc2ebce8472a2", "text": "Motorcycles have always been the primary mode of transport in developing countries. In recent years, there has been a rise in motorcycle accidents. One of the major reasons for fatalities in accidents is the motorcyclist not wearing a protective helmet. The most prevalent method for ensuring that motorcyclists wear helmet is traffic police manually monitoring motorcyclists at road junctions or through CCTV footage and penalizing those without helmet. But, it requires human intervention and efforts. This paper proposes an automated system for detecting motorcyclists not wearing helmet and retrieving their motorcycle number plates from CCTV footage video. The proposed system first does background subtraction from video to get moving objects. Then, moving objects are classified as motorcyclist or non-motorcyclist. For classified motorcyclist, head portion is located and it is classified as helmet or non-helmet. Finally, for identified motorcyclist without helmet, number plate of motorcycle is detected and the characters on it are extracted. The proposed system uses Convolutional Neural Networks trained using transfer learning on top of pre-trained model for classification which has helped in achieving greater accuracy. Experimental results on traffic videos show an accuracy of 98.72% on detection of motorcyclists without helmet.", "title": "" }, { "docid": "8d49e37ab80dae285dbf694ba1849f68", "text": "In this paper we present a reference architecture for ETL stages of EDM and LA that works with different data formats and different extraction sites, ensuring privacy and making easier for new participants to enter into the process without demanding more computing power. Considering scenarios with a multitude of virtual environments hosting educational activities, accessible through a common infrastructure, we devised a reference model where data generated from interaction between users and among users and the environment itself, are selected, organized and stored in local “baskets”. Local baskets are then collected and grouped in a global basket. Organization resources like item modeling are used in both levels of basket construction. Using this reference upon a client-server architectural style, a reference architecture was developed and has been used to carry out a project for an official foundation linked to Brazilian Ministry of Education, involving educational data mining and sharing of 100+ higher education institutions and their respective virtual environments. In this architecture, a client-collector inside each virtual environment collects information from database and event logs. This information along with definitions obtained from item models are used to build local baskets. A synchronization protocol keeps all item models synced with client-collectors and server-collectors generating global baskets. This approach has shown improvements on ETL like: parallel processing of items, economy on storage space and bandwidth, privacy assurance, better tenacity, and good scalability.", "title": "" }, { "docid": "42d3f666325c3c9e2d61fcbad3c6659a", "text": "Supernumerary or accessory nostrils are a very rare type of congenital nasal anomaly, with only a few cases reported in the literature. 
They can be associated with such malformations as facial clefts and they can be unilateral or bilateral, with most cases reported being unilateral. The accessory nostril may or may not communicate with the ipsilateral nasal cavity, probably depending on the degree of embryological progression of the anomaly. A case of simple supernumerary left nostril with no nasal cavity communication and with a normally developed nose is presented. The surgical treatment is described and the different speculative theories related to the embryogenesis of supernumerary nostrils are also reviewed.", "title": "" } ]
scidocsrr
474f8eb990c2361d81421943fa55ff87
Angewandte Mathematik und Informatik Universität zu Köln Level Planar Embedding in Linear Time
[ { "docid": "0c81db10ea2268b640073e3aaa49cb35", "text": "A data structure called a PQ-tree is introduced. PQ-trees can be used to represent the permutations of a set U in which various subsets of U occur consecutively. Efficient algorithms are presented for manipulating PQ-trees. Algorithms using PQ-trecs are then given which test for the consecutive ones property in matrices and for graph planarity. The consecutive ones test is extended to a test for interval graphs using a recently discovered fast recognition algorithm for chordal graphs. All of these algorithms require a number of steps linear in the size of their input.", "title": "" } ]
[ { "docid": "202439978e4bece800aa42b1fea99d7b", "text": "Although they are primitive vertebrates, zebrafish (Danio rerio) and medaka (Oryzias latipes) have surpassed other animals as the most used model organisms based on their many advantages. Studies on gene expression patterns, regulatory cis-elements identification, and gene functions can be facilitated by using zebrafish embryos via a number of techniques, including transgenesis, in vivo transient assay, overexpression by injection of mRNAs, knockdown by injection of morpholino oligonucleotides, knockout and gene editing by CRISPR/Cas9 system and mutagenesis. In addition, transgenic lines of model fish harboring a tissue-specific reporter have become a powerful tool for the study of biological sciences, since it is possible to visualize the dynamic expression of a specific gene in the transparent embryos. In particular, some transgenic fish lines and mutants display defective phenotypes similar to those of human diseases. Therefore, a wide variety of fish model not only sheds light on the molecular mechanisms underlying disease pathogenesis in vivo but also provides a living platform for high-throughput screening of drug candidates. Interestingly, transgenic model fish lines can also be applied as biosensors to detect environmental pollutants, and even as pet fish to display beautiful fluorescent colors. Therefore, transgenic model fish possess a broad spectrum of applications in modern biomedical research, as exampled in the following review.", "title": "" }, { "docid": "f4b92c53dc001d06489093ff302384b2", "text": "Computational topology has recently known an important development toward data analysis, giving birth to the field of topological data analysis. Topological persistence, or persistent homology, appears as a fundamental tool in this field. In this paper, we study topological persistence in general metric spaces, with a statistical approach. We show that the use of persistent homology can be naturally considered in general statistical frameworks and persistence diagrams can be used as statistics with interesting convergence properties. Some numerical experiments are performed in various contexts to illustrate our results.", "title": "" }, { "docid": "d157d7b6e1c5796b6d7e8fedf66e81d8", "text": "Intrusion detection for computer network systems becomes one of the most critical tasks for network administrators today. It has an important role for organizations, governments and our society due to its valuable resources on computer networks. Traditional misuse detection strategies are unable to detect new and unknown intrusion. Besides , anomaly detection in network security is aim to distinguish between illegal or malicious events and normal behavior of network systems. Anomaly detection can be considered as a classification problem where it builds models of normal network behavior, which it uses to detect new patterns that significantly deviate from the model. Most of the current research on anomaly detection is based on the learning of normally and anomaly behaviors. They do not take into account the previous, recent events to detect the new incoming one. In this paper, we propose a real time collective anomaly detection model based on neural network learning and feature operating. Normally a Long Short-Term Memory Recurrent Neural Network (LSTM RNN) is trained only on normal data and it is capable of predicting several time steps ahead of an input. 
In our approach, a LSTM RNN is trained with normal time series data before performing a live prediction for each time step. Instead of considering each time step separately, the observation of prediction errors from a certain number of time steps is now proposed as a new idea for detecting collective anomalies. The prediction errors from a number of the latest time steps above a threshold will indicate a collective anomaly. The model is built on a time series version of the KDD 1999 dataset. The experiments demonstrate that it is possible to offer reliable and efficient for collective anomaly detection.", "title": "" }, { "docid": "3ccc5fd5bbf570a361b40afca37cec92", "text": "Face detection techniques have been developed for decades, and one of remaining open challenges is detecting small faces in unconstrained conditions. The reason is that tiny faces are often lacking detailed information and blurring. In this paper, we proposed an algorithm to directly generate a clear high-resolution face from a blurry small one by adopting a generative adversarial network (GAN). Toward this end, the basic GAN formulation achieves it by super-resolving and refining sequentially (e.g. SR-GAN and cycle-GAN). However, we design a novel network to address the problem of super-resolving and refining jointly. We also introduce new training losses to guide the generator network to recover fine details and to promote the discriminator network to distinguish real vs. fake and face vs. non-face simultaneously. Extensive experiments on the challenging dataset WIDER FACE demonstrate the effectiveness of our proposed method in restoring a clear high-resolution face from a blurry small one, and show that the detection performance outperforms other state-of-the-art methods.", "title": "" }, { "docid": "af5aaf2d834eec9bf5e47a89be6a30d8", "text": "An often-cited advantage of automatic speech recognition (ASR) is that it is ‘fast’; it is quite easy for a person to speak at several hundred words a minute, well above the rates that are possible using other modes of data entry. However, in order to conduct a fair comparison between alternative data entry methods, it is necessary to consider not the input rate per se, but the rate at which it is possible to enter information that is fully correct. This paper describes a model for predicting the relative success of alternative method of data entry in terms of the effective ‘throughput’ that is achievable taking into account typical input data entry rates, error rates and error correction times. Results are presented for the entry of both conventional and SMS-style text.", "title": "" }, { "docid": "ada320bb2747d539ff6322bbd46bd9f0", "text": "Real job applicants completed a 5-factor model personality measure as part of the job application process. They were rejected; 6 months later they (n = 5,266) reapplied for the same job and completed the same personality measure. Results indicated that 5.2% or fewer improved their scores on any scale on the 2nd occasion; moreover, scale scores were as likely to change in the negative direction as the positive. Only 3 applicants changed scores on all 5 scales beyond a 95% confidence threshold. Construct validity of the personality scales remained intact across the 2 administrations, and the same structural model provided an acceptable fit to the scale score matrix on both occasions. 
For the small number of applicants whose scores changed beyond the standard error of measurement, the authors found the changes were systematic and predictable using measures of social skill, social desirability, and integrity. Results suggest that faking on personality measures is not a significant problem in real-world selection settings.", "title": "" }, { "docid": "ff71838a3f8f44e30dc69ed2f9371bfc", "text": "The idea that video games or computer-based applications can improve cognitive function has led to a proliferation of programs claiming to \"train the brain.\" However, there is often little scientific basis in the development of commercial training programs, and many research-based programs yield inconsistent or weak results. In this study, we sought to better understand the nature of cognitive abilities tapped by casual video games and thus reflect on their potential as a training tool. A moderately large sample of participants (n=209) played 20 web-based casual games and performed a battery of cognitive tasks. We used cognitive task analysis and multivariate statistical techniques to characterize the relationships between performance metrics. We validated the cognitive abilities measured in the task battery, examined a task analysis-based categorization of the casual games, and then characterized the relationship between game and task performance. We found that games categorized to tap working memory and reasoning were robustly related to performance on working memory and fluid intelligence tasks, with fluid intelligence best predicting scores on working memory and reasoning games. We discuss these results in the context of overlap in cognitive processes engaged by the cognitive tasks and casual games, and within the context of assessing near and far transfer. While this is not a training study, these findings provide a methodology to assess the validity of using certain games as training and assessment devices for specific cognitive abilities, and shed light on the mixed transfer results in the computer-based training literature. Moreover, the results can inform design of a more theoretically-driven and methodologically-sound cognitive training program.", "title": "" }, { "docid": "772b550b1193ee9627cd458c1bac52a6", "text": "We will describe recent developments in a system for machine learning that we’ve been working on for some time (Sol 86, Sol 89). It is meant to be a “Scientist’s Assistant” of great power and versatility in many areas of science and mathematics. It differs from other ambitious work in this area in that we are not so much interested in knowledge itself, as we are in how it is acquired how machines may learn. To start off, the system will learn to solve two very general kinds of problems. Most, but perhaps not all problems in science and engineering are of these two kinds. The first kind is Function Inversion. These are the P and NP problems of computational complexity theory. They include theorem proving, solution of equations, symbolic integration, etc. The second kind of problem is Time Limited Optimization. Inductive inference of all kinds, surface reconstruction, and image restoration are a few examples of this kind of problem. Designing an automobile in 6 months satisfying certain specifications and having minimal cost, is", "title": "" }, { "docid": "6b5a7e58a8407fa5cda402d4996a3a10", "text": "In the last few years, Hadoop become a \"de facto\" standard to process large scale data as an open source distributed system. 
With combination of data mining techniques, Hadoop improve data analysis utility. That is why, there are amount of research is studied to apply data mining technique to mapreduce framework in Hadoop. However, data mining have a possibility to cause a privacy violation and this threat is a huge obstacle for data mining using Hadoop. To solve this problem, numerous studies have been conducted. However, existing studies were insufficient and had several drawbacks. In this paper, we propose the privacy preserving data mining technique in Hadoop that is solve privacy violation without utility degradation. We focus on association rule mining algorithm that is representative data mining algorithm. We validate the proposed technique to satisfy performance and preserve data privacy through the experimental results.", "title": "" }, { "docid": "28c82ece7caa6e07bf31a143c2d3adbd", "text": "We develop a novel method for training of GANs for unsupervised and class conditional generation of images, called Linear Discriminant GAN (LD-GAN). The discriminator of an LD-GAN is trained to maximize the linear separability between distributions of hidden representations of generated and targeted samples, while the generator is updated based on the decision hyper-planes computed by performing LDA over the hidden representations. LD-GAN provides a concrete metric of separation capacity for the discriminator, and we experimentally show that it is possible to stabilize the training of LD-GAN simply by calibrating the update frequencies between generators and discriminators in the unsupervised case, without employment of normalization methods and constraints on weights. In the class conditional generation tasks, the proposed method shows improved training stability together with better generalization performance compared to WGAN (Arjovsky et al. 2017) that employs an auxiliary classifier.", "title": "" }, { "docid": "a88266320346fd1f518d7e3bdc14a6d6", "text": "Machine learning (ML) is now a fairly established technology, and user experience (UX) designers appear regularly to integrate ML services in new apps, devices, and systems. Interestingly, this technology has not experienced a wealth of design innovation that other technologies have, and this might be because it is a new and difficult design material. To better understand why we have witnessed little design innovation, we conducted a survey of current UX practitioners with regards to how new ML services are envisioned and developed in UX practice. Our survey probed on how ML may or may not have been a part of their UX design education, on how they work to create new things with developers, and on the challenges they have faced working with this material. We use the findings from this survey and our review of related literature to present a series of challenges for UX and interaction design research and education. Finally, we discuss areas where new research and new curriculum might help our community unlock the power of design thinking to re-imagine what ML might be and might do.", "title": "" }, { "docid": "ad8825642d101f9e43522066355467c7", "text": "Inverse Reinforcement Learning (IRL) describes the problem of learning an unknown reward function of a Markov Decision Process (MDP) from observed behavior of an agent. Since the agent’s behavior originates in its policy and MDP policies depend on both the stochastic system dynamics as well as the reward function, the solution of the inverse problem is significantly influenced by both. 
Current IRL approaches assume that if the transition model is unknown, additional samples from the system’s dynamics are accessible, or the observed behavior provides enough samples of the system’s dynamics to solve the inverse problem accurately. These assumptions are often not satisfied. To overcome this, we present a gradient-based IRL approach that simultaneously estimates the system’s dynamics. By solving the combined optimization problem, our approach takes into account the bias of the demonstrations, which stems from the generating policy. The evaluation on a synthetic MDP and a transfer learning task shows improvements regarding the sample efficiency as well as the accuracy of the estimated reward functions and transition models.", "title": "" }, { "docid": "d4ac52a52e780184359289ecb41e321e", "text": "Interleaving is an increasingly popular technique for evaluating information retrieval systems based on implicit user feedback. While a number of isolated studies have analyzed how this technique agrees with conventional offline evaluation approaches and other online techniques, a complete picture of its efficiency and effectiveness is still lacking. In this paper we extend and combine the body of empirical evidence regarding interleaving, and provide a comprehensive analysis of interleaving using data from two major commercial search engines and a retrieval system for scientific literature. In particular, we analyze the agreement of interleaving with manual relevance judgments and observational implicit feedback measures, estimate the statistical efficiency of interleaving, and explore the relative performance of different interleaving variants. We also show how to learn improved credit-assignment functions for clicks that further increase the sensitivity of interleaving.", "title": "" }, { "docid": "c938996e79711cae64bdcc23d7e3944b", "text": "Decreased antimicrobial efficiency has become a global public health issue. The paucity of new antibacterial drugs is evident, and the arsenal against infectious diseases needs to be improved urgently. The selection of plants as a source of prototype compounds is appropriate, since plant species naturally produce a wide range of secondary metabolites that act as a chemical line of defense against microorganisms in the environment. Although traditional approaches to combat microbial infections remain effective, targeting microbial virulence rather than survival seems to be an exciting strategy, since the modulation of virulence factors might lead to a milder evolutionary pressure for the development of resistance. Additionally, anti-infective chemotherapies may be successfully achieved by combining antivirulence and conventional antimicrobials, extending the lifespan of these drugs. This review presents an updated discussion of natural compounds isolated from plants with chemically characterized structures and activity against the major bacterial virulence factors: quorum sensing, bacterial biofilms, bacterial motility, bacterial toxins, bacterial pigments, bacterial enzymes, and bacterial surfactants. Moreover, a critical analysis of the most promising virulence factors is presented, highlighting their potential as targets to attenuate bacterial virulence. 
The ongoing progress in the field of antivirulence therapy may therefore help to translate this promising concept into real intervention strategies in clinical areas.", "title": "" }, { "docid": "33d65d9ae8575d9de3b6a7cf0c30db37", "text": "The prediction of collisions amongst N rigid objects may be reduced to a series of computations of the time to first contact for all pairs of objects. Simple enclosing bounds and hierarchical partitions of the space-time domain are often used to avoid testing object-pairs that clearly will not collide. When the remaining pairs involve only polyhedra under straight-line translation, the exact computation of the collision time and of the contacts requires only solving for intersections between linear geometries. When a pair is subject to a more general relative motion, such a direct collision prediction calculation may be intractable. The popular brute force collision detection strategy of executing the motion for a series of small time steps and of checking for static interferences after each step is often computationally prohibitive. We propose instead a less expensive collision prediction strategy, where we approximate the relative motion between pairs of objects by a sequence of screw motion segments, each defined by the relative position and orientation of the two objects at the beginning and at the end of the segment. We reduce the computation of the exact collision time and of the corresponding face/vertex and edge/edge collision points to the numeric extraction of the roots of simple univariate analytic functions. Furthermore, we propose a series of simple rejection tests, which exploit the particularity of the screw motion to immediately decide that some objects do not collide or to speed-up the prediction of collisions by about 30%, avoiding on average 3/4 of the root-finding queries even when the object actually collide.", "title": "" }, { "docid": "c53d4c50930078ac4f49e4bca7ff7485", "text": "A versatile 4-channel bipotentiostat system for biochemical sensing is presented. A 1pA current resolution and 8kHz bandwidth are suited for amperometric detection of neurotransmitters released by cells, monitored in a smart microfluidic culture chamber. Multiple electrochemical measurements can be carried out on arrays of microelectrodes. Key design issues are here discussed along with the results of extensive electrochemical experiments (cyclic voltammetry, chronoamperometry, redox recycling and potentiometry).", "title": "" }, { "docid": "9d7852606784ecb8501d5b26b1b98f7f", "text": "This work describes a visualization tool and sensor testbed that can be used for assessing the performance of both instruments and human observers in support of port and harbor security. Simulation and modeling of littoral environments must take into account the complex interplay of incident light distributions, spatially correlated boundary interfaces, bottom-type variation, and the three-dimensional structure of objects in and out of the water. A general methodology for a two-pass Monte Carlo solution called Photon Mapping has been adopted and developed in the context of littoral hydrologic optics. The resulting tool is an end-to-end technique for simulating spectral radiative transfer in natural waters. A modular design allows arbitrary distributions of optical properties, geometries, and incident radiance to be modeled effectively. This tool has been integrated as part of the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model. 
DIRSIG has an established history in multi and hyperspectral scene simulation of terrain targets ranging from the visible to the thermal infrared (0.380 20.0 microns). This tool extends its capabilities to the domain of hydrologic optics and can be used to simulate and develop active/passive sensors that could be deployed on either aerial or underwater platforms. Applications of this model as a visualization tool for underwater sensors or divers are also demonstrated.", "title": "" }, { "docid": "c4f30733a0a27f5b6a5e64ffdbcc60fa", "text": "The RLK/Pelle gene family is one of the largest gene families in plants with several hundred to more than a thousand members, but only a few family members exist in animals. This unbalanced distribution indicates a rather dramatic expansion of this gene family in land plants. In this chapter we review what is known about the RLK/Pelle family’s origin in eukaryotes, its domain content evolution, expansion patterns across plant and animal species, and the duplication mechanisms that contribute to its expansion. We conclude by summarizing current knowledge of plant RLK/Pelle functions for a discussion on the relative importance of neutral evolution and natural selection as the driving forces behind continuous expansion and innovation in this gene family.", "title": "" }, { "docid": "d2b06786b6daa023dfd9f58ac99e8186", "text": "A systematic method for deriving soft-switching three-port converters (TPCs), which can interface multiple energy, is proposed in this paper. Novel full-bridge (FB) TPCs featuring single-stage power conversion, reduced conduction loss, and low-voltage stress are derived. Two nonisolated bidirectional power ports and one isolated unidirectional load port are provided by integrating an interleaved bidirectional Buck/Boost converter and a bridgeless Boost rectifier via a high-frequency transformer. The switching bridges on the primary side are shared; hence, the number of active switches is reduced. Primary-side pulse width modulation and secondary-side phase shift control strategy are employed to provide two control freedoms. Voltage and power regulations over two of the three power ports are achieved. Furthermore, the current/voltage ripples on the primary-side power ports are reduced due to the interleaving operation. Zero-voltage switching and zero-current switching are realized for the active switches and diodes, respectively. A typical FB-TPC with voltage-doubler rectifier developed by the proposed method is analyzed in detail. Operation principles, control strategy, and characteristics of the FB-TPC are presented. Experiments have been carried out to demonstrate the feasibility and effectiveness of the proposed topology derivation method.", "title": "" }, { "docid": "0e74994211d0e3c1e85ba0c85aba3df5", "text": "Images of faces manipulated to make their shapes closer to the average are perceived as more attractive. The influences of symmetry and averageness are often confounded in studies based on full-face views of faces. Two experiments are reported that compared the effect of manipulating the averageness of female faces in profile and full-face views. Use of a profile view allows a face to be \"morphed\" toward an average shape without creating an image that becomes more symmetrical. Faces morphed toward the average were perceived as more attractive in both views, but the effect was significantly stronger for full-face views. Both full-face and profile views morphed away from the average shape were perceived as less attractive. 
It is concluded that the effect of averageness is independent of any effect of symmetry on the perceived attractiveness of female faces.", "title": "" } ]
scidocsrr
8660ab87ee327c21c41fe597b20ef4de
An Artificial Intelligence Approach to Financial Fraud Detection under IoT Environment: A Survey and Implementation
[ { "docid": "007706ad8c73376db70af36a66cedf14", "text": "— With the developments in the Information Technology and improvements in the communication channels, fraud is spreading all over the world, resulting in huge financial losses. Though fraud prevention mechanisms such as CHIP&PIN are developed, these mechanisms do not prevent the most common fraud types such as fraudulent credit card usages over virtual POS terminals or mail orders. As a result, fraud detection is the essential tool and probably the best way to stop such fraud types. In this study, classification models based on decision trees and support vector machines (SVM) are developed and applied on credit card fraud detection problem. This study is one of the firsts to compare the performance of SVM and decision tree methods in credit card fraud detection with a real data set.", "title": "" }, { "docid": "e43c27b652de5c015450f542c1eb8dd2", "text": "Financial fraud is increasing significantly with the development of modern technology and the global superhighways of communication, resulting in the loss of billions of dollars worldwide each year. The companies and financial institution loose huge amounts due to fraud and fraudsters continuously try to find new rules and tactics to commit illegal actions. Thus, fraud detection systems have become essential for all credit card issuing banks to minimize their losses. The most commonly used fraud detection methods are Neural Network (NN), rule-induction techniques, fuzzy system, decision trees, Support Vector Machines (SVM), Artificial Immune System (AIS), genetic algorithms, K-Nearest Neighbor algorithms. These techniques can be used alone or in collaboration using ensemble or meta-learning techniques to build classifiers. This paper presents a survey of various techniques used in credit card fraud detection and evaluates each methodology based on certain design criteria. And this survey enables us to build a hybrid approach for developing some effective algorithms which can perform well for the classification problem with variable misclassification costs and with higher accuracy.", "title": "" }, { "docid": "f36348f2909a9642c18590fca6c9b046", "text": "This study explores the use of data mining methods to detect fraud for on e-ledgers through financial statements. For this purpose, data set were produced by rule-based control application using 72 sample e-ledger and error percentages were calculated and labeled. The financial statements created from the labeled e-ledgers were trained by different data mining methods on 9 distinguishing features. In the training process, Linear Regression, Artificial Neural Networks, K-Nearest Neighbor algorithm, Support Vector Machine, Decision Stump, M5P Tree, J48 Tree, Random Forest and Decision Table were used. The results obtained are compared and interpreted.", "title": "" }, { "docid": "66248db37a0dcf8cb17c075108b513b4", "text": "Since past few years there is tremendous advancement in electronic commerce technology, and the use of credit cards has dramatically increased. As credit card becomes the most popular mode of payment for both online as well as regular purchase, cases of fraud associated with it are also rising. In this paper we present the necessary theory to detect fraud in credit card transaction processing using a Hidden Markov Model (HMM). An HMM is initially trained with the normal behavior of a cardholder. 
If an incoming credit card transaction is not accepted by the trained HMM with sufficiently high probability, it is considered to be fraudulent. At the same time, we try to ensure that genuine transactions are not rejected by using an enhancement to it (a hybrid model). In further sections we compare different methods for fraud detection and show why the HMM is preferred over other methods.", "title": "" }, { "docid": "5523695d47205129d0e5f6916d2d14f1", "text": "A phenomenal growth in the number of credit card transactions, especially for online purchases, has recently led to a substantial rise in fraudulent activities. Implementation of efficient fraud detection systems has thus become imperative for all credit card issuing banks to minimize their losses. In real life, fraudulent transactions are interspersed with genuine transactions and simple pattern matching is not often sufficient to detect them accurately. Thus, there is a need for combining both anomaly detection as well as misuse detection techniques. In this paper, we propose to use two-stage sequence alignment in which a profile analyzer (PA) first determines the similarity of an incoming sequence of transactions on a given credit card with the genuine cardholder's past spending sequences. The unusual transactions traced by the profile analyzer are next passed on to a deviation analyzer (DA) for possible alignment with past fraudulent behavior. The final decision about the nature of a transaction is taken on the basis of the observations by these two analyzers. In order to achieve online response time for both PA and DA, we suggest a new approach for combining two sequence alignment algorithms BLAST and SSAHA.", "title": "" } ]
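To make the HMM-based scoring idea in the fraud-detection passage above concrete, the following is a minimal, self-contained sketch rather than the authors' implementation: transaction amounts are assumed to be quantized into three symbols, a small discrete HMM with hand-picked illustrative parameters scores the recent window with the scaled forward algorithm, and an incoming transaction is flagged when its conditional probability under the cardholder's model is low. The state count, probabilities, window size, and threshold are all assumptions made for the example; in practice the parameters would be estimated from the cardholder's genuine spending history (for example with Baum-Welch).

import numpy as np

# Assumed toy model: 2 hidden "spending profile" states, 3 observation symbols
# (0 = low, 1 = medium, 2 = high transaction amount).
start_p = np.array([0.6, 0.4])                 # initial state distribution
trans_p = np.array([[0.7, 0.3],                # state transition matrix
                    [0.4, 0.6]])
emit_p = np.array([[0.7, 0.25, 0.05],          # P(symbol | state)
                   [0.1, 0.3, 0.6]])

def log_likelihood(obs):
    # Scaled forward algorithm: log P(obs | model) for a list of symbol indices.
    alpha = start_p * emit_p[:, obs[0]]
    log_l = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for symbol in obs[1:]:
        alpha = (alpha @ trans_p) * emit_p[:, symbol]
        log_l += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return log_l

def is_suspicious(history, new_symbol, window=10, threshold=1.0):
    # drop = -log P(new_symbol | recent window); flag if that conditional
    # probability falls below exp(-threshold). Window and threshold are
    # arbitrary illustrative choices.
    recent = list(history[-window:])
    drop = log_likelihood(recent) - log_likelihood(recent + [new_symbol])
    return drop > threshold

history = [0, 0, 1, 0, 1, 0, 0, 1, 0, 0]   # mostly low/medium spending
print(is_suspicious(history, 2))            # check a sudden high-amount purchase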
[ { "docid": "74235290789c24ce00d54541189a4617", "text": "This article deals with an interesting application of Fractional Order (FO) Proportional Integral Derivative (PID) Controller for speed regulation in a DC Motor Drive. The design of five interdependent Fractional Order controller parameters has been formulated as an optimization problem based on minimization of set point error and controller output. The task of optimization was carried out using Artificial Bee Colony (ABC) algorithm. A comparative study has also been made to highlight the advantage of using a Fractional order PID controller over conventional PID control scheme for speed regulation of application considered. Extensive simulation results are provided to validate the effectiveness of the proposed approach.", "title": "" }, { "docid": "06ef397d13383ff09f2f6741c0626192", "text": "A fully-integrated low-dropout regulator (LDO) with fast transient response and full spectrum power supply rejection (PSR) is proposed to provide a clean supply for noise-sensitive building blocks in wideband communication systems. With the proposed point-of-load LDO, chip-level high-frequency glitches are well attenuated, consequently the system performance is improved. A tri-loop LDO architecture is proposed and verified in a 65 nm CMOS process. In comparison to other fully-integrated designs, the output pole is set to be the dominant pole, and the internal poles are pushed to higher frequencies with only 50 μA of total quiescent current. For a 1.2 V input voltage and 1 V output voltage, the measured undershoot and overshoot is only 43 mV and 82 mV, respectively, for load transient of 0 μA to 10 mA within edge times of 200 ps. It achieves a transient response time of 1.15 ns and the figure-of-merit (FOM) of 5.74 ps. PSR is measured to be better than -12 dB over the whole spectrum (DC to 20 GHz tested). The prototype chip measures 260×90 μm2, including 140 pF of stacked on-chip capacitors.", "title": "" }, { "docid": "4d0921d8dd1004f0eed02df0ff95a092", "text": "The “open classroom” emerged as a reaction against the industrial-era enclosed and authoritarian classroom. Although contemporary school architecture continues to incorporate and express ideas of openness, more research is needed about how teachers adapt to new and different built contexts. Our purpose is to identify teacher reaction to the affordances of open space learning environments. We outline a case study of teacher perceptions of working in new open plan school buildings. The case study demonstrates that affordances of open space classrooms include flexibility, visibility and scrutiny, and a de-emphasis of authority; teacher reactions included collective practice, team orientation, and increased interactions and a democratisation of authority. We argue that teacher reaction to the new open classroom features adaptability, intensification of day-to-day practice, and intraand inter-personal knowledge and skills.", "title": "" }, { "docid": "33b8417f25b56e5ea9944f9f33fc162c", "text": "Researchers have attempted to model information diffusion and topic trends and lifecycle on online social networks. They have investigated the role of content, social connections and communities, familiarity and behavioral similarity in this context. The current article presents a survey of representative models that perform topic analysis, capture information diffusion, and explore the properties of social connections in the context of online social networks. 
The article concludes with a set of outlines of open problems and possible directions of future research interest. This article is intended for researchers to identify the current literature, and explore possibilities to improve the art.", "title": "" }, { "docid": "1eee94436ff7c65b18908dab7fbfb1c6", "text": "Many efforts have been made in recent years to tackle the unconstrained face recognition challenge. For the benchmark of this challenge, the Labeled Faces in theWild (LFW) database has been widely used. However, the standard LFW protocol is very limited, with only 3,000 genuine and 3,000 impostor matches for classification. Today a 97% accuracy can be achieved with this benchmark, remaining a very limited room for algorithm development. However, we argue that this accuracy may be too optimistic because the underlying false accept rate may still be high (e.g. 3%). Furthermore, performance evaluation at low FARs is not statistically sound by the standard protocol due to the limited number of impostor matches. Thereby we develop a new benchmark protocol to fully exploit all the 13,233 LFW face images for large-scale unconstrained face recognition evaluation under both verification and open-set identification scenarios, with a focus at low FARs. Based on the new benchmark, we evaluate 21 face recognition approaches by combining 3 kinds of features and 7 learning algorithms. The benchmark results show that the best algorithm achieves 41.66% verification rates at FAR=0.1%, and 18.07% open-set identification rates at rank 1 and FAR=1%. Accordingly we conclude that the large-scale unconstrained face recognition problem is still largely unresolved, thus further attention and effort is needed in developing effective feature representations and learning algorithms. We thereby release a benchmark tool to advance research in this field.", "title": "" }, { "docid": "d735547a7b3a79f5935f15da3e51f361", "text": "We propose a new approach for locating forged regions in a video using correlation of noise residue. In our method, block-level correlation values of noise residual are extracted as a feature for classification. We model the distribution of correlation of temporal noise residue in a forged video as a Gaussian mixture model (GMM). We propose a two-step scheme to estimate the model parameters. Consequently, a Bayesian classifier is used to find the optimal threshold value based on the estimated parameters. Two video inpainting schemes are used to simulate two different types of forgery processes for performance evaluation. Simulation results show that our method achieves promising accuracy in video forgery detection.", "title": "" }, { "docid": "dc810b43c71ab591981454ad20e34b7a", "text": "This paper proposes a real-time variable-Q non-stationary Gabor transform (VQ-NSGT) system for speech pitch shifting. The system allows for time-frequency representations of speech on variable-Q (VQ) with perfect reconstruction and computational efficiency. The proposed VQ-NSGT phase vocoder can be used for pitch shifting by simple frequency translation (transposing partials along the frequency axis) instead of spectral stretching in frequency domain by the Fourier transform. In order to retain natural sounding pitch shifted speech, a hybrid of smoothly varying Q scheme is used to retain the formant structure of the original signal at both low and high frequencies. Moreover, the preservation of transients of speech are improved due to the high time resolution of VQ-NSGT at high frequencies. 
A sliced VQ-NSGT is used to retain inter-partials phase coherence by synchronized overlap-add method. Therefore, the proposed system lends itself to real-time processing while retaining the formant structure of the original signal and inter-partial phase coherence. The simulation results showed that the proposed approach is suitable for pitch shifting of both speech and music signals.", "title": "" }, { "docid": "ff67540fcba29de05415c77744d3a21d", "text": "Using Youla Parametrization and Linear Matrix Inequalities (LMI) a Multiobjective Robust Control (MRC) design for continuous linear time invariant (LTI) systems with bounded uncertainties is described. The design objectives can be a combination of H∞-, H2-performances, constraints on the control signal, etc.. Based on an initial stabilizing controller all stabilizing controllers for the uncertain system can be described by the Youla parametrization. Given this representation, all objectives can be formulated by independent Lyapunov functions, increasing the degree of freedom for the control design.", "title": "" }, { "docid": "67e2bbbbd0820bb47f04258eb4917cc1", "text": "One of the major differences between markets that follow a \" sharing economy \" paradigm and traditional two-sided markets is that the supply side in the sharing economy often includes individual nonprofessional decision makers, in addition to firms and professional agents. Using a data set of prices and availability of listings on Airbnb, we find that there exist substantial differences in the operational and financial performance of professional and nonprofessional hosts. In particular, properties managed by professional hosts earn 16.9% more in daily revenue, have 15.5% higher occupancy rates, and are 13.6% less likely to exit the market compared with properties owned by nonprofessional hosts, while controlling for property and market characteristics. We demonstrate that these performance differences between professionals and nonprofessionals can be partly explained by pricing inefficiencies. Specifically, we provide empirical evidence that nonprofes-sional hosts are less likely to offer different rates across stay dates based on the underlying demand patterns, such as those created by major holidays and conventions. We develop a parsimonious model to analyze the implications of having two such different host groups for a profit-maximizing platform operator and for a social planner. While a profit-maximizing platform operator should charge lower prices to nonprofessional hosts, a social planner would charge the same prices to professionals and nonprofessionals.", "title": "" }, { "docid": "3250454b6363a9bb49590636d9843a92", "text": "A low precision deep neural network training technique for producing sparse, ternary neural networks is presented. The technique incorporates hardware implementation costs during training to achieve significant model compression for inference. Training involves three stages: network training using L2 regularization and a quantization threshold regularizer, quantization pruning, and finally retraining. Resulting networks achieve improved accuracy, reduced memory footprint and reduced computational complexity compared with conventional methods, on MNIST and CIFAR10 datasets. 
Our networks are up to 98% sparse and 5 & 11 times smaller than equivalent binary and ternary models, translating to significant resource and speed benefits for hardware implementations.", "title": "" }, { "docid": "87e52d72533c26f59af13aaea0ea4b7f", "text": "This study investigated the work role attachment and retirement intentions of public school teachers in Calabar, Nigeria. It was motivated by the observation that most public school workers lack plans for retirement and as such do not prepare for it until it suddenly dawns on them. Few empirical studies were reviewed. Questionnaire was the main instrument used for data collection from a sample of 200 teachers. Independent t-test was used to test the stated hypotheses at 0.05 level of significance. Results showed that the committed/attached/involved workers have retirement intention to take a part-time job after retirement. The uncommitted/unattached/uninvolved workers have intention to retire earlier than those attached to their work. It was recommended that pre-retirement counselling should be adopted to assist teachers to develop good retirement plans.", "title": "" }, { "docid": "c828195cfc88abd598d1825f69932eb0", "text": "The central purpose of passive signal intercept receivers is to perform automatic categorization of unknown radar signals. Currently, there is an urgent need to develop intelligent classification algorithms for these devices due to emerging complexity of radar waveforms. Especially multifunction radars (MFRs) capable of performing several simultaneous tasks by utilizing complex, dynamically varying scheduled waveforms are a major challenge for automatic pattern classification systems. To assist recognition of complex radar emissions in modern intercept receivers, we have developed a novel method to recognize dynamically varying pulse repetition interval (PRI) modulation patterns emitted by MFRs. We use robust feature extraction and classifier design techniques to assist recognition in unpredictable real-world signal environments. We classify received pulse trains hierarchically which allows unambiguous detection of the subpatterns using a sliding window. Accuracy, robustness and reliability of the technique are demonstrated with extensive simulations using both static and dynamically varying PRI modulation patterns.", "title": "" }, { "docid": "b23d73e29fc205df97f073eb571a2b47", "text": "In this paper, we study two different trajectory planning problems for robotmanipulators. In the first case, the end-effector of the robot is constrained to move along a prescribed path in the workspace, whereas in the second case, the trajectory of the end-effector has to be determined in the presence of obstacles. Constraints of this type are called holonomic constraints. Both problems have been solved as optimal control problems. Given the dynamicmodel of the robotmanipulator, the initial state of the system, some specifications about the final state and a set of holonomic constraints, one has to find the trajectory and the actuator torques that minimize the energy consumption during the motion. The presence of holonomic constraints makes the optimal control problem particularly difficult to solve. Our method involves a numerical resolution of a reformulation of the constrained optimal control problem into an unconstrained calculus of variations problem in which the state space constraints and the dynamic equations, also regarded as constraints, are treated by means of special derivative multipliers. 
We solve the resulting calculus of variations problem using a numerical approach based on the Euler–Lagrange necessary condition in the integral form in which time is discretized and admissible variations for each variable are approximated using a linear combination of piecewise continuous basis functions of time. The use of the Euler–Lagrange necessary condition in integral form avoids the need for numerical corner conditions and thenecessity of patching together solutions between corners. In thisway, a generalmethod for the solution of constrained optimal control problems is obtained inwhich holonomic constraints can be easily treated. Numerical results of the application of thismethod to trajectory planning of planar horizontal robot manipulators with two revolute joints are reported. © 2011 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "0bcec8496b655fffa3591d36fbd5c230", "text": "We propose a novel approach to addressing the adaptation effectiveness issue in parameter adaptation for deep neural network (DNN) based acoustic models for automatic speech recognition by adding one or more small auxiliary output layers modeling broad acoustic units, such as mono-phones or tied-state (often called senone) clusters. In scenarios with a limited amount of available adaptation data, most senones are usually rarely seen or not observed, and consequently the ability to model them in a new condition is often not fully exploited. With the original senone classification task as the primary task, and adding auxiliary mono-phone/senone-cluster classification as the secondary tasks, multi-task learning (MTL) is employed to adapt the DNN parameters. With the proposed MTL adaptation framework, we improve the learning ability of the original DNN structure, then enlarge the coverage of the acoustic space to deal with the unseen senone problem, and thus enhance the discrimination power of the adapted DNN models. Experimental results on the 20,000-word open vocabulary WSJ task demonstrate that the proposed framework consistently outperforms the conventional linear hidden layer adaptation schemes without MTL by providing 3.2% relative word error rate reduction (WERR) with only 1 single adaptation utterance, and 10.7% WERR with 40 adaptation utterances against the un-adapted DNN models.", "title": "" }, { "docid": "2afcc7c1fb9dadc3d46743c991e15bac", "text": "This paper describes the design of a robot head, developed in the framework of the RobotCub project. This project goals consists on the design and construction of a humanoid robotic platform, the iCub, for studying human cognition. The final platform would be approximately 90 cm tall, with 23 kg and with a total number of 53 degrees of freedom. For its size, the iCub is the most complete humanoid robot currently being designed, in terms of kinematic complexity. The eyes can also move, as opposed to similarly sized humanoid platforms. Specifications are made based on biological anatomical and behavioral data, as well as tasks constraints. Different concepts for the neck design (flexible, parallel and serial solutions) are analyzed and compared with respect to the specifications. 
The eye structure and the proprioceptive sensors are presented, together with some discussion of preliminary work on the face design", "title": "" }, { "docid": "a79c65e76da81044ee7e81fc40fe5f8e", "text": "Most of the equipment required is readily available in most microwave labs: a vector network analyzer, a microwave signal generator, and, of course, a sampling oscilloscope. In this paper, the authors summarize many of the corrections discussed in \" Terminology for high-speed sampling-oscilloscope calibration\" [Williams et al., 2006] and \"Magnitude and phase calibrations for RF, microwave, and high-speed digital signal measurements\" [Remley and Hale, 2007] that are necessary for metrology-grade measurements and Illustrate the application of these oscilloscopes to the characterization of microwave signals.", "title": "" }, { "docid": "25779dfc55dc29428b3939bb37c47d50", "text": "Human daily activity recognition using mobile personal sensing technology plays a central role in the field of pervasive healthcare. One major challenge lies in the inherent complexity of human body movements and the variety of styles when people perform a certain activity. To tackle this problem, in this paper, we present a novel human activity recognition framework based on recently developed compressed sensing and sparse representation theory using wearable inertial sensors. Our approach represents human activity signals as a sparse linear combination of activity signals from all activity classes in the training set. The class membership of the activity signal is determined by solving a l1 minimization problem. We experimentally validate the effectiveness of our sparse representation-based approach by recognizing nine most common human daily activities performed by 14 subjects. Our approach achieves a maximum recognition rate of 96.1%, which beats conventional methods based on nearest neighbor, naive Bayes, and support vector machine by as much as 6.7%. Furthermore, we demonstrate that by using random projection, the task of looking for “optimal features” to achieve the best activity recognition performance is less important within our framework.", "title": "" }, { "docid": "c4aafcc0a98882de931713359e55a04a", "text": "We present a computer vision tool that analyses video from a CCTV system installed on fishing trawlers to monitor discarded fish catch. The system aims to support expert observers who review the footage and verify numbers, species and sizes of discarded fish. The operational environment presents a significant challenge for these tasks. Fish are processed below deck under fluorescent lights, they are randomly oriented and there are multiple occlusions. The scene is unstructured and complicated by the presence of fishermen processing the catch. We describe an approach to segmenting the scene and counting fish that exploits the N4-Fields algorithm. We performed extensive tests of the algorithm on a data set comprising 443 frames from 6 belts. Results indicate the relative count error (for individual fish) ranges from 2% to 16%. We believe this is the first system that is able to handle footage from operational trawlers.", "title": "" }, { "docid": "5546cbb6fac77d2d9fffab8ba0a50ed8", "text": "The next-generation electric power systems (smart grid) are studied intensively as a promising solution for energy crisis. One important feature of the smart grid is the integration of high-speed, reliable and secure data communication networks to manage the complex power systems effectively and intelligently. 
We provide in this paper a comprehensive survey on the communication architectures in the power systems, including the communication network compositions, technologies, functions, requirements, and research challenges. As these communication networks are responsible for delivering power system related messages, we discuss specifically the network implementation considerations and challenges in the power system settings. This survey attempts to summarize the current state of research efforts in the communication networks of smart grid, which may help us identify the research problems in the continued studies.", "title": "" } ]
scidocsrr
f461458407838e67950f57dc87fdc98a
Like It or Not: A Survey of Twitter Sentiment Analysis Methods
[ { "docid": "355fca41993ea19b08d2a9fc19e25722", "text": "People and companies selling goods or providing services have always desired to know what people think about their products. The number of opinions on the Web has significantly increased with the emergence of microblogs. In this paper we present a novel method for sentiment analysis of a text that allows the recognition of opinions in microblogs which are connected to a particular target or an entity. This method differs from other approaches in utilizing appraisal theory, which we employ for the analysis of microblog posts. The results of the experiments we performed on Twitter showed that our method improves sentiment classification and is feasible even for such specific content as presented on microblogs.", "title": "" } ]
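As a deliberately simple illustration of target-dependent sentiment scoring on microblog text, the sketch below is a naive lexicon-and-window baseline, not the appraisal-theory method described in the passage above; the word lists, window size, and scoring rule are assumptions made only for the example. It shows the basic mechanism of attributing opinion words to a particular target rather than to the tweet as a whole.

# Naive target-window polarity scorer (illustrative baseline only).
POSITIVE = {"love", "great", "awesome", "fast", "reliable"}
NEGATIVE = {"hate", "slow", "terrible", "broken", "awful"}

def target_sentiment(tweet: str, target: str, window: int = 4) -> int:
    # Score opinion words that occur within `window` tokens of the target.
    # Returns >0 for positive, <0 for negative, 0 if neutral or target absent.
    tokens = tweet.lower().split()
    hits = [i for i, tok in enumerate(tokens) if target.lower() in tok]
    score = 0
    for i in hits:
        for tok in tokens[max(0, i - window): i + window + 1]:
            word = tok.strip(".,!?")
            if word in POSITIVE:
                score += 1
            elif word in NEGATIVE:
                score -= 1
    return score

# The positive word "love" is outside the window around "battery", so only the
# nearby negative word counts toward the target's score.
print(target_sentiment("I love my new phone but the battery is terrible", "battery"))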
[ { "docid": "460a296de1bd13378d71ce19ca5d807a", "text": "Many books discuss applications of data mining. For financial data analysis and financial modeling, see Benninga and Czaczkes [BC00] and Higgins [Hig03]. For retail data mining and customer relationship management, see books by Berry and Linoff [BL04] and Berson, Smith, and Thearling [BST99], and the article by Kohavi [Koh01]. For telecommunication-related data mining, see the book by Mattison [Mat97]. Chen, Hsu, and Dayal [CHD00] reported their work on scalable telecommunication tandem traffic analysis under a data warehouse/OLAP framework. For bioinformatics and biological data analysis, there are a large number of introductory references and textbooks. An introductory overview of bioinformatics for computer scientists was presented by Cohen [Coh04]. Recent textbooks on bioinformatics include Krane and Raymer [KR03], Jones and Pevzner [JP04], Durbin, Eddy, Krogh and Mitchison [DEKM98], Setubal and Meidanis [SM97], Orengo, Jones, and Thornton [OJT03], and Pevzner [Pev03]. Summaries of biological data analysis methods and algorithms can also be found in many other books, such as Gusfield [Gus97], Waterman [Wat95], Baldi and Brunak [BB01], and Baxevanis and Ouellette [BO04]. There are many books on scientific data analysis, such as Grossman, Kamath, Kegelmeyer, et al. (eds.) [GKK01]. For geographic data mining, see the book edited by Miller and Han [MH01]. Valdes-Perez [VP99] discusses the principles of human-computer collaboration for knowledge discovery in science. For intrusion detection, see Barbará [Bar02] and Northcutt and Novak [NN02].", "title": "" }, { "docid": "4933a947f4b0b9a0ca506d50f2010eaf", "text": "For integers <i>k</i>≥1 and <i>n</i>≥2<i>k</i>+1, the <em>Kneser graph</em> <i>K</i>(<i>n</i>,<i>k</i>) is the graph whose vertices are the <i>k</i>-element subsets of {1,…,<i>n</i>} and whose edges connect pairs of subsets that are disjoint. The Kneser graphs of the form <i>K</i>(2<i>k</i>+1,<i>k</i>) are also known as the <em>odd graphs</em>. We settle an old problem due to Meredith, Lloyd, and Biggs from the 1970s, proving that for every <i>k</i>≥3, the odd graph <i>K</i>(2<i>k</i>+1,<i>k</i>) has a Hamilton cycle. This and a known conditional result due to Johnson imply that all Kneser graphs of the form <i>K</i>(2<i>k</i>+2<sup><i>a</i></sup>,<i>k</i>) with <i>k</i>≥3 and <i>a</i>≥0 have a Hamilton cycle. We also prove that <i>K</i>(2<i>k</i>+1,<i>k</i>) has at least 2<sup>2<sup><i>k</i>−6</sup></sup> distinct Hamilton cycles for <i>k</i>≥6. Our proofs are based on a reduction of the Hamiltonicity problem in the odd graph to the problem of finding a spanning tree in a suitably defined hypergraph on Dyck words.", "title": "" }, { "docid": "1f1a8f5f7612e131ce7b99c13aa4d5db", "text": "Background subtraction can be treated as the binary classification problem of highlighting the foreground region in a video whilst masking the background region, and has been broadly applied in various vision tasks such as video surveillance and traffic monitoring. However, it still remains a challenging task due to complex scenes and for lack of the prior knowledge about the temporal information. In this paper, we propose a novel background subtraction model based on 3D convolutional neural networks (3D CNNs) which combines temporal and spatial information to effectively separate the foreground from all the sequences in an end-to-end manner. 
Different from conventional models, we view background subtraction as three-class classification problem, i.e., the foreground, the background and the boundary. This design can obtain more reasonable results than existing baseline models. Experiments on the Change Detection 2012 dataset verify the potential of our model in both quantity and quality.", "title": "" }, { "docid": "afc5259cfa23aa94dd032127d147dde9", "text": "This paper is a reflection of our experience with the specification and subsequent execution of model transformations in the QVT core and Relations languages. Since this technology for executing transformations written in high-level, declarative specification languages is of very recent date, we observe that there is little knowledge available on how to write such declarative model transformations. Consequently, there is a need for a body of knowledge on transformation engineering. With this paper we intend to make an initial contribution to this emerging discipline. Based on our experiences we propose a number of useful design patterns for transformation specification. In addition we provide a method for specifying such transformation patterns in QVT, such that others can add their own patterns to a catalogue and the body of knowledge can grow as experience is built up. Finally, we illustrate how these patterns can be used in the specification of complex transformations.", "title": "" }, { "docid": "4eb937f806ca01268b5ed1348d0cc40c", "text": "The paradigms of transformational planning, case-based planning, and plan debugging all involve a process known as plan adaptation | modifying or repairing an old plan so it solves a new problem. In this paper we provide a domain-independent algorithm for plan adaptation, demonstrate that it is sound, complete, and systematic, and compare it to other adaptation algorithms in the literature. Our approach is based on a view of planning as searching a graph of partial plans. Generative planning starts at the graph's root and moves from node to node using planre nement operators. In planning by adaptation, a library plan|an arbitrary node in the plan graph|is the starting point for the search, and the plan-adaptation algorithm can apply both the same re nement operators available to a generative planner and can also retract constraints and steps from the plan. Our algorithm's completeness ensures that the adaptation algorithm will eventually search the entire graph and its systematicity ensures that it will do so without redundantly searching any parts of the graph.", "title": "" }, { "docid": "4dd28201b87acf7705ea91f9e9e4a330", "text": "Because individual crowd workers often exhibit high variance in annotation accuracy, we often ask multiple crowd workers to label each example to infer a single consensus label. While simple majority vote computes consensus by equally weighting each worker’s vote, weighted voting assigns greater weight to more accurate workers, where accuracy is estimated by inner-annotator agreement (unsupervised) and/or agreement with known expert labels (supervised). In this paper, we investigate the annotation cost vs. consensus accuracy benefit from increasing the amount of expert supervision. To maximize benefit from supervision, we propose a semi-supervised approach which infers consensus labels using both labeled and unlabeled examples. We compare our semi-supervised approach with several existing unsupervised and supervised baselines, evaluating on both synthetic data and Amazon Mechanical Turk data. 
Results show (a) a very modest amount of supervision can provide significant benefit, and (b) consensus accuracy from full supervision with a large amount of labeled data is matched by our semi-supervised approach with much less supervision.", "title": "" }, { "docid": "a52a90bb69f303c4a31e4f24daf609e6", "text": "The effects of Arctium lappa L. (root) on anti-inflammatory and free radical scavenger activity were investigated. Subcutaneous administration of A. lappa crude extract significantly decreased carrageenan-induced rat paw edema. When simultaneously treated with CCl4, it produced pronounced activities against CCl4-induced acute liver damage. The free radical scavenging activity of its crude extract was also examined by means of an electron spin resonance (ESR) spectrometer. The IC50 of A. lappa extract on superoxide and hydroxyl radical scavenger activity was 2.06 mg/ml and 11.8 mg/ml, respectively. These findings suggest that Arctium lappa possess free radical scavenging activity. The inhibitory effects on carrageenan-induced paw edema and CCl4-induced hepatotoxicity could be due to the scavenging effect of A. lappa.", "title": "" }, { "docid": "a078933ffbb2f0488b3b425b78fb7dd0", "text": "Semantic Role Labeling (SRL) has become one of the standard tasks of natural language processing and proven useful as a source of information for a number of other applications. We address the problem of transferring an SRL model from one language to another using a shared feature representation. This approach is then evaluated on three language pairs, demonstrating competitive performance as compared to a state-of-the-art unsupervised SRL system and a cross-lingual annotation projection baseline. We also consider the contribution of different aspects of the feature representation to the performance of the model and discuss practical applicability of this method. 1 Background and Motivation Semantic role labeling has proven useful in many natural language processing tasks, such as question answering (Shen and Lapata, 2007; Kaisser and Webber, 2007), textual entailment (Sammons et al., 2009), machine translation (Wu and Fung, 2009; Liu and Gildea, 2010; Gao and Vogel, 2011) and dialogue systems (Basili et al., 2009; van der Plas et al., 2009). Multiple models have been designed to automatically predict semantic roles, and a considerable amount of data has been annotated to train these models, if only for a few more popular languages. As the annotation is costly, one would like to leverage existing resources to minimize the human effort required to construct a model for a new language. A number of approaches to the construction of semantic role labeling models for new languages have been proposed. On one end of the scale is unsupervised SRL, such as Grenager and Manning (2006), which requires some expert knowledge, but no labeled data. It clusters together arguments that should bear the same semantic role, but does not assign a particular role to each cluster. On the other end is annotating a new dataset from scratch. There are also intermediate options, which often make use of similarities between languages. This way, if an accurate model exists for one language, it should help simplify the construction of a model for another, related language. The approaches in this third group often use parallel data to bridge the gap between languages. Cross-lingual annotation projection systems (Padó and Lapata, 2009), for example, propagate information directly via word alignment links. 
However, they are very sensitive to the quality of parallel data, as well as the accuracy of a sourcelanguage model on it. An alternative approach, known as cross-lingual model transfer, or cross-lingual model adaptation, consists of modifying a source-language model to make it directly applicable to a new language. This usually involves constructing a shared feature representation across the two languages. McDonald et al. (2011) successfully apply this idea to the transfer of dependency parsers, using part-ofspeech tags as the shared representation of words. A later extension of Täckström et al. (2012) enriches this representation with cross-lingual word clusters, considerably improving the performance. In the case of SRL, a shared representation that is purely syntactic is likely to be insufficient, since structures with different semantics may be realized by the same syntactic construct, for example “in August” vs “in Britain”. However with the help of recently introduced cross-lingual word represen-", "title": "" }, { "docid": "11ad0993b62e016175638d80f9acd694", "text": "Progressive macular hypomelanosis (PMH) is a skin disorder that is characterized by hypopigmented macules and usually seen in young adults. The skin microbiota, in particular the bacterium Propionibacterium acnes, is suggested to play a role. Here, we compared the P. acnes population of 24 PMH lesions from eight patients with corresponding nonlesional skin of the patients and matching control samples from eight healthy individuals using an unbiased, culture-independent next-generation sequencing approach. We also compared the P. acnes population before and after treatment with a combination of lymecycline and benzoylperoxide. We found an association of one subtype of P. acnes, type III, with PMH. This type was predominant in all PMH lesions (73.9% of reads in average) but only detected as a minor proportion in matching control samples of healthy individuals (14.2% of reads in average). Strikingly, successful PMH treatment is able to alter the composition of the P. acnes population by substantially diminishing the proportion of P. acnes type III. Our study suggests that P. acnes type III may play a role in the formation of PMH. Furthermore, it sheds light on substantial differences in the P. acnes phylotype distribution between the upper and lower back and abdomen in healthy individuals.", "title": "" }, { "docid": "d4acd79e2fdbc9b87b2dbc6ebfa2dd43", "text": "Airbnb, an online marketplace for accommodations, has experienced a staggering growth accompanied by intense debates and scattered regulations around the world. Current discourses, however, are largely focused on opinions rather than empirical evidences. Here, we aim to bridge this gap by presenting the first large-scale measurement study on Airbnb, using a crawled data set containing 2.3 million listings, 1.3 million hosts, and 19.3 million reviews. We measure several key characteristics at the heart of the ongoing debate and the sharing economy. Among others, we find that Airbnb has reached a global yet heterogeneous coverage. The majority of its listings across many countries are entire homes, suggesting that Airbnb is actually more like a rental marketplace rather than a spare-room sharing platform. Analysis on star-ratings reveals that there is a bias toward positive ratings, amplified by a bias toward using positive words in reviews. The extent of such bias is greater than Yelp reviews, which were already shown to exhibit a positive bias. 
We investigate a key issue - commercial hosts who own multiple listings on Airbnb - repeatedly discussed in the current debate. We find that their existence is prevalent, they are early movers towards joining Airbnb, and their listings are disproportionately entire homes and located in the US. Our work advances the current understanding of how Airbnb is being used and may serve as an independent and empirical reference to inform the debate.", "title": "" }, { "docid": "887665ab7f043987b3373628d9cf6021", "text": "In isolated converter, transformer is a main path of common mode current. Methods of how to reduce the noise through transformer have been widely studied. One effective technique is using shield between primary and secondary winding. In this paper, EMI noise transferring path and EMI model for typical isolated converters are analyzed. And the survey about different methods of shielding is discussed. Their pros and cons are analyzed. Then the balance concept is introduced and our proposed double shielding using balance concept for wire winding transformer is raised. It can control the parasitic capacitance accurately and is easy to manufacturing. Next, a newly proposed single layer shielding for PCB winding transformer is discussed. The experiment results are provided to verify the methods.", "title": "" }, { "docid": "0281c96d3990df1159d58c6b5707b1ad", "text": "In the Big Data community, MapReduce has been seen as one of the key enabling approaches for meeting continuously increasing demands on computing resources imposed by massive data sets. The reason for this is the high scalability of the MapReduce paradigm which allows for massively parallel and distributed execution over a large number of computing nodes. This paper identifies MapReduce issues and challenges in handling Big Data with the objective of providing an overview of the field, facilitating better planning and management of Big Data projects, and identifying opportunities for future research in this field. The identified challenges are grouped into four main categories corresponding to Big Data tasks types: data storage (relational databases and NoSQL stores), Big Data analytics (machine learning and interactive analytics), online processing, and security and privacy. Moreover, current efforts aimed at improving and extending MapReduce to address identified challenges are presented. Consequently, by identifying issues and challenges MapReduce faces when handling Big Data, this study encourages future Big Data research.", "title": "" }, { "docid": "a354949d97de673e71510618a604e264", "text": "Fast Magnetic Resonance Imaging (MRI) is highly in demand for many clinical applications in order to reduce the scanning cost and improve the patient experience. This can also potentially increase the image quality by reducing the motion artefacts and contrast washout. However, once an image field of view and the desired resolution are chosen, the minimum scanning time is normally determined by the requirement of acquiring sufficient raw data to meet the Nyquist–Shannon sampling criteria. Compressive Sensing (CS) theory has been perfectly matched to the MRI scanning sequence design with much less required raw data for the image reconstruction. 
Inspired by recent advances in deep learning for solving various inverse problems, we propose a conditional Generative Adversarial Networks-based deep learning framework for de-aliasing and reconstructing MRI images from highly undersampled data with great promise to accelerate the data acquisition process. By coupling an innovative content loss with the adversarial loss our de-aliasing results are more realistic. Furthermore, we propose a refinement learning procedure for training the generator network, which can stabilise the training with fast convergence and less parameter tuning. We demonstrate that the proposed framework outperforms state-of-the-art CS-MRI methods, in terms of reconstruction error and perceptual image quality. In addition, our method can reconstruct each image in 0.22ms–0.37ms, which is promising for real-time applications.", "title": "" }, { "docid": "4997de0d1663a8362fb47abcf9e34df9", "text": "Our goal is to segment a video sequence into moving objects and the world scene. In recent work, spectral embedding of point trajectories based on 2D motion cues accumulated from their lifespans, has shown to outperform factorization and per frame segmentation methods for video segmentation. The scale and kinematic nature of the moving objects and the background scene determine how close or far apart trajectories are placed in the spectral embedding. Such density variations may confuse clustering algorithms, causing over-fragmentation of object interiors. Therefore, instead of clustering in the spectral embedding, we propose detecting discontinuities of embedding density between spatially neighboring trajectories. Detected discontinuities are strong indicators of object boundaries and thus valuable for video segmentation. We propose a novel embedding discretization process that recovers from over-fragmentations by merging clusters according to discontinuity evidence along inter-cluster boundaries. For segmenting articulated objects, we combine motion grouping cues with a center-surround saliency operation, resulting in “context-aware”, spatially coherent, saliency maps. Figure-ground segmentation obtained from saliency thresholding, provides object connectedness constraints that alter motion based trajectory affinities, by keeping articulated parts together and separating disconnected in time objects. Finally, we introduce Gabriel graphs as effective per frame superpixel maps for converting trajectory clustering to dense image segmentation. Gabriel edges bridge large contour gaps via geometric reasoning without over-segmenting coherent image regions. We present experimental results of our method that outperform the state-of-the-art in challenging motion segmentation datasets.", "title": "" }, { "docid": "774bf4b0a2c8fe48607e020da2737041", "text": "A class of three-dimensional planar arrays in substrate integrated waveguide (SIW) technology is proposed, designed and demonstrated with 8 × 16 elements at 35 GHz for millimeter-wave imaging radar system applications. Endfire element is generally chosen to ensure initial high gain and broadband characteristics for the array. Fermi-TSA (tapered slot antenna) structure is used as element to reduce the beamwidth. Corrugation is introduced to reduce the resulting antenna physical width without degradation of performance. The achieved measured gain in our demonstration is about 18.4 dBi. A taper shaped air gap in the center is created to reduce the coupling between two adjacent elements. 
An SIW H-to-E-plane vertical interconnect is proposed in this three-dimensional architecture and optimized to connect eight 1 × 16 planar array sheets to the 1 × 8 final network. The overall architecture is exclusively fabricated by the conventional PCB process. Thus, the developed SIW feeder leads to a significant reduction in both weight and cost, compared to the metallic waveguide-based counterpart. A complete antenna structure is designed and fabricated. The planar array ensures a gain of 27 dBi with low SLL of 26 dB and beamwidth as narrow as 5.15 degrees in the E-plane and 6.20 degrees in the 45°-plane.", "title": "" }, { "docid": "3888dd754c9f7607d7a4cc2f4a436aac", "text": "We propose a distributed algorithm to estimate the 3D trajectories of multiple cooperative robots from relative pose measurements. Our approach leverages recent results [1] which show that the maximum likelihood trajectory is well approximated by a sequence of two quadratic subproblems. The main contribution of the present work is to show that these subproblems can be solved in a distributed manner, using the distributed Gauss-Seidel (DGS) algorithm. Our approach has several advantages. It requires minimal information exchange, which is beneficial in presence of communication and privacy constraints. It has an anytime flavor: after few iterations the trajectory estimates are already accurate, and they asymptotically convergence to the centralized estimate. The DGS approach scales well to large teams, and it has a straightforward implementation. We test the approach in simulations and field tests, demonstrating its advantages over related techniques.", "title": "" }, { "docid": "9420760d6945440048cee3566ce96699", "text": "In this work, we develop a computer vision based fall prevention system for hospital ward application. To prevent potential falls, once the event of patient get up from the bed is automatically detected, nursing staffs are alarmed immediately for assistance. For the detection task, we use a RGBD sensor (Microsoft Kinect). The geometric prior knowledge is exploited by identifying a set of task-specific feature channels, e.g., regions of interest. Extensive motion and shape features from both color and depth image sequences are extracted. Features from multiple modalities and channels are fused via a multiple kernel learning framework for training the event detector. Experimental results demonstrate the high accuracy and efficiency achieved by the proposed system.", "title": "" }, { "docid": "9d8f18265d729a98553f89a8b337e6a0", "text": "Scalable Network Forensics by Matthias Vallentin Doctor of Philosophy in Computer Science University of California, Berkeley Professor Vern Paxson, Chair Network forensics and incident response play a vital role in site operations, but for large networks can pose daunting difficulties to cope with the ever-growing volume of activity and resulting logs. On the one hand, logging sources can generate tens of thousands of events per second, which a system supporting comprehensive forensics must somehow continually ingest. On the other hand, operators greatly benefit from interactive exploration of disparate types of activity when analyzing an incident, which often leaves network operators scrambling to ferret out answers to key questions: How did the attackers get in? What did they do once inside? Where did they come from? What activity patterns serve as indicators reflecting their presence? How do we prevent this attack in the future? 
Operators can only answer such questions by drawing upon high-quality descriptions of past activity recorded over extended time. A typical analysis starts with a narrow piece of intelligence, such as a local system exhibiting questionable behavior, or a report from another site describing an attack they detected. The analyst then tries to locate the described behavior by examining past activity, often cross-correlating information of different types to build up additional context. Frequently, this process in turn produces new leads to explore iteratively (“peeling the onion”), continuing and expanding until ultimately the analyst converges on as complete of an understanding of the incident as they can extract from the available information. This process, however, remains manual and time-consuming, as no single storage system efficiently integrates the disparate sources of data that investigations often involve. While standard Security Information and Event Management (SIEM) solutions aggregate logs from different sources into a single database, their data models omit crucial semantics, and they struggle to scale to the data rates that large-scale environments require.", "title": "" }, { "docid": "b7da2182bbdf69c46ffba20b272fab02", "text": "Social Media is playing a key role in today's society. Many of the events that are taking place in diverse human activities could be explained by the study of these data. Big Data is a relatively new parading in Computer Science that is gaining increasing interest by the scientific community. Big Data Predictive Analytics is a Big Data discipline that is mostly used to analyze what is in the huge amounts of data and then perform predictions based on such analysis using advanced mathematics and computing techniques. The study of Social Media Data involves disciplines like Natural Language Processing, by the integration of this area to academic studies, useful findings have been achieved. Social Network Rating Systems are online platforms that allow users to know about goods and services, the way in how users review and rate their experience is a field of evolving research. This paper presents a deep investigation in the state of the art of these areas to discover and analyze the current status of the research that has been developed so far by academics of diverse background.", "title": "" } ]
scidocsrr
803392004352b72103594ea25acf9906
Controller design for a bipedal walking robot using variable stiffness actuators
[ { "docid": "2997be0d8b1f7a183e006eba78135b13", "text": "The basic mechanics of human locomotion are associated with vaulting over stiff legs in walking and rebounding on compliant legs in running. However, while rebounding legs well explain the stance dynamics of running, stiff legs cannot reproduce that of walking. With a simple bipedal spring-mass model, we show that not stiff but compliant legs are essential to obtain the basic walking mechanics; incorporating the double support as an essential part of the walking motion, the model reproduces the characteristic stance dynamics that result in the observed small vertical oscillation of the body and the observed out-of-phase changes in forward kinetic and gravitational potential energies. Exploring the parameter space of this model, we further show that it not only combines the basic dynamics of walking and running in one mechanical system, but also reveals these gaits to be just two out of the many solutions to legged locomotion offered by compliant leg behaviour and accessed by energy or speed.", "title": "" } ]
[ { "docid": "8a9489ed62cfa4169b53647b7a51d979", "text": "We present MAESTRO, a framework to describe and analyze CNN dataflows, and predict performance and energy-efficiency when running neural network layers across various hardware configurations. This includes two components: (i) a concise language to describe arbitrary dataflows and (ii) and analysis framework that accepts the dataflow description, hardware resource description, and DNN layer description as inputs and generates buffer requirements, buffer access counts, network-on-chip (NoC) bandwidth requirements, and roofline performance information. We demonstrate both components across several dataflows as case studies.", "title": "" }, { "docid": "42faf2c0053c9f6a0147fc66c8e4c122", "text": "IN 1921, Gottlieb's discovery of the epithelial attachment of the gingiva opened new horizons which served as the basis for a better understanding of the biology of the dental supporting tissues in health and disease. Three years later his pupils, Orban and Kohler (1924), undertook the task of measuring the epithelial attachment as well as the surrounding tissue relations during the four phases of passive eruption of the tooth. Gottlieb and Orban's descriptions of the epithelial attachment unveiled the exact morphology of this epithelial structure, and clarified the relation of this", "title": "" }, { "docid": "1ceab925041160f17163940360354c55", "text": "A complete reconstruction of D.H. Lehmer’s ENIAC set-up for computing the exponents of p modulo 2 is given. This program served as an early test program for the ENIAC (1946). The reconstruction illustrates the difficulties of early programmers to find a way between a man operated and a machine operated computation. These difficulties concern both the content level (the algorithm) and the formal level (the logic of sequencing operations).", "title": "" }, { "docid": "f734b6fc215e8da00641820a2b627be9", "text": "We propose a novel traffic sign detection system that simultaneously estimates the location and precise boundary of traffic signs using convolutional neural network (CNN). Estimating the precise boundary of traffic signs is important in navigation systems for intelligent vehicles where traffic signs can be used as 3-D landmarks for road environment. Previous traffic sign detection systems, including recent methods based on CNN, only provide bounding boxes of traffic signs as output, and thus requires additional processes such as contour estimation or image segmentation to obtain the precise boundary of signs. In this paper, the boundary estimation of traffic sign is formulated as 2-D pose and shape class prediction problem, and this is effectively solved by a single CNN. With the predicted 2-D pose and the shape class of a target traffic sign in the input, we estimate the actual boundary of the target sign by projecting the boundary of a corresponding template sign image into the input image plane. By formulating the boundary estimation problem as a CNN-based pose and shape prediction task, our method is end-to-end trainable, and more robust to occlusion and small targets than other boundary estimation methods that rely on contour estimation or image segmentation. 
With our architectural optimization of the CNN-based traffic sign detection network, the proposed method shows a detection frame rate higher than seven frames/second while providing highly accurate and robust traffic sign detection and boundary estimation results on a low-power mobile platform.", "title": "" }, { "docid": "b5bb280c7ce802143a86b9261767d9a6", "text": "Existing person re-identification (re-id) benchmarks and algorithms mainly focus on matching cropped pedestrian images between queries and candidates. However, it is different from real-world scenarios where the annotations of pedestrian bounding boxes are unavailable and the target person needs to be found from whole images. To close the gap, we investigate how to localize and match query persons from the scene images without relying on the annotations of candidate boxes. Instead of breaking it down into two separate tasks—pedestrian detection and person re-id, we propose an end-to-end deep learning framework to jointly handle both tasks. A random sampling softmax loss is proposed to effectively train the model under the supervision of sparse and unbalanced labels. On the other hand, existing benchmarks are small in scale and the samples are collected from a few fixed camera views with low scene diversities. To address this issue, we collect a largescale and scene-diversified person search dataset, which contains 18,184 images, 8,432 persons, and 99,809 annotated bounding boxes1. We evaluate our approach and other baselines on the proposed dataset, and study the influence of various factors. Experiments show that our method achieves the best result.", "title": "" }, { "docid": "1768ecf6a2d8a42ea701d7f242edb472", "text": "Satisfaction prediction is one of the prime concerns in search performance evaluation. It is a non-trivial task for two major reasons: (1) The definition of satisfaction is rather subjective and different users may have different opinions in satisfaction judgement. (2) Most existing studies on satisfaction prediction mainly rely on users' click-through or query reformulation behaviors but there are many sessions without such kind of interactions. To shed light on these research questions, we construct an experimental search engine that could collect users' satisfaction feedback as well as mouse click-through/movement data. Different from existing studies, we compare for the first time search users' and external assessors' opinions on satisfaction. We find that search users pay more attention to the utility of results while external assessors emphasize on the efforts spent in search sessions. Inspired by recent studies in predicting result relevance based on mouse movement patterns (namely motifs), we propose to estimate the utilities of search results and the efforts in search sessions with motifs extracted from mouse movement data on search result pages (SERPs). Besides the existing frequency-based motif selection method, two novel selection strategies (distance-based and distribution-based) are also adopted to extract high quality motifs for satisfaction prediction. 
Experimental results on over 1,000 user sessions show that the proposed strategies outperform existing methods and also have promising generalization capability for different users and queries.", "title": "" }, { "docid": "7a72f69ad4926798e12f6fa8e598d206", "text": "In this work, we revisit atrous convolution, a powerful tool to explicitly adjust filter’s field-of-view as well as control the resolution of feature responses computed by Deep Convolutional Neural Networks, in the application of semantic image segmentation. To handle the problem of segmenting objects at multiple scales, we design modules which employ atrous convolution in cascade or in parallel to capture multi-scale context by adopting multiple atrous rates. Furthermore, we propose to augment our previously proposed Atrous Spatial Pyramid Pooling module, which probes convolutional features at multiple scales, with image-level features encoding global context and further boost performance. We also elaborate on implementation details and share our experience on training our system. The proposed ‘DeepLabv3’ system significantly improves over our previous DeepLab versions without DenseCRF post-processing and attains comparable performance with other state-of-art models on the PASCAL VOC 2012 semantic image segmentation benchmark.", "title": "" }, { "docid": "860bfe5785eaa759036121e63369c0e8", "text": "In this paper, a robust high speed low input impedance CMOS current comparator is proposed. The circuit uses modified Wilson current-mirror to perform a current subtraction. Negative feedback is employed to reduce input impedances of the circuit. The diode connected transistors of the same type (NMOS) are used at the output making the circuit immune to the process variation. HSPICE is used to verify the circuit performance and the results show the propagation delay of 1.67 nsec with an average power dissipation of 0.63 mW using a standard 0.5 /spl mu/m CMOS technology for an input current of /spl plusmn/0.1 /spl mu/A at the supply voltage of 3 V. The input impedances of the proposed current comparator are 123 /spl Omega/ and 126 /spl Omega/ while the maximum output voltage variation is only 1.9%.", "title": "" }, { "docid": "085ec38c3e756504be93ac0b94483cea", "text": "Low power wide area (LPWA) networks are making spectacular progress from design, standardization, to commercialization. At this time of fast-paced adoption, it is of utmost importance to analyze how well these technologies will scale as the number of devices connected to the Internet of Things inevitably grows. In this letter, we provide a stochastic geometry framework for modeling the performance of a single gateway LoRa network, a leading LPWA technology. Our analysis formulates the unique peculiarities of LoRa, including its chirp spread-spectrum modulation technique, regulatory limitations on radio duty cycle, and use of ALOHA protocol on top, all of which are not as common in today’s commercial cellular networks. We show that the coverage probability drops exponentially as the number of end-devices grows due to interfering signals using the same spreading sequence. We conclude that this fundamental limiting factor is perhaps more significant toward LoRa scalability than for instance spectrum restrictions. 
Our derivations for co-spreading factor interference found in LoRa networks enables rigorous scalability analysis of such networks.", "title": "" }, { "docid": "93bebbc1112dbfd34fce1b3b9d228f9a", "text": "UNLABELLED\nThere has been no established qualitative system of interpretation for therapy response assessment using PET/CT for head and neck cancers. The objective of this study was to validate the Hopkins interpretation system to assess therapy response and survival outcome in head and neck squamous cell cancer patients (HNSCC).\n\n\nMETHODS\nThe study included 214 biopsy-proven HNSCC patients who underwent a posttherapy PET/CT study, between 5 and 24 wk after completion of treatment. The median follow-up was 27 mo. PET/CT studies were interpreted by 3 nuclear medicine physicians, independently. The studies were scored using a qualitative 5-point scale, for the primary tumor, for the right and left neck, and for overall assessment. Scores 1, 2, and 3 were considered negative for tumors, and scores 4 and 5 were considered positive for tumors. The Cohen κ coefficient (κ) was calculated to measure interreader agreement. Overall survival (OS) and progression-free survival (PFS) were analyzed by Kaplan-Meier plots with a Mantel-Cox log-rank test and Gehan Breslow Wilcoxon test for comparisons.\n\n\nRESULTS\nOf the 214 patients, 175 were men and 39 were women. There was 85.98%, 95.33%, 93.46%, and 87.38% agreement between the readers for overall, left neck, right neck, and primary tumor site response scores, respectively. The corresponding κ coefficients for interreader agreement between readers were, 0.69-0.79, 0.68-0.83, 0.69-0.87, and 0.79-0.86 for overall, left neck, right neck, and primary tumor site response, respectively. The sensitivity, specificity, positive predictive value, negative predictive value, and overall accuracy of the therapy assessment were 68.1%, 92.2%, 71.1%, 91.1%, and 86.9%, respectively. Cox multivariate regression analysis showed human papillomavirus (HPV) status and PET/CT interpretation were the only factors associated with PFS and OS. Among the HPV-positive patients (n = 123), there was a significant difference in PFS (hazard ratio [HR], 0.14; 95% confidence interval, 0.03-0.57; P = 0.0063) and OS (HR, 0.01; 95% confidence interval, 0.00-0.13; P = 0.0006) between the patients who had a score negative for residual tumor versus positive for residual tumor. A similar significant difference was observed in PFS and OS for all patients. There was also a significant difference in the PFS of patients with PET-avid residual disease in one site versus multiple sites in the neck (HR, 0.23; log-rank P = 0.004).\n\n\nCONCLUSION\nThe Hopkins 5-point qualitative therapy response interpretation criteria for head and neck PET/CT has substantial interreader agreement and excellent negative predictive value and predicts OS and PFS in patients with HPV-positive HNSCC.", "title": "" }, { "docid": "f82eb2d4cc45577f08c7e867bf012816", "text": "OBJECTIVE\nThe purpose of this study was to compare the retrieval characteristics of the Option Elite (Argon Medical, Plano, Tex) and Denali (Bard, Tempe, Ariz) retrievable inferior vena cava filters (IVCFs), two filters that share a similar conical design.\n\n\nMETHODS\nA single-center, retrospective study reviewed all Option and Denali IVCF removals during a 36-month period. 
Attempted retrievals were classified as advanced if the routine \"snare and sheath\" technique was initially unsuccessful despite multiple attempts or an alternative endovascular maneuver or access site was used. Patient and filter characteristics were documented.\n\n\nRESULTS\nIn our study, 63 Option and 45 Denali IVCFs were retrieved, with an average dwell time of 128.73 and 99.3 days, respectively. Significantly higher median fluoroscopy times were experienced in retrieving the Option filter compared with the Denali filter (12.18 vs 6.85 minutes; P = .046). Use of adjunctive techniques was also higher in comparing the Option filter with the Denali filter (19.0% vs 8.7%; P = .079). No significant difference was noted between these groups in regard to gender, age, or history of malignant disease.\n\n\nCONCLUSIONS\nOption IVCF retrieval procedures required significantly longer retrieval fluoroscopy time compared with Denali IVCFs. Although procedure time was not analyzed in this study, as a surrogate, the increased fluoroscopy time may also have an impact on procedural direct costs and throughput.", "title": "" }, { "docid": "eb4f7427eb73ac0a0486e8ecb2172b52", "text": "In this work we propose the use of a modified version of the correlation coefficient as a performance criterion for the image alignment problem. The proposed modification has the desirable characteristic of being invariant with respect to photometric distortions. Since the resulting similarity measure is a nonlinear function of the warp parameters, we develop two iterative schemes for its maximization, one based on the forward additive approach and the second on the inverse compositional method. As it is customary in iterative optimization, in each iteration the nonlinear objective function is approximated by an alternative expression for which the corresponding optimization is simple. In our case we propose an efficient approximation that leads to a closed form solution (per iteration) which is of low computational complexity, the latter property being particularly strong in our inverse version. The proposed schemes are tested against the forward additive Lucas-Kanade and the simultaneous inverse compositional algorithm through simulations. Under noisy conditions and photometric distortions our forward version achieves more accurate alignments and exhibits faster convergence whereas our inverse version has similar performance as the simultaneous inverse compositional algorithm but at a lower computational complexity.", "title": "" }, { "docid": "2dda75184e2c9c5507c75f84443fff08", "text": "Text classification can help users to effectively handle and exploit useful information hidden in large-scale documents. However, the sparsity of data and the semantic sensitivity to context often hinder the classification performance of short texts. In order to overcome the weakness, we propose a unified framework to expand short texts based on word embedding clustering and convolutional neural network (CNN). Empirically, the semantically related words are usually close to each other in embedding spaces. Thus, we first discover semantic cliques via fast clustering. Then, by using additive composition over word embeddings from context with variable window width, the representations of multi-scale semantic units1 in short texts are computed. In embedding spaces, the restricted nearest word embeddings (NWEs)2 of the semantic units are chosen to constitute expanded matrices, where the semantic cliques are used as supervision information. 
Finally, for a short text, the projected matrix 3 and expanded matrices are combined and fed into CNN in parallel. Experimental results on two open benchmarks validate the effectiveness of the proposed method.", "title": "" }, { "docid": "8de5b77f3cb4f1c20ff6cc11b323ba9c", "text": "The Internet of Things (IoT) paradigm refers to the network of physical objects or \"things\" embedded with electronics, software, sensors, and connectivity to enable objects to exchange data with servers, centralized systems, and/or other connected devices based on a variety of communication infrastructures. IoT makes it possible to sense and control objects creating opportunities for more direct integration between the physical world and computer-based systems. IoT will usher automation in a large number of application domains, ranging from manufacturing and energy management (e.g. SmartGrid), to healthcare management and urban life (e.g. SmartCity). However, because of its finegrained, continuous and pervasive data acquisition and control capabilities, IoT raises concerns about the security and privacy of data. Deploying existing data security solutions to IoT is not straightforward because of device heterogeneity, highly dynamic and possibly unprotected environments, and large scale. In this talk, after outlining key challenges in data security and privacy, we present initial approaches to securing IoT data, including efficient and scalable encryption protocols, software protection techniques for small devices, and fine-grained data packet loss analysis for sensor networks.", "title": "" }, { "docid": "07e54849ceae5e425b106619e760e522", "text": "In this paper, we propose a novel approach to interpret a well-trained classification model through systematically investigating effects of its hidden units on prediction making. We search for the core hidden units responsible for predicting inputs as the class of interest under the generative Bayesian inference framework. We model such a process of unit selection as an Indian Buffet Process, and derive a simplified objective function via the MAP asymptotic technique. The induced binary optimization problem is efficiently solved with a continuous relaxation method by attaching a Switch Gate layer to the hidden layers of interest. The resulted interpreter model is thus end-to-end optimized via standard gradient back-propagation. Experiments are conducted with two popular deep convolutional classifiers, respectively well-trained on the MNIST dataset and the CIFAR10 dataset. The results demonstrate that the proposed interpreter successfully finds the core hidden units most responsible for prediction making. The modified model, only with the selected units activated, can hold correct predictions at a high rate. Besides, this interpreter model is also able to extract the most informative pixels in the images by connecting a Switch Gate layer to the input layer.", "title": "" }, { "docid": "7cf2c2ce9edff28880bc399e642cee44", "text": "This paper provides new results and insights for tracking an extended target object modeled with an Elliptic Random Hypersurface Model (RHM). An Elliptic RHM specifies the relative squared Mahalanobis distance of a measurement source to the center of the target object by means of a one-dimensional random scaling factor. It is shown that uniformly distributed measurement sources on an ellipse lead to a uniformly distributed squared scaling factor. 
Furthermore, a Bayesian inference mechanisms tailored to elliptic shapes is introduced, which is also suitable for scenarios with high measurement noise. Closed-form expressions for the measurement update in case of Gaussian and uniformly distributed squared scaling factors are derived.", "title": "" }, { "docid": "c19f986d747f4d6a3448607f76d961ab", "text": "We propose Stochastic Neural Architecture Search (SNAS), an economical endto-end solution to Neural Architecture Search (NAS) that trains neural operation parameters and architecture distribution parameters in same round of backpropagation, while maintaining the completeness and differentiability of the NAS pipeline. In this work, NAS is reformulated as an optimization problem on parameters of a joint distribution for the search space in a cell. To leverage the gradient information in generic differentiable loss for architecture search, a novel search gradient is proposed. We prove that this search gradient optimizes the same objective as reinforcement-learning-based NAS, but assigns credits to structural decisions more efficiently. This credit assignment is further augmented with locally decomposable reward to enforce a resource-efficient constraint. In experiments on CIFAR-10, SNAS takes fewer epochs to find a cell architecture with state-of-theart accuracy than non-differentiable evolution-based and reinforcement-learningbased NAS, which is also transferable to ImageNet. It is also shown that child networks of SNAS can maintain the validation accuracy in searching, with which attention-based NAS requires parameter retraining to compete, exhibiting potentials to stride towards efficient NAS on big datasets.", "title": "" }, { "docid": "5fd66116021e4d86d3937e7a5b595975", "text": "The notion of disentangled autoencoders was proposed as an extension to the variational autoencoder by introducing a disentanglement parameter β, controlling the learning pressure put on the possible underlying latent representations. For certain values of β this kind of autoencoders is capable of encoding independent input generative factors in separate elements of the code, leading to a more interpretable and predictable model behaviour. In this paper we quantify the effects of the parameter β on the model performance and disentanglement. After training multiple models with the same value of β, we establish the existence of consistent variance in one of the disentanglement measures, proposed in literature. The negative consequences of the disentanglement to the autoencoder’s discriminative ability are also asserted while varying the amount of examples available during training.", "title": "" }, { "docid": "8a08bb5a952589615c9054d4fc0e8c1f", "text": "The classical plain-text representation of source code is c onvenient for programmers but requires parsing to uncover t he deep structure of the program. While sophisticated software too ls parse source code to gain access to the program’s structur e, many lightweight programming aids such as grep rely instead on only the lexical structure of source code. I d escribe a new XML application that provides an alternative representation o f Java source code. This XML-based representation, called J avaML, is more natural for tools and permits easy specification of nume rous software-engineering analyses by leveraging the abun dance of XML tools and techniques. 
A robust converter built with the Jikes Java compiler framework translates from the classical Java source code representation to JavaML, and an XSLT style sheet converts from JavaML back into the classical textual form.", "title": "" }, { "docid": "9eacc5f0724ff8fe2152930980dded4b", "text": "A computer-controlled adjustable nanosecond pulse generator based on a high-voltage MOSFET is designed in this paper, which offers stable performance and a miniaturized profile of 32×30×7 cm3. The experimental results show that the pulser can generate electrical pulses with a Gaussian rising time of 20 ns, a section-adjustable exponential falling time of 40–200 ns, a continuously adjustable repetition frequency of 0–5 kHz, and a quasi-continuously adjustable amplitude of 0–1 kV at a 50 Ω load. The pulser could therefore meet the requirements.", "title": "" } ]
scidocsrr
ee708f1e329ba7b807f3de3d89be05db
Energy Harvesting Electronics for Vibratory Devices in Self-Powered Sensors
[ { "docid": "6126a101cf55448f0c9ac4dbf98bc690", "text": "This paper studies the energy conversion efficiency for a rectified piezoelectric power harvester. An analytical model is proposed, and an expression of efficiency is derived under steady-state operation. In addition, the relationship among the conversion efficiency, electrically induced damping and ac–dc power output is established explicitly. It is shown that the optimization criteria are different depending on the relative strength of the coupling. For the weak electromechanical coupling system, the optimal power transfer is attained when the efficiency and induced damping achieve their maximum values. This result is consistent with that observed in the recent literature. However, a new finding shows that they are not simultaneously maximized in the strongly coupled electromechanical system.", "title": "" } ]
[ { "docid": "8a59e2b140eaf91a4a5fd8c109682543", "text": "A search-based procedural content generation (SBPCG) algorithm for strategy game maps is proposed. Two representations for strategy game maps are devised, along with a number of objectives relating to predicted player experience. A multiobjective evolutionary algorithm is used for searching the space of maps for candidates that satisfy pairs of these objectives. As the objectives are inherently partially conflicting, the algorithm generates Pareto fronts showing how these objectives can be balanced. Such fronts are argued to be a valuable tool for designers looking to balance various design needs. Choosing appropriate points (manually or automatically) on the Pareto fronts, maps can be found that exhibit good map design according to specified criteria, and could either be used directly in e.g. an RTS game or form the basis for further human design.", "title": "" }, { "docid": "ada35607fa56214e5df8928008735353", "text": "Osseous free flaps have become the preferred method for reconstructing segmental mandibular defects. Of 457 head and neck free flaps, 150 osseous mandible reconstructions were performed over a 10-year period. This experience was retrospectively reviewed to establish an approach to osseous free flap mandible reconstruction. There were 94 male and 56 female patients (mean age, 50 years; range 3 to 79 years); 43 percent had hemimandibular defects, and the rest had central, lateral, or a combination defect. Donor sites included the fibula (90 percent), radius (4 percent), scapula (4 percent), and ilium (2 percent). Rigid fixation (up to five osteotomy sites) was used in 98 percent of patients. Aesthetic and functional results were evaluated a minimum of 6 months postoperatively. The free flap success rate was 100 percent, and bony union was achieved in 97 percent of the osteotomy sites. Osseointegrated dental implants were placed in 20 patients. A return to an unrestricted diet was achieved in 45 percent of patients; 45 percent returned to a soft diet, and 5 percent were on a liquid diet. Five percent of patients required enteral feeding to maintain weight. Speech was assessed as normal (36 percent), near normal (27 percent), intelligible (28 percent), or unintelligible (9 percent). Aesthetic outcome was judged as excellent (32 percent), good (27 percent), fair (27 percent), or poor (14 percent). This study demonstrates a very high success rate, with good-to-excellent functional and aesthetic results using osseous free flaps for primary mandible reconstruction. The fibula donor site should be the first choice for most cases, particularly those with anterior or large bony defects requiring multiple osteotomies. Use of alternative donor sites (i.e., radius and scapula) is best reserved for cases with large soft-tissue and minimal bone requirements. The ilium is recommended only when other options are unavailable. Thoughtful flap selection and design should supplant the need for multiple, simultaneous free flaps and vein grafting in most cases.", "title": "" }, { "docid": "6a85677755a82b147cb0874ae8299458", "text": "Data mining involves the process of recovering related, significant and credential information from a large collection of aggregated data. A major area of current research in data mining is the field of clinical investigations that involve disease diagnosis, prognosis and drug therapy. The objective of this paper is to identify an efficient classifier for prognostic breast cancer data. 
This research work involves designing a data mining framework that incorporates the task of learning patterns and rules that will facilitate the formulation of decisions in new cases. The machine learning techniques employed to train the proposed system are based on feature relevance analysis and classification algorithms. Wisconsin Prognostic Breast Cancer (WPBC) data from the UCI machine learning repository is utilized by means of data mining techniques to completely train the system on 198 individual cases, each comprising of 33 predictor values. This paper highlights the performance of feature reduction and classification algorithms on the training dataset. We evaluate the number of attributes for split in the Random tree algorithm and the confidence level and minimum size of the leaves in the C4.5 algorithm to produce 100 percent classification accuracy. Our results demonstrate that Random Tree and Quinlan’s C4.5 classification algorithm produce 100 percent accuracy in the training and test phase of classification with proper evaluation of algorithmic parameters.", "title": "" }, { "docid": "6976614013c1aa550b5e506b1d1203e7", "text": "Here we present an overview of various techniques performed concomitantly during penile prosthesis surgery to enhance penile length and girth. We report on the technique of ventral phalloplasty and its outcomes along with augmentation corporoplasty, suprapubic lipectomy, suspensory ligament release, and girth enhancement procedures. For the serious implanter, outcomes can be improved by combining the use of techniques for each scar incision. These adjuvant procedures are a key addition in the armamentarium for the serious implant surgeon.", "title": "" }, { "docid": "5db5bed638cd8c5c629f9bebef556730", "text": "The health benefits of garlic likely arise from a wide variety of components, possibly working synergistically. The complex chemistry of garlic makes it plausible that variations in processing can yield quite different preparations. Highly unstable thiosulfinates, such as allicin, disappear during processing and are quickly transformed into a variety of organosulfur components. The efficacy and safety of these preparations in preparing dietary supplements based on garlic are also contingent on the processing methods employed. Although there are many garlic supplements commercially available, they fall into one of four categories, i.e., dehydrated garlic powder, garlic oil, garlic oil macerate and aged garlic extract (AGE). Garlic and garlic supplements are consumed in many cultures for their hypolipidemic, antiplatelet and procirculatory effects. In addition to these proclaimed beneficial effects, some garlic preparations also appear to possess hepatoprotective, immune-enhancing, anticancer and chemopreventive activities. Some preparations appear to be antioxidative, whereas others may stimulate oxidation. These additional biological effects attributed to AGE may be due to compounds, such as S-allylcysteine, S-allylmercaptocysteine, N(alpha)-fructosyl arginine and others, formed during the extraction process. Although not all of the active ingredients are known, ample research suggests that several bioavailable components likely contribute to the observed beneficial effects of garlic.", "title": "" }, { "docid": "707a31c60288fc2873bb37544bb83edf", "text": "The game of Go has a long history in East Asian countries, but the field of Computer Go has yet to catch up to humans until the past couple of years. 
While the rules of Go are simple, the strategy and combinatorics of the game are immensely complex. Even within the past couple of years, new programs that rely on neural networks to evaluate board positions still explore many orders of magnitude more board positions per second than a professional can. We attempt to mimic human intuition in the game by creating a convolutional neural policy network which, without any sort of tree search, should play the game at or above the level of most humans. We introduce three structures and training methods that aim to create a strong Go player: non-rectangular convolutions, which will better learn the shapes on the board, supervised learning, training on a data set of 53,000 professional games, and reinforcement learning, training on games played between different versions of the network. Our network has already surpassed the skill level of intermediate amateurs simply using supervised learning. Further training and implementation of non-rectangular convolutions and reinforcement learning will likely increase this skill level much further.", "title": "" }, { "docid": "c2bd875199c6da6ce0f7c46349c7c937", "text": "This chapter presents a survey of contemporary NLP research on Multiword Expressions (MWEs). MWEs pose a huge problem to precise language processing due to their idiosyncratic nature and diversity of their semantic, lexical, and syntactical properties. The chapter begins by considering MWEs definitions, describes some MWEs classes, indicates problems MWEs generate in language applications and their possible solutions, presents methods of MWE encoding in dictionaries and their automatic detection in corpora. The chapter goes into more detail on a particular MWE class called Verb-Noun Constructions (VNCs). Due to their frequency in corpus and unique characteristics, VNCs present a research problem in their own right. Having outlined several approaches to VNC representation in lexicons, the chapter explains the formalism of Lexical Function as a possible VNC representation. Such representation may serve as a tool for VNCs automatic detection in a corpus. The latter is illustrated on Spanish material applying some supervised learning methods commonly used for NLP tasks.", "title": "" }, { "docid": "4b878ffe2fd7b1f87e2f06321e5f03fa", "text": "Physical unclonable function (PUF) leverages the immensely complex and irreproducible nature of physical structures to achieve device authentication and secret information storage. To enhance the security and robustness of conventional PUFs, reconfigurable physical unclonable functions (RPUFs) with dynamically refreshable challenge-response pairs (CRPs) have emerged recently. In this paper, we propose two novel physically reconfigurable PUF (P-RPUF) schemes that exploit the process parameter variability and programming sensitivity of phase change memory (PCM) for CRP reconfiguration and evaluation. The first proposed PCM-based P-RPUF scheme extracts its CRPs from the measurable differences of the PCM cell resistances programmed by randomly varying pulses. An imprecisely controlled regulator is used to protect the privacy of the CRP in case the configuration state of the RPUF is divulged. The second proposed PCM-based RPUF scheme produces the random response by counting the number of programming pulses required to make the cell resistance converge to a predetermined target value. 
The merging of CRP reconfiguration and evaluation overcomes the inherent vulnerability of P-RPUF devices to malicious prediction attacks by limiting the number of accessible CRPs between two consecutive reconfigurations to only one. Both schemes were experimentally evaluated on 180-nm PCM chips. The obtained results demonstrated their quality for refreshable key generation when appropriate fuzzy extractor algorithms are incorporated.", "title": "" }, { "docid": "41e71a03c2abdd0fec78e8273709efa7", "text": "Logical correction of aging contour changes of the face is based on understanding its structure and the processes involved in the aging appearance. Aging changes are seen at all tissue levels between the skin and bone although the relative contribution of each component to the overall change of facial appearance has yet to be satisfactorily determined. Significantly, the facial skeleton changes profoundly with aging as a consequence of significant resorption of the bones of dental origin in particular. The resultant loss of skeletal projection gives the visual impression of descent while the reduced ligamentous support leads to laxity of the overlying soft tissues. Understanding the specific changes of the face with aging is fundamental to achieving optimum correction and safe use of injectables for facial rejuvenation.", "title": "" }, { "docid": "fe79c1c71112b3b40e047db6030aaff9", "text": "We are at a key juncture in history where biodiversity loss is occurring daily and accelerating in the face of population growth, climate change, and rampant development. Simultaneously, we are just beginning to appreciate the wealth of human health benefits that stem from experiencing nature and biodiversity. Here we assessed the state of knowledge on relationships between human health and nature and biodiversity, and prepared a comprehensive listing of reported health effects. We found strong evidence linking biodiversity with production of ecosystem services and between nature exposure and human health, but many of these studies were limited in rigor and often only correlative. Much less information is available to link biodiversity and health. However, some robust studies indicate that exposure to microbial biodiversity can improve health, specifically in reducing certain allergic and respiratory diseases. Overall, much more research is needed on mechanisms of causation. Also needed are a reenvisioning of land-use planning that places human well-being at the center and a new coalition of ecologists, health and social scientists and planners to conduct research and develop policies that promote human interaction with nature and biodiversity. Improvements in these areas should enhance human health and ecosystem, community, as well as human resilience. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).", "title": "" }, { "docid": "46cc4ab93b7b6dd28b81846b891ceb3f", "text": "This paper covers design, implementation and evaluation of a system that may be used to predict future stock prices basing on analysis of data from social media services. The authors took advantage of large datasets available from Twitter micro blogging platform and widely available stock market records. Data was collected during three months and processed for further analysis. Machine learning was employed to conduct sentiment classification of data coming from social networks in order to estimate future stock prices. 
Calculations were performed in distributed environment according to Map Reduce programming model. Evaluation and discussion of results of predictions for different time intervals and input datasets proved efficiency of chosen approach is discussed here.", "title": "" }, { "docid": "91283606a1737f3076ba6e00a6754fd1", "text": "OBJECTIVE\nTo review the quantitative instruments available to health service researchers who want to measure culture and cultural change.\n\n\nDATA SOURCES\nA literature search was conducted using Medline, Cinahl, Helmis, Psychlit, Dhdata, and the database of the King's Fund in London for articles published up to June 2001, using the phrase \"organizational culture.\" In addition, all citations and the gray literature were reviewed and advice was sought from experts in the field to identify instruments not found on the electronic databases. The search focused on instruments used to quantify culture with a track record, or potential for use, in health care settings.\n\n\nDATA EXTRACTION\nFor each instrument we examined the cultural dimensions addressed, the number of items for each questionnaire, the measurement scale adopted, examples of studies that had used the tool, the scientific properties of the instrument, and its strengths and limitations.\n\n\nPRINCIPAL FINDINGS\nThirteen instruments were found that satisfied our inclusion criteria, of which nine have a track record in studies involving health care organizations. The instruments varied considerably in terms of their grounding in theory, format, length, scope, and scientific properties.\n\n\nCONCLUSIONS\nA range of instruments with differing characteristics are available to researchers interested in organizational culture, all of which have limitations in terms of their scope, ease of use, or scientific properties. The choice of instrument should be determined by how organizational culture is conceptualized by the research team, the purpose of the investigation, intended use of the results, and availability of resources.", "title": "" }, { "docid": "542c115a46d263ee347702cf35b6193c", "text": "We obtain universal bounds on the energy of codes and for designs in Hamming spaces. Our bounds hold for a large class of potential functions, allow unified treatment, and can be viewed as a generalization of the Levenshtein bounds for maximal codes.", "title": "" }, { "docid": "8be48759b1ae6b7d65ff61ebc43dfee6", "text": "In this study, we introduce a household object dataset for recognition and manipulation tasks, focusing on commonly available objects in order to facilitate sharing of applications and algorithms. The core information available for each object consists of a 3D surface model annotated with a large set of possible grasp points, pre-computed using a grasp simulator. The dataset is an integral part of a complete Robot Operating System (ROS) architecture for performing pick and place tasks. We present our current applications using this data, and discuss possible extensions and future directions for shared datasets for robot operation in unstructured settings. I. DATASETS FOR ROBOTICS RESEARCH Recent years have seen a growing consensus that one of the keys to robotic applications in unstructured environments lies in collaboration and reusable functionality. 
An immediate result has been the emergence of a number of platforms and frameworks for sharing operational “building blocks,” usually in the form of code modules, with functionality ranging from low-level hardware drivers to complex algorithms such as path or motion planners. By using a set of now well-established guidelines, such as stable documented interfaces and standardized communication protocols, this type of collaboration has accelerated development towards complex applications. However, a similar set of methods for sharing and reusing data has been slower to emerge. In this paper we describe our effort in producing and releasing to the community a complete architecture for performing pick-and-place tasks in unstructured (or semistructured) environments. There are two key components to this architecture: the algorithms themselves, developed using the Robot Operating System (ROS) framework, and the knowledge base that they operate on. In our case, the algorithms provide abilities such as object segmentation and recognition, motion planning with collision avoidance, grasp execution using tactile feedback, etc. The knowledge base, which is the main focus of this study, contains relevant information for object recognition and grasping for a large set of common household objects. Some of the key aspects of combining computational tools with the data that they operate on are: • other researchers will have the option of directly using our dataset over the Internet (in an open, read-only fashion), or downloading and customizing it for their own applications; • defining a stable interface to the dataset component of the release will allow other researchers to provide their own modified and/or extended versions of the data to †Willow Garage Inc., Menlo Park, CA. Email: {matei, bradski, hsiao, pbrook}@willowgarage.com ∗University of Washington, Seattle, WA. the community, knowing that it will be directly usable by anyone running the algorithmic component; • the data and algorithm components can evolve together, like any other components of a large software distribution, with well-defined and documented interfaces, version numbering and control, etc. In particular, our current dataset is available in the form of a relational database, using the SQL standard. This choice provides additional benefits, including optimized relational queries, both for using the data on-line and managing it off-line, and low-level serialization functionality for most major languages. We believe that these features can help foster collaboration as well as provide useful tools for benchmarking as we advance towards increasingly complex behavior in unstructured environments. There have been previous example of datasets released in the research community (as described for example in [3], [7], [13] to name only a few), used either for benchmarking or for data-driven algorithms. However, few of these have been accompanied by the relevant algorithms, or have offered a well-defined interface to be used for extensions. The database component of our architecture was directly inspired by the Columbia Grasp Database (CGDB) [5], [6], released together with processing software integrated with the GraspIt! simulator [9]. The CGDB contains object shape and grasp information for a very large (n = 7, 256) set of general shapes from the Princeton Shape Benchmark [12]. 
The dataset presented here is smaller in scope (n = 180), referring only to actual graspable objects from the real world, and is integrated with a complete manipulation pipeline on the PR2 robot. II. THE OBJECT AND GRASP DATABASE", "title": "" }, { "docid": "1a620e17048fa25cfc54f5c9fb821f39", "text": "The performance of a detector depends much on its training dataset and drops significantly when the detector is applied to a new scene due to the large variations between the source training dataset and the target scene. In order to bridge this appearance gap, we propose a deep model to automatically learn scene-specific features and visual patterns in static video surveillance without any manual labels from the target scene. It jointly learns a scene-specific classifier and the distribution of the target samples. Both tasks share multi-scale feature representations with both discriminative and representative power. We also propose a cluster layer in the deep model that utilizes the scenespecific visual patterns for pedestrian detection. Our specifically designed objective function not only incorporates the confidence scores of target training samples but also automatically weights the importance of source training samples by fitting the marginal distributions of target samples. It significantly improves the detection rates at 1 FPPI by 10% compared with the state-of-the-art domain adaptation methods on MIT Traffic Dataset and CUHK Square Dataset.", "title": "" }, { "docid": "5459dc71fd40a576365f0afced64b6b7", "text": "Cloud computing providers such as Amazon and Google have recently begun offering container-instances, which provide an efficient route to application deployment within a lightweight, isolated and well-defined execution environment. Cloud providers currently offer Container Service Platforms (CSPs), which orchestrate containerised applications. Existing CSP frameworks do not offer any form of intelligent resource scheduling: applications are usually scheduled individually, rather than taking a holistic view of all registered applications and available resources in the cloud. This can result in increased execution times for applications, resource wastage through underutilised container-instances, and a reduction in the number of applications that can be deployed, given the available resources. The research presented in this paper aims to extend existing systems by adding a cloud-based Container Management Service (CMS) framework that offers increased deployment density, scalability and resource efficiency. CMS provides additional functionalities for orchestrating containerised applications by joint optimisation of sets of containerised applications, and resource pool in multiple (geographical distributed) cloud regions. We evaluated CMS on a cloud-based CSP i.e., Amazon EC2 Container Management Service (ECS) and conducted extensive experiments using sets of CPU and Memory intensive containerised applications against the direct deployment strategy of Amazon ECS. The results show that CMS achieves up to 25% higher cluster utilisation, and up to 70% reduction in execution times.", "title": "" }, { "docid": "ca468aa680c29fb00f55e9d851676200", "text": "The class of problems involving the random generation of combinatorial structures from a uniform distribution is considered. Uniform generation problems are, in computational difficulty, intermediate between classical existence and counting problems. 
It is shown that exactly uniform generation of 'efficiently verifiable' combinatorial structures is reducible to approximate counting (and hence, is within the third level of the polynomial hierarchy). Natural combinatorial problems are presented which exhibit complexity gaps between their existence and generation, and between their generation and counting versions. It is further shown that for self-reducible problems, almost uniform generation and randomized approximate counting are inter-reducible, and hence, of similar complexity. CR Categories. F.I.1, F.1.3, G.2.1, G.3", "title": "" }, { "docid": "e7a0a9e31bba0eec8bf598c5e9eefe6b", "text": "Stylizing photos, to give them an antique or artistic look, has become popular in recent years. The available stylization filters, however, are usually created manually by artists, resulting in a narrow set of choices. Moreover, it can be difficult for the user to select a desired filter, since the filters’ names often do not convey their functions. We investigate an approach to photo filtering in which the user provides one or more keywords, and the desired style is defined by the set of images returned by searching the web for those keywords. Our method clusters the returned images, allows the user to select a cluster, then stylizes the user’s photos by transferring vignetting, color, and local contrast from that cluster. This approach vastly expands the range of available styles, and gives each filter a meaningful name by default. We demonstrate that our method is able to robustly transfer a wide range of styles from image collections to users’ photos.", "title": "" }, { "docid": "c249c64b3e41cde156a63e1224ae2091", "text": "The technology of intelligent agents and multi-agent systems seems set to radically alter the way in which complex, distributed, open systems are conceptualized and implemented. The purpose of this paper is to consider the problem of building a multi-agent system as a software engineering enterprise. The article focuses on three issues: (i) how agents might be specified; (ii) how these specifications might be refined or otherwise transformed into efficient implementations; and (iii) how implemented agents and multi-agent systems might subsequently be verified, in order to show that they are correct with respect to their specifications. These issues are discussed with reference to a number of casestudies. The article concludes by setting out some issues and open problems for future", "title": "" }, { "docid": "f534a356d309fc6625fa3baa070e803a", "text": "Neural networks have been successfully applied in applications with a large amount of labeled data. However, the task of rapid generalization on new concepts with small training data while preserving performances on previously learned ones still presents a significant challenge to neural network models. In this work, we introduce a novel meta learning method, Meta Networks (MetaNet), that learns a meta-level knowledge across tasks and shifts its inductive biases via fast parameterization for rapid generalization. When evaluated on Omniglot and Mini-ImageNet benchmarks, our MetaNet models achieve a near human-level performance and outperform the baseline approaches by up to 6% accuracy. We demonstrate several appealing properties of MetaNet relating to generalization and continual learning.", "title": "" } ]
scidocsrr
ebdd1187acfaade03515728ec857b9af
Efficient frontier detection for robot exploration
[ { "docid": "77908ab362e0a26e395bc2d2bf07e0ee", "text": "In this paper we consider the problem of exploring an unknown environment by a team of robots. As in single-robot exploration the goal is to minimize the overall exploration time. The key problem to be solved therefore is to choose appropriate target points for the individual robots so that they simultaneously explore different regions of their environment. We present a probabilistic approach for the coordination of multiple robots which, in contrast to previous approaches, simultaneously takes into account the costs of reaching a target point and the utility of target points. The utility of target points is given by the size of the unexplored area that a robot can cover with its sensors upon reaching a target position. Whenever a target point is assigned to a specific robot, the utility of the unexplored area visible from this target position is reduced for the other robots. This way, a team of multiple robots assigns different target points to the individual robots. The technique has been implemented and tested extensively in real-world experiments and simulation runs. The results given in this paper demonstrate that our coordination technique significantly reduces the exploration time compared to previous approaches.", "title": "" }, { "docid": "83981d52eb5e58d6c2d611b25c9f6d12", "text": "This tutorial provides an introduction to Simultaneous Localisation and Mapping (SLAM) and the extensive research on SLAM that has been undertaken over the past decade. SLAM is the process by which a mobile robot can build a map of an environment and at the same time use this map to compute its own location. The past decade has seen rapid and exciting progress in solving the SLAM problem together with many compelling implementations of SLAM methods. Part I of this tutorial (this paper) describes the probabilistic form of the SLAM problem, essential solution methods and significant implementations. Part II of this tutorial will be concerned with recent advances in computational methods and new formulations of the SLAM problem for large scale and complex environments.", "title": "" } ]
[ { "docid": "97a6a77cfa356636e11e02ffe6fc0121", "text": "In this paper, we present a new intrinsically motivated actor-critic algorithm for learning continuous motor skills directly from raw visual input. Our neural architecture is composed of a critic and an actor network. Both networks receive the hidden representation of a deep convolutional autoencoder which is trained to reconstruct the visual input, while the centre-most hidden representation is also optimized to estimate the state value. Separately, an ensemble of predictive world models generates, based on its learning progress, an intrinsic reward signal which is combined with the extrinsic reward to guide the exploration of the actor-critic learner. Our approach is more data-efficient and inherently more stable than the existing actor-critic methods for continuous control from pixel data. We evaluate our algorithm for the task of learning robotic reaching and grasping skills on a realistic physics simulator and on a humanoid robot. The results show that the control policies learned with our approach can achieve better performance than the compared state-of-the-art and baseline algorithms in both dense-reward and challenging sparse-reward settings.", "title": "" }, { "docid": "e86247471d4911cb84aa79911547045b", "text": "Creating rich representations of environments requires integration of multiple sensing modalities with complementary characteristics such as range and imaging sensors. To precisely combine multisensory information, the rigid transformation between different sensor coordinate systems (i.e., extrinsic parameters) must be estimated. The majority of existing extrinsic calibration techniques require one or multiple planar calibration patterns (such as checkerboards) to be observed simultaneously from the range and imaging sensors. The main limitation of these approaches is that they require modifying the scene with artificial targets. In this paper, we present a novel algorithm for extrinsically calibrating a range sensor with respect to an image sensor with no requirement of external artificial targets. The proposed method exploits natural linear features in the scene to precisely determine the rigid transformation between the coordinate frames. First, a set of 3D lines (plane intersection and boundary line segments) are extracted from the point cloud, and a set of 2D line segments are extracted from the image. Correspondences between the 3D and 2D line segments are used as inputs to an optimization problem which requires jointly estimating the relative translation and rotation between the coordinate frames. The proposed method is not limited to any particular types or configurations of sensors. 
To demonstrate robustness, efficiency and generality of the presented algorithm, we include results using various sensor configurations.", "title": "" }, { "docid": "526238c8369bb37048f3165b2ace0d15", "text": "With their exceptional interactive and communicative capabilities, Online Social Networks (OSNs) allow destinations and companies to heighten their brand awareness. Many tourist destinations and hospitality brands are exploring the use of OSNs to form brand awareness and generate positive WOM. The purpose of this research is to propose and empirically test a theory-driven model of brand awareness in OSNs. A survey among 230 OSN users was deployed to test the theoretical model. The data was analyzed using SEM. Study results indicate that building brand awareness in OSNs increases WOM traffic. In order to foster brand awareness in OSN, it is important to create a virtually interactive environment, enabling users to exchange reliable, rich and updated information in a timely manner. Receiving financial and/or psychological rewards and accessing exclusive privileges in OSNs are important factors for users. Both system quality and information quality were found to be important precursors of brand awareness in OSNs. Study results support the importance of social media in online branding strategies. Virtual interactivity, system quality, information content quality, and rewarding activities influence and generate brand awareness, which in return, triggers WOM. 2015 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "824b0e8a66699965899169738df7caa9", "text": "Much recent progress in Vision-to-Language (V2L) problems has been achieved through a combination of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). This approach does not explicitly represent high-level semantic concepts, but rather seeks to progress directly from image features to text. In this paper we investigate whether this direct approach succeeds due to, or despite, the fact that it avoids the explicit representation of high-level information. We propose a method of incorporating high-level concepts into the successful CNN-RNN approach, and show that it achieves a significant improvement on the state-of-the-art in both image captioning and visual question answering. We also show that the same mechanism can be used to introduce external semantic information and that doing so further improves performance. We achieve the best reported results on both image captioning and VQA on several benchmark datasets, and provide an analysis of the value of explicit high-level concepts in V2L problems.", "title": "" }, { "docid": "51da24a6bdd2b42c68c4465624d2c344", "text": "Hashing based Approximate Nearest Neighbor (ANN) search has attracted much attention due to its fast query time and drastically reduced storage. However, most of the hashing methods either use random projections or extract principal directions from the data to derive hash functions. The resulting embedding suffers from poor discrimination when compact codes are used. In this paper, we propose a novel data-dependent projection learning method such that each hash function is designed to correct the errors made by the previous one sequentially. 
The proposed method easily adapts to both unsupervised and semi-supervised scenarios and shows significant performance gains over the state-ofthe-art methods on two large datasets containing up to 1 million points.", "title": "" }, { "docid": "2c6332afec6a2c728041e0325a27fcbf", "text": "Today’s social networks are plagued by numerous types of malicious profiles which can range from socialbots to sexual predators. We present a novel method for the detection of these malicious profiles by using the social network’s own topological features only. Reliance on these features alone ensures that the proposed method is generic enough to be applied on a range of social networks. The algorithm has been evaluated on several social networks and was found to be effective in detecting various types of malicious profiles. We believe this method is a valuable step in the increasing battle against social network spammers, socialbots, and sexual predictors.", "title": "" }, { "docid": "26af6b4795e1864a63da17231651960c", "text": "In 2020, 146,063 deaths due to pancreatic cancer are estimated to occur in Europe and the United States combined. To identify common susceptibility alleles, we performed the largest pancreatic cancer GWAS to date, including 9040 patients and 12,496 controls of European ancestry from the Pancreatic Cancer Cohort Consortium (PanScan) and the Pancreatic Cancer Case-Control Consortium (PanC4). Here, we find significant evidence of a novel association at rs78417682 (7p12/TNS3, P = 4.35 × 10−8). Replication of 10 promising signals in up to 2737 patients and 4752 controls from the PANcreatic Disease ReseArch (PANDoRA) consortium yields new genome-wide significant loci: rs13303010 at 1p36.33 (NOC2L, P = 8.36 × 10−14), rs2941471 at 8q21.11 (HNF4G, P = 6.60 × 10−10), rs4795218 at 17q12 (HNF1B, P = 1.32 × 10−8), and rs1517037 at 18q21.32 (GRP, P = 3.28 × 10−8). rs78417682 is not statistically significantly associated with pancreatic cancer in PANDoRA. Expression quantitative trait locus analysis in three independent pancreatic data sets provides molecular support of NOC2L as a pancreatic cancer susceptibility gene. Genetic variants associated with susceptibility to pancreatic cancer have been identified using genome wide association studies (GWAS). Here, the authors combine data from over 9000 patients and perform a meta-analysis to identify five novel loci linked to pancreatic cancer.", "title": "" }, { "docid": "127406000c2ede6517513bfa21747431", "text": "These are exciting times for cancer immunotherapy. After many years of disappointing results, the tide has finally changed and immunotherapy has become a clinically validated treatment for many cancers. Immunotherapeutic strategies include cancer vaccines, oncolytic viruses, adoptive transfer of ex vivo activated T and natural killer cells, and administration of antibodies or recombinant proteins that either costimulate cells or block the so-called immune checkpoint pathways. The recent success of several immunotherapeutic regimes, such as monoclonal antibody blocking of cytotoxic T lymphocyte-associated protein 4 (CTLA-4) and programmed cell death protein 1 (PD1), has boosted the development of this treatment modality, with the consequence that new therapeutic targets and schemes which combine various immunological agents are now being described at a breathtaking pace. 
In this review, we outline some of the main strategies in cancer immunotherapy (cancer vaccines, adoptive cellular immunotherapy, immune checkpoint blockade, and oncolytic viruses) and discuss the progress in the synergistic design of immune-targeting combination therapies.", "title": "" }, { "docid": "86d8a61771cd14a825b6fc652f77d1d6", "text": "The widespread of adult content on online social networks (e.g., Twitter) is becoming an emerging yet critical problem. An automatic method to identify accounts spreading sexually explicit content (i.e., adult account) is of significant values in protecting children and improving user experiences. Traditional adult content detection techniques are ill-suited for detecting adult accounts on Twitter due to the diversity and dynamics in Twitter content. In this paper, we formulate the adult account detection as a graph based classification problem and demonstrate our detection method on Twitter by using social links between Twitter accounts and entities in tweets. As adult Twitter accounts are mostly connected with normal accounts and post many normal entities, which makes the graph full of noisy links, existing graph based classification techniques cannot work well on such a graph. To address this problem, we propose an iterative social based classifier (ISC), a novel graph based classification technique resistant to the noisy links. Evaluations using large-scale real-world Twitter data show that, by labeling a small number of popular Twitter accounts, ISC can achieve satisfactory performance in adult account detection, significantly outperforming existing techniques.", "title": "" }, { "docid": "86d725fa86098d90e5e252c6f0aaab3c", "text": "This paper illustrates the manner in which UML can be used to study mappings to different types of database systems. After introducing UML through a comparison to the EER model, UML diagrams are used to teach different approaches for mapping conceptual designs to the relational model. As we cover object-oriented and object-relational database systems, different features of UML are used over the same enterprise example to help students understand mapping alternatives for each model. Students are required to compare and contrast the mappings in each model as part of the learning process. For object-oriented and object-relational database systems, we address mappings to the ODMG and SQL99 standards in addition to specific commercial implementations.", "title": "" }, { "docid": "888c4bc9f1ca4402f1f56bde657c5fbe", "text": "This paper presents a comprehensive survey of existing authentication and privacy-preserving schemes for 4G and 5G cellular networks. We start by providing an overview of existing surveys that deal with 4G and 5G communications, applications, standardization, and security. Then, we give a classification of threat models in 4G and 5G cellular networks in four categories, including, attacks against privacy, attacks against integrity, attacks against availability, and attacks against authentication. We also provide a classification of countermeasures into three types of categories, including, cryptography methods, humans factors, and intrusion detection methods. The countermeasures and informal and formal security analysis techniques used by the authentication and privacy preserving schemes are summarized in form of tables. 
Based on the categorization of the authentication and privacy models, we classify these schemes in seven types, including, handover authentication with privacy, mutual authentication with privacy, RFID authentication with privacy, deniable authentication with privacy, authentication with mutual anonymity, authentication and key agreement with privacy, and three-factor authentication with privacy. In addition, we provide a taxonomy and comparison of authentication and privacypreserving schemes for 4G and 5G cellular networks in form of tables. Based on the current survey, several recommendations for further research are discussed at the end of this paper.", "title": "" }, { "docid": "6ff51eea5a590996ed0219a4991d32f2", "text": "The number R(4, 3, 3) is often presented as the unknown Ramsey number with the best chances of being found “soon”. Yet, its precise value has remained unknown for almost 50 years. This paper presents a methodology based on abstraction and symmetry breaking that applies to solve hard graph edge-coloring problems. The utility of this methodology is demonstrated by using it to compute the value R(4, 3, 3) = 30. Along the way it is required to first compute the previously unknown set ℛ ( 3 , 3 , 3 ; 13 ) $\\mathcal {R}(3,3,3;13)$ consisting of 78,892 Ramsey colorings.", "title": "" }, { "docid": "2d7ff73a3fb435bd11633f650b23172e", "text": "This study determined the effect of Tetracarpidium conophorum (black walnut) leaf extract on the male reproductive organs of albino rats. The effects of the leaf extracts were determined on the Epididymal sperm concentration, Testicular histology, and on testosterone concentration in the rat serum by a micro plate enzyme immunoassay (Testosterone assay). A total of sixteen (16) male albino wistar rats were divided into four (1, 2, 3 and 4) groups of four rats each. Group 1 served as the control and was fed with normal diet only, while groups 2, 3 and 4 were fed with 200, 400 and 600 mg/kg body weight (BW) of the extract for a period of two weeks. The Epididymal sperm concentration were not significantly affected (p>0.05) across the groups. The level of testosterone for the treatment groups 2 and 4 showed no significant difference (p>0.05) compared to the control while group 4 showed significant increase compared to that of the control (p<0.05). Pathologic changes were observed in testicular histology across the treatment groups. Robust seminiferous tubular lumen containing sperm cells and increased production of Leydig cells and Sertoli cells were observed across different treatment groups compared to that of the control.", "title": "" }, { "docid": "30e0918ec670bdab298f4f5bb59c3612", "text": "Consider a single hard disk drive (HDD) composed of rotating platters and a single magnetic head. We propose a simple internal coding framework for HDDs that uses coding across drive blocks to reduce average block seek times. In particular, instead of the HDD controller seeking individual blocks, the drive performs coded-seeking: It seeks the closest subset of coded blocks, where a coded block contains partial information from multiple uncoded blocks. Coded-seeking is a tool that relaxes the scheduling of a full traveling salesman problem (TSP) on an HDD into a k-TSP. 
This may provide opportunities for new scheduling algorithms and to reduce average read times.", "title": "" }, { "docid": "86052e2fc8f89b91f274a607531f536e", "text": "Existing approaches to analyzing the asymptotics of graph Laplacians typically assume a well-behaved kernel function with smoothness assumptions. We remove the smoothness assumption and generalize the analysis of graph Laplacians to include previously unstudied graphs including kNN graphs. We also introduce a kernel-free framework to analyze graph constructions with shrinking neighborhoods in general and apply it to analyze locally linear embedding (LLE). We also describe how, for a given limit operator, desirable properties such as a convergent spectrum and sparseness can be achieved by choosing the appropriate graph construction.", "title": "" }, { "docid": "a024f33090621555f2d5e3aadeac0265", "text": "Recent efforts to understand the mechanisms underlying human cooperation have focused on the notion of trust, with research illustrating that both initial impressions and previous interactions impact the amount of trust people place in a partner. Less is known, however, about how these two types of information interact in iterated exchanges. The present study examined how implicit initial trustworthiness information interacts with experienced trustworthiness in a repeated Trust Game. Consistent with our hypotheses, these two factors reliably influence behavior both independently and synergistically, in terms of how much money players were willing to entrust to their partner and also in their post-game subjective ratings of trustworthiness. To further understand this interaction, we used Reinforcement Learning models to test several distinct processing hypotheses. These results suggest that trustworthiness is a belief about probability of reciprocation based initially on implicit judgments, and then dynamically updated based on experiences. This study provides a novel quantitative framework to conceptualize the notion of trustworthiness.", "title": "" }, { "docid": "096b2ffac795053e046c25f1e8697fcf", "text": "Background\nThe benefit of computer-assisted planning in orthognathic surgery (OGS) has been extensively documented over the last decade. This study aimed to evaluate the accuracy of three-dimensional (3D) virtual planning in surgery-first OGS.\n\n\nMethods\nFifteen patients with skeletal class III malocclusion who underwent bimaxillary OGS with surgery-first approach were included. A composite skull model was reconstructed using data from cone-beam computed tomography and stereolithography from a scanned dental cast. Surgical procedures were simulated using Simplant O&O software, and the virtual plan was transferred to the operation room using 3D-printed splints. Differences of the 3D measurements between the virtual plan and postoperative results were evaluated, and the accuracy was reported using root mean square deviation (RMSD) and the Bland-Altman method.\n\n\nResults\nThe virtual planning was successfully transferred to surgery. The overall mean linear difference was 0.88 mm (0.79 mm for the maxilla and 1 mm for the mandible), and the overall mean angular difference was 1.16°. 
The RMSD ranged from 0.86 to 1.46 mm and 1.27° to 1.45°, within the acceptable clinical criteria.\n\n\nConclusion\nIn this study, virtual surgical planning and 3D-printed surgical splints facilitated the diagnosis and treatment planning, and offered an accurate outcome in surgery-first OGS.", "title": "" }, { "docid": "4df52d891c63975a1b9d4cd6c74571db", "text": "DDoS attacks have been a persistent threat to network availability for many years. Most of the existing mitigation techniques attempt to protect against DDoS by filtering out attack traffic. However, as critical network resources are usually static, adversaries are able to bypass filtering by sending stealthy low traffic from large number of bots that mimic benign traffic behavior. Sophisticated stealthy attacks on critical links can cause a devastating effect such as partitioning domains and networks. In this paper, we propose to defend against DDoS attacks by proactively changing the footprint of critical resources in an unpredictable fashion to invalidate an adversary's knowledge and plan of attack against critical network resources. Our present approach employs virtual networks (VNs) to dynamically reallocate network resources using VN placement and offers constant VN migration to new resources. Our approach has two components: (1) a correct-by-construction VN migration planning that significantly increases the uncertainty about critical links of multiple VNs while preserving the VN placement properties, and (2) an efficient VN migration mechanism that identifies the appropriate configuration sequence to enable node migration while maintaining the network integrity (e.g., avoiding session disconnection). We formulate and implement this framework using SMT logic. We also demonstrate the effectiveness of our implemented framework on both PlanetLab and Mininet-based experimentations.", "title": "" }, { "docid": "686abc74c0a34c90755d20c0ffc63eb2", "text": "Bayesian estimation for 2 groups provides complete distributions of credible values for the effect size, group means and their difference, standard deviations and their difference, and the normality of the data. The method handles outliers. The decision rule can accept the null value (unlike traditional t tests) when certainty in the estimate is high (unlike Bayesian model comparison using Bayes factors). The method also yields precise estimates of statistical power for various research goals. The software and programs are free and run on Macintosh, Windows, and Linux platforms.", "title": "" }, { "docid": "c5e401fe1b2a65677b93ae3e8aa60e18", "text": "In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences affecting individuals as well as groups and whole societies. This paper makes three contributions to clarify the ethical importance of algorithmic mediation. It provides a prescriptive map to organise the debate. It reviews the current discussion of ethical aspects of algorithms. 
And it assesses the available literature in order to identify areas requiring further work to develop the ethics of algorithms.", "title": "" } ]
scidocsrr
14beda2a2c57c76fabd3aa8e14d47193
Loving-kindness meditation for posttraumatic stress disorder: a pilot study.
[ { "docid": "ed0736d1f8c35ec8b0c2f5bb9adfb7f9", "text": "Neff's (2003a, 2003b) notion of self-compassion emphasizes kindness towards one's self, a feeling of connectedness with others, and mindful awareness of distressing experiences. Because exposure to trauma and subsequent posttraumatic stress symptoms (PSS) may be associated with self-criticism and avoidance of internal experiences, the authors examined the relationship between self-compassion and PSS. Out of a sample of 210 university students, 100 endorsed experiencing a Criterion A trauma. Avoidance symptoms significantly correlated with self-compassion, but reexperiencing and hyperarousal did not. Individuals high in self-compassion may engage in less avoidance strategies following trauma exposure, allowing for a natural exposure process.", "title": "" } ]
[ { "docid": "96e56dcf3d38c8282b5fc5c8ae747a66", "text": "The solid-state transformer (SST) was conceived as a replacement for the conventional power transformer, with both lower volume and weight. The smart transformer (ST) is an SST that provides ancillary services to the distribution and transmission grids to optimize their performance. Hence, the focus shifts from hardware advantages to functionalities. One of the most desired functionalities is the dc connectivity to enable a hybrid distribution system. For this reason, the ST architecture shall be composed of at least two power stages. The standard design procedure for this kind of system is to design each power stage for the maximum load. However, this design approach might limit additional services, like the reactive power compensation on the medium voltage (MV) side, and it does not consider the load regulation capability of the ST on the low voltage (LV) side. If the SST is tailored to the services that it shall provide, different stages will have different designs, so that the ST is no longer a mere application of the SST but an entirely new subject.", "title": "" }, { "docid": "e5f084c72109f869c54f402237f84907", "text": "As former Fermatist, the author tried many times to prove Fermat’s Last Theorem in an elementary way. Just few insights of the proposed schemes partially passed the peer-reviewing and they motivated the subsequent fruitful collaboration with Prof. Mario De Paz. Among the author’s failures, there is an unpublished proof emblematic of the FLT’s charming power for the suggestive circumstances it was formulated. As sometimes happens with similar erroneous attempts, containing out-of-context hints, it provides a germinal approach to power sums yet to be refined.", "title": "" }, { "docid": "74290ff01b32423087ce0025625dc445", "text": "niques is now the world champion computer program in the game of Contract Bridge. As reported in The New York Times and The Washington Post, this program—a new version of Great Game Products’ BRIDGE BARON program—won the Baron Barclay World Bridge Computer Challenge, an international competition hosted in July 1997 by the American Contract Bridge League. It is well known that the game tree search techniques used in computer programs for games such as Chess and Checkers work differently from how humans think about such games. In contrast, our new version of the BRIDGE BARON emulates the way in which a human might plan declarer play in Bridge by using an adaptation of hierarchical task network planning. This article gives an overview of the planning techniques that we have incorporated into the BRIDGE BARON and discusses what the program’s victory signifies for research on AI planning and game playing.", "title": "" }, { "docid": "b2911f3df2793066dde1af35f5a09d62", "text": "Cloud computing is drawing attention from both practitioners and researchers, and its adoption among organizations is on the rise. The focus has mainly been on minimizing fixed IT costs and using the IT resource flexibility offered by the cloud. However, the promise of cloud computing is much greater. As a disruptive technology, it enables innovative new services and business models that decrease time to market, create operational efficiencies and engage customers and citizens in new ways. However, we are still in the early days of cloud computing, and, for organizations to exploit the full potential, we need knowledge of the potential applications and pitfalls of cloud computing. 
Maturity models provide effective methods for organizations to assess, evaluate, and benchmark their capabilities as bases for developing roadmaps for improving weaknesses. Adopting the business-IT maturity model by Pearlson & Saunders (2007) as analytical framework, we synthesize the existing literature, identify levels of cloud computing benefits, and establish propositions for practice in terms of how to realize these benefits.", "title": "" }, { "docid": "32a597647795a7333b82827b55c209c9", "text": "This study investigates the relationship between the extent to which employees have opportunities to voice dissatisfaction and voluntary turnover in 111 short-term, general care hospitals. Results show that, whether or not a union is present, high numbers of mechanisms for employee voice are associated with high retention rates. Implications for theory and research as well as management practice are discussed.", "title": "" }, { "docid": "5793b2b2edbcb1443be7de07406f0fd2", "text": "Question answering is a complex and valuable task in natural language processing and artificial intelligence. Several deep learning models having already been proposed to solve it. In this work, we propose a deep learning model with an attention mechanism that is based on a previous work and a decoder that incorporates a wide summary of the context and question. That summary includes a condensed representation of the question, a context paragraph representation previous created by the model, as well as positional question summaries created by the attention mechanism. We demonstrate that a strong attention layer allows a deep learning model to do well even on long questions and context paragraphs in addition to contributing significantly to model performance.", "title": "" }, { "docid": "af952f9368761c201c5dfe4832686e87", "text": "The field of service design is expanding rapidly in practice, and a body of formal research is beginning to appear to which the present article makes an important contribution. As innovations in services develop, there is an increasing need not only for research into emerging practices and developments but also into the methods that enable, support and promote such unfolding changes. This article tackles this need directly by referring to a large design research project, and performing a related practicebased inquiry into the co-design and development of methods for fostering service design in organizations wishing to improve their service offerings to customers. In particular, with reference to a funded four-year research project, one aspect is elaborated on that uses cards as a method to focus on the importance and potential of touch-points in service innovation. Touch-points are one of five aspects in the project that comprise a wider, integrated model and means for implementing innovations in service design. Touch-points are the points of contact between a service provider and customers. A customer might utilise many different touch-points as part of a use scenario (often called a customer journey). For example, a bank’s touch points include its physical buildings, web-site, physical print-outs, self-service machines, bank-cards, customer assistants, call-centres, telephone assistance etc. Each time a person relates to, or interacts with, a touch-point, they have a service-encounter. This gives an experience and adds something to the person’s relationship with the service and the service provider. 
The sum of all experiences from touch-point interactions colours their opinion of the service (and the service provider). Touch-points are one of the central aspects of service design. A commonly used definition of service design is “Design for experiences that happen over time and across different touchpoints” (ServiceDesign.org). As this definition shows, touchpoints are often cited as one of the major elements of service", "title": "" }, { "docid": "80e6a7287c6da44387ceb3938dedb509", "text": "By taking advantage of the elevation domain, three-dimensional (3-D) multiple input and multiple output (MIMO) with massive antenna elements is considered as a promising and practical technique for the fifth Generation mobile communication system. So far, 3-D MIMO is mostly studied by simulation and a few field trials have been launched recently. It still remains unknown how much does the 3-D MIMO meet our expectations in versatile scenarios. In this paper, we answer this based on measurements with $56\\times 32$ antenna elements at 3.5 GHz with 100-MHz bandwidth in three typical deployment scenarios, including outdoor to indoor (O2I), urban microcell (UMi), and urban macrocell (UMa). Each scenario contains two different site locations and 2–5 test routes under the same configuration. Based on the measured data, both elevation and azimuth angles are extracted and their stochastic behaviors are investigated. Then, we reconstruct two-dimensional and 3-D MIMO channels based on the measured data, and compare the capacity and eigenvalues distribution. It is observed that 3-D MIMO channel which fully utilizes the elevation domain does improve capacity and also enhance the contributing eigenvalue number. However, this gain varies from scenario to scenario in reality, O2I is the most beneficial scenario, then followed by UMi and UMa scenarios. More results of multiuser capacity varying with the scenario, antenna number and user number can provide the experimental insights for the efficient utilization of 3-D MIMO in future.", "title": "" }, { "docid": "a839016be99c3cb93d30fa48403086d8", "text": "At synapses of the mammalian central nervous system, release of neurotransmitter occurs at rates transiently as high as 100 Hz, putting extreme demands on nerve terminals with only tens of functional vesicles at their disposal. Thus, the presynaptic vesicle cycle is particularly critical to maintain neurotransmission. To understand vesicle cycling at the most fundamental level, we studied single vesicles undergoing exo/endocytosis and tracked the fate of newly retrieved vesicles. This was accomplished by minimally stimulating boutons in the presence of the membrane-fluorescent styryl dye FM1-43, then selecting for terminals that contained only one dye-filled vesicle. We then observed the kinetics of dye release during single action potential stimulation. We found that most vesicles lost only a portion of their total dye during a single fusion event, but were able to fuse again soon thereafter. We interpret this as direct evidence of \"kiss-and-run\" followed by rapid reuse. Other interpretations such as \"partial loading\" and \"endosomal splitting\" were largely excluded on the basis of multiple lines of evidence. Our data placed an upper bound of <1.4 s on the lifetime of the kiss-and-run fusion event, based on the assumption that aqueous departitioning is rate limiting. The repeated use of individual vesicles held over a range of stimulus frequencies up to 30 Hz and was associated with neurotransmitter release. 
A small percentage of fusion events did release a whole vesicle's worth of dye in one action potential, consistent with a classical picture of exocytosis as fusion followed by complete collapse or at least very slow retrieval.", "title": "" }, { "docid": "5056c2a6f132c25e4b0ff1a79c72f508", "text": "The proliferation of Bluetooth Low-Energy (BLE) chipsets on mobile devices has lead to a wide variety of user-installable tags and beacons designed for location-aware applications. In this paper, we present the Acoustic Location Processing System (ALPS), a platform that augments BLE transmitters with ultrasound in a manner that improves ranging accuracy and can help users configure indoor localization systems with minimal effort. A user places three or more beacons in an environment and then walks through a calibration sequence with their mobile device where they touch key points in the environment like the floor and the corners of the room. This process automatically computes the room geometry as well as the precise beacon locations without needing auxiliary measurements. Once configured, the system can track a user's location referenced to a map.\n The platform consists of time-synchronized ultrasonic transmitters that utilize the bandwidth just above the human hearing limit, where mobile devices are still sensitive and can detect ranging signals. To aid in the mapping process, the beacons perform inter-beacon ranging during setup. Each beacon includes a BLE radio that can identify and trigger the ultrasonic signals. By using differences in propagation characteristics between ultrasound and radio, the system can classify if beacons are within Line-Of-Sight (LOS) to the mobile phone. In cases where beacons are blocked, we show how the phone's inertial measurement sensors can be used to supplement localization data. We experimentally evaluate that our system can estimate three-dimensional beacon location with a Euclidean distance error of 16.1cm, and can generate maps with room measurements with a two-dimensional Euclidean distance error of 19.8cm. When tested in six different environments, we saw that the system can identify Non-Line-Of-Sight (NLOS) signals with over 80% accuracy and track a user's location to within less than 100cm.", "title": "" }, { "docid": "facc1845ddde1957b2c1b74a62d74261", "text": "The large availability of user provided contents on online social media facilitates people aggregation around shared beliefs, interests, worldviews and narratives. In spite of the enthusiastic rhetoric about the so called collective intelligence unsubstantiated rumors and conspiracy theories-e.g., chemtrails, reptilians or the Illuminati-are pervasive in online social networks (OSN). In this work we study, on a sample of 1.2 million of individuals, how information related to very distinct narratives-i.e. main stream scientific and conspiracy news-are consumed and shape communities on Facebook. Our results show that polarized communities emerge around distinct types of contents and usual consumers of conspiracy news result to be more focused and self-contained on their specific contents. To test potential biases induced by the continued exposure to unsubstantiated rumors on users' content selection, we conclude our analysis measuring how users respond to 4,709 troll information-i.e. parodistic and sarcastic imitation of conspiracy theories. 
We find that 77.92% of likes and 80.86% of comments are from users usually interacting with conspiracy stories.", "title": "" }, { "docid": "8e3bf062119c6de9fa5670ce4b00764b", "text": "Heating red phosphorus in sealed ampoules in the presence of a Sn/SnI4 catalyst mixture has provided bulk black phosphorus at much lower pressures than those required for allotropic conversion by anvil cells. Herein we report the growth of ultra-long 1D red phosphorus nanowires (>1 mm) selectively onto a wafer substrate from red phosphorus powder and a thin film of red phosphorus in the present of a Sn/SnI4 catalyst. Raman spectra and X-ray diffraction characterization suggested the formation of crystalline red phosphorus nanowires. FET devices constructed with the red phosphorus nanowires displayed a typical I-V curve similar to that of black phosphorus and a similar mobility reaching 300 cm(2)  V(-1)  s with an Ion /Ioff ratio approaching 10(2) . A significant response to infrared light was observed from the FET device.", "title": "" }, { "docid": "b6249dbd61928a0722e0bcbf18cd9f79", "text": "For many applications such as tele-operational robots and interactions with virtual environments, it is better to have performance with force feedback than without. Haptic devices are force reflecting interfaces. They can also track human hand positions simultaneously. A new 6 DOF (degree-of-freedom) haptic device was designed and calibrated in this study. It mainly contains a double parallel linkage, a rhombus linkage, a rotating mechanical structure and a grasping interface. Benefited from the unique design, it is a hybrid structure device with a large workspace and high output capability. Therefore, it is capable of multi-finger interactions. Moreover, with an adjustable base, operators can change different postures without interrupting haptic tasks. To investigate the performance regarding position tracking accuracy and static output forces, we conducted experiments on a three-dimensional electric sliding platform and a digital force gauge, respectively. Displacement errors and force errors are calculated and analyzed. To identify the capability and potential of the device, four application examples were programmed.", "title": "" }, { "docid": "03dcb05a6aa763b6b0a5cdc58ddb81d8", "text": "In this paper, a phase-shifted dual H-bridge converter, which can solve the drawbacks of existing phase-shifted full-bridge converters such as narrow zero-voltage-switching (ZVS) range, large circulating current, large duty-cycle loss, and serious secondary-voltage overshoot and oscillation, is analyzed and evaluated. The proposed topology is composed of two symmetric half-bridge inverters that are placed in parallel on the primary side and are driven in a phase-shifting manner to regulate the output voltage. At the rectifier stage, a center-tap-type rectifier with two additional low-current-rated diodes is employed. This structure allows the proposed converter to have the advantages of a wide ZVS range, no problems related to duty-cycle loss, no circulating current, and the reduction of secondary-voltage oscillation and overshoot. Moreover, the output filter's size becomes smaller compared to the conventional phase-shift full-bridge converters. This paper describes the operation principle of the proposed converter and the analysis and design consideration in depth. 
A 1-kW 320-385-V input 50-V output laboratory prototype operating at a 100-kHz switching frequency is designed, built, and tested to verify the effectiveness of the presented converter.", "title": "" }, { "docid": "c5ff79665033fd215411069cb860d641", "text": "This paper presents a new geometry-based method to determine if a cable-driven robot operating in a d-degree-of-freedom workspace (2 ≤ d ≤ 6) with n ≥ d cables can generate a given set of wrenches in a given pose, considering acceptable minimum and maximum tensions in the cables. To this end, the fundamental nature of the Available Wrench Set is studied. The latter concept, defined here, is closely related to similar sets introduced in [23, 4]. It is shown that the Available Wrench Set can be represented mathematically by a zonotope, a special class of convex polytopes. Using the properties of zonotopes, two methods to construct the Available Wrench Set are discussed. From the representation of the Available Wrench Set, computationallyefficient and non-iterative tests are presented to verify if this set includes the Task Wrench Set, the set of wrenches needed for a given task. INTRODUCTION AND PROBLEM DEFINITION A cable-driven robot, or simply cable robot, is a parallel robot whose actuated limbs are cables. The length of the cables can be adjusted in a coordinated manner to control the pose (position and orientation) and/or wrench (force and torque) at the moving platform. Pioneer applications of such mechanisms are the NIST Robocrane [1], the Falcon high-speed manipulator [15] and the Skycam [7]. The fact that cables can only exert efforts in one direction impacts the capability of the mechanism to generate wrenches at the platform. Previous work already presented methods to test if a set of wrenches – ranging from one to all possible wrenches – could be generated by a cable robot in a given pose, considering that cables work only in tension. Some of the proposed methods focus on fully constrained cable robots while others apply to unconstrained robots. In all cases, minimum and/or maximum cable tensions is considered. A complete section of this paper is dedicated to the comparison of the proposed approach with previous methods. A general geometric approach that addresses all possible cases without using an iterative algorithm is presented here. It will be shown that the results obtained with this approach are consistent with the ones previously presented in the literature [4, 5, 14, 17, 18, 22, 23, 24, 26]. This paper does not address the workspace of cable robots. The latter challenging problem was addressed in several papers over the recent years [10, 11, 12, 19, 25]. Before looking globally at the workspace, all proposed methods must go through the intermediate step of assessing the capability of a mechanism to generate a given set of wrenches. The approach proposed here is also compared with the intermediate steps of the papers on the workspace determination of cable robots. The task that a robot has to achieve implies that it will have to be able to generate a given set of wrenches in a given pose x. This Task Wrench Set, T , depends on the various applications of the considered robot, which can be for example to move a camera or other sensors [7, 6, 9, 3], manipulate payloads [15, 1] or simulate walking sensations to a user immersed in virtual reality [21], just to name a few. The Available Wrench Set, A, is the set of wrenches that the mechanism can generate. 
This set depends on the architecture of the robot, i.e., where the cables are attached on the platform and where the fixed winches are located. It also depends on the configuration pose as well as on the minimum and maximum acceptable tension in the cables. All the wrenches that are possibly needed to accomplish a task can 1 Copyright  2008 by ASME", "title": "" }, { "docid": "9fdd2b84fc412e03016a12d951e4be01", "text": "We examine the implications of shape on the process of finding dense correspondence and half-occlusions for a stereo pair of images. The desired property of the disparity map is that it should be a piecewise continuous function which is consistent with the images and which has the minimum number of discontinuities. To zeroth order, piecewise continuity becomes piecewise constancy. Using this approximation, we first discuss an approach for dealing with such a fronto-parallel shapeless world, and the problems involved therein. We then introduce horizontal and vertical slant to create a first order approximation to piecewise continuity. In particular, we emphasize the following geometric fact: a horizontally slanted surface (i.e., having depth variation in the direction of the separation of the two cameras) will appear horizontally stretched in one image as compared to the other image. Thus, while corresponding two images, N pixels on a scanline in one image may correspond to a different number of pixels M in the other image. This leads to three important modifications to existing stereo algorithms: (a) due to unequal sampling, existing intensity matching metrics must be modified, (b) unequal numbers of pixels in the two images must be allowed to correspond to each other, and (c) the uniqueness constraint, which is often used for detecting occlusions, must be changed to an interval uniqueness constraint. We also discuss the asymmetry between vertical and horizontal slant, and the central role of non-horizontal edges in the context of vertical slant. Using experiments, we discuss cases where existing algorithms fail, and how the incorporation of these new constraints provides correct results.", "title": "" }, { "docid": "639dad156dee50e41c05ac1c77abc3e2", "text": "Digital radiography offers the potential of improved image quality as well as providing opportunities for advances in medical image management, computer-aided diagnosis and teleradiology. Image quality is intimately linked to the precise and accurate acquisition of information from the x-ray beam transmitted by the patient, i.e. to the performance of the x-ray detector. Detectors for digital radiography must meet the needs of the specific radiological procedure where they will be used. Key parameters are spatial resolution, uniformity of response, contrast sensitivity, dynamic range, acquisition speed and frame rate. The underlying physical considerations defining the performance of x-ray detectors for radiography will be reviewed. Some of the more promising existing and experimental detector technologies which may be suitable for digital radiography will be considered. Devices that can be employed in full-area detectors and also those more appropriate for scanning x-ray systems will be discussed. 
These include various approaches based on phosphor x-ray converters, where light quanta are produced as an intermediate stage, as well as direct x-ray-to-charge conversion materials such as zinc cadmium telluride, amorphous selenium and crystalline silicon.", "title": "" }, { "docid": "753d840a62fc4f4b57f447afae07ba84", "text": "Feature selection has been proven to be effective and efficient in preparing high-dimensional data for data mining and machine learning problems. Since real-world data is usually unlabeled, unsupervised feature selection has received increasing attention in recent years. Without label information, unsupervised feature selection needs alternative criteria to define feature relevance. Recently, data reconstruction error emerged as a new criterion for unsupervised feature selection, which defines feature relevance as the capability of features to approximate original data via a reconstruction function. Most existing algorithms in this family assume predefined, linear reconstruction functions. However, the reconstruction function should be data dependent and may not always be linear especially when the original data is high-dimensional. In this paper, we investigate how to learn the reconstruction function from the data automatically for unsupervised feature selection, and propose a novel reconstruction-based unsupervised feature selection framework REFS, which embeds the reconstruction function learning process into feature selection. Experiments on various types of realworld datasets demonstrate the effectiveness of the proposed framework REFS.", "title": "" }, { "docid": "e8c37cb37bf9f0a34eaa5e18908e751d", "text": "Dr. Turki K. Hassan* & Enaam A. Ali* Received on: 17/5/2009 Accepted on: 16/2/2010 Abstract The purpose of this work is to study, analyze, and design a half-bridge seriesparallel resonant inverter for induction heating applications. A pulse width modulation (PWM)-based double integral sliding mode voltage controlled buck converter is proposed for control the induction heating power. This type of controller is used in order to obtain very small steady state error, stable and fast dynamic response, and robustness against variations in the line voltage and converter parameters. A small induction heating coil is designed and constructed. A carbon steel (C45) cylindrical billet is used as a load. The induction heating load parameters (RL and LL) are measured at the resonant frequency of 85 kHz. The parameters of the resonant circuit are chosen for operation at resonant. The inverter is operated at unity power factor by phased locked loop (PLL) control irrespective of load variations, with maximum current gain, and practically no voltage spikes in the switching devices at turn-off, therefore no snubber circuit is used for operation at unity power factor. A power MOSFET transistor is used as a switching device for buck converter and the IGBT transistor is used as a switching device for the inverter. A complete designed system is simulated using Matlab/Simulink. All the electronic control circuits are designed and implemented. The practical results are compared with simulation results to verify the proposed induction heating system. A close agreement between simulation and practical results is noticed and a good performance is achieved.", "title": "" }, { "docid": "d4d46f30a1e918f89948110dc9c36464", "text": "Many real-world problems involve the optimization of multiple, possibly conflicting objectives. 
Multi-objective reinforcement learning (MORL) is a generalization of standard reinforcement learning where the scalar reward signal is extended to multiple feedback signals, in essence, one for each objective. MORL is the process of learning policies that optimize multiple criteria simultaneously. In this paper, we present a novel temporal difference learning algorithm that integrates the Pareto dominance relation into a reinforcement learning approach. This algorithm is a multi-policy algorithm that learns a set of Pareto dominating policies in a single run. We name this algorithm Pareto Q-learning and it is applicable in episodic environments with deterministic as well as stochastic transition functions. A crucial aspect of Pareto Q-learning is the updating mechanism that bootstraps sets of Q-vectors. One of our main contributions in this paper is a mechanism that separates the expected immediate reward vector from the set of expected future discounted reward vectors. This decomposition allows us to update the sets and to exploit the learned policies consistently throughout the state space. To balance exploration and exploitation during learning, we also propose three set evaluation mechanisms. These three mechanisms evaluate the sets of vectors to accommodate for standard action selection strategies, such as -greedy. More precisely, these mechanisms use multi-objective evaluation principles such as the hypervolume measure, the cardinality indicator and the Pareto dominance relation to select the most promising actions. We experimentally validate the algorithm on multiple environments with two and three objectives and we demonstrate that Pareto Q-learning outperforms current state-of-the-art MORL algorithms with respect to the hypervolume of the obtained policies. We note that (1) Pareto Q-learning is able to learn the entire Pareto front under the usual assumption that each state-action pair is sufficiently sampled, while (2) not being biased by the shape of the Pareto front. Furthermore, (3) the set evaluation mechanisms provide indicative measures for local action selection and (4) the learned policies can be retrieved throughout the state and action space.", "title": "" } ]
scidocsrr
c85f8681d358ea0d1fc530f772a66604
Using mathematical morphology for document skew estimation
[ { "docid": "9d7df3f82d844ff74f438537bd2927b9", "text": "Several approaches have previously been taken for identify ing document image skew. At issue are efficiency, accuracy, and robustness. We work dire ctly with the image, maximizing a function of the number of ON pixels in a scanline. Image rotat i n is simulated by either vertical shear or accumulation of pixel counts along sloped lines . Pixel sum differences on adjacent scanlines reduce isotropic background noise from non-text regions. To find the skew angle, a succession of values of this function are found. Angles are chosen hierarchically, typically with both a coarse sweep and a fine angular bifurcation. To inc rease efficiency, measurements are made on subsampled images that have been pre-filtered to m aximize sensitivity to image skew. Results are given for a large set of images, includi ng multiple and unaligned text columns, graphics and large area halftones. The measured in t insic angular error is inversely proportional to the number of sampling points on a scanline. This method does not indicate when text is upside-down, and i t also requires sampling the function at 90 degrees of rotation to measure text skew in lan dscape mode. However, such text orientation can be determined (as one of four direction s) by noting that roman characters in all languages have many more ascenders than descenders, a nd using morphological operations to identify such pixels. Only a small amount of text is r equired for accurate statistical determination of orientation, and images without text are i dentified as such.", "title": "" }, { "docid": "517de02a0eff7e5bf3e913ca74f09d10", "text": "Any paper document when converted to electronic form through standard digitizing devices, like scanners, is subject to a small tilt or skew. Meanwhile, a de-skewed document allows a more compact representation of its components, particularly text objects, such as words, lines, and paragraphs, where they can be represented by their rectilinear bounding boxes. This simplified representation leads to more efficient, robust, as well as simpler algorithms for document image analysis including optical character recognition (OCR). This paper presents a new method for automatic detection of skew in a document image using mathematical morphology. The proposed algorithm is extremely fast as well as independent of script forms.", "title": "" } ]
[ { "docid": "b93919bbb2dab3a687cccb71ee515793", "text": "The processing and analysis of colour images has become an important area of study and application. The representation of the RGB colour space in 3D-polar coordinates (hue, saturation and brightness) can sometimes simplify this task by revealing characteristics not visible in the rectangular coordinate representation. The literature describes many such spaces (HLS, HSV, etc.), but many of them, having been developed for computer graphics applications, are unsuited to image processing and analysis tasks. We describe the flaws present in these colour spaces, and present three prerequisites for 3D-polar coordinate colour spaces well-suited to image processing and analysis. We then derive 3D-polar coordinate representations which satisfy the prerequisites, namely a space based on the norm which has efficient linear transform functions to and from the RGB space; and an improved HLS (IHLS) space. The most important property of this latter space is a “well-behaved” saturation coordinate which, in contrast to commonly used ones, always has a small numerical value for near-achromatic colours, and is completely independent of the brightness function. Three applications taking advantage of the good properties of the IHLS space are described: the calculation of a saturation-weighted hue mean and of saturation-weighted hue histograms, and feature extraction using mathematical morphology. 1Updated July 16, 2003. 2Jean Serra is with the Centre de Morphologie Mathématique, Ecole des Mines de Paris, 35 rue Saint-Honoré, 77305 Fontainebleau cedex, France.", "title": "" }, { "docid": "8dd3b98c6e28db1de4a473c4d576e3c5", "text": "In this paper, we address the semisupervised distance metric learning problem and its applications in classification and image retrieval. First, we formulate a semisupervised distance metric learning model by considering the metric information of inner classes and interclasses. In this model, an adaptive parameter is designed to balance the inner metrics and intermetrics by using data structure. Second, we convert the model to a minimization problem whose variable is symmetric positive-definite matrix. Third, in implementation, we deduce an intrinsic steepest descent method, which assures that the metric matrix is strictly symmetric positive-definite at each iteration, with the manifold structure of the symmetric positive-definite matrix manifold. Finally, we test the proposed algorithm on conventional data sets, and compare it with other four representative methods. The numerical results validate that the proposed method significantly improves the classification with the same computational efficiency.", "title": "" }, { "docid": "72cfe76ea68d5692731531aea02444d0", "text": "Primary human tumor culture models allow for individualized drug sensitivity testing and are therefore a promising technique to achieve personalized treatment for cancer patients. This would especially be of interest for patients with advanced stage head and neck cancer. They are extensively treated with surgery, usually in combination with high-dose cisplatin chemoradiation. However, adding cisplatin to radiotherapy is associated with an increase in severe acute toxicity, while conferring only a minor overall survival benefit. Hence, there is a strong need for a preclinical model to identify patients that will respond to the intended treatment regimen and to test novel drugs. One of such models is the technique of culturing primary human tumor tissue. 
This review discusses the feasibility and success rate of existing primary head and neck tumor culturing techniques and their corresponding chemo- and radiosensitivity assays. A comprehensive literature search was performed and success factors for culturing in vitro are debated, together with the actual value of these models as preclinical prediction assay for individual patients. With this review, we aim to fill a gap in the understanding of primary culture models from head and neck tumors, with potential importance for other tumor types as well.", "title": "" }, { "docid": "6f1550434a03ff0cf47c73ae9592a2f6", "text": "This paper presents focused synthetic aperture radar (SAR) processing of airborne radar sounding data acquired with the High-Capability Radar Sounder system at 60 MHz. The motivation is to improve basal reflection analysis for water detection and to improve layer detection and tracking. The processing and reflection analyses are applied to data from Kamb Ice Stream, West Antarctica. The SAR processor correlates the radar data with reference echoes from subsurface point targets. The references are 1-D responses limited by the pulse nadir footprint or 2-D responses that include echo tails. Unfocused SAR and incoherent integration are included for comparison. Echoes are accurately preserved from along-track slopes up to about 0.5deg for unfocused SAR, 3deg for 1-D correlations, and 10deg for 2-D correlations. The noise/clutter levels increase from unfocused SAR to 1-D and 2-D correlations, but additional gain compensates at the basal interface. The basal echo signal-to-noise ratio improvement is typically about 5 dB, and up to 10 dB for 2-D correlations in rough regions. The increased noise degrades the clarity of internal layers in the 2-D correlations, but detection of layers with slopes greater than 3deg is improved. Reflection coefficients are computed for basal water detection, and the results are compared for the different processing methods. There is a significant increase in the detected water from unfocused SAR to 1-D correlations, indicating that substantial basal water exists on moderately sloped interfaces. Very little additional water is detected from the 2-D correlations. The results from incoherent integration are close to the focused SAR results, but the noise/clutter levels are much greater.", "title": "" }, { "docid": "13cfc33bd8611b3baaa9be37ea9d627e", "text": "Some of the more difficult to define aspects of the therapeutic process (empathy, compassion, presence) remain some of the most important. Teaching them presents a challenge for therapist trainees and educators alike. In this study, we examine our beginning practicum students' experience of learning mindfulness meditation as a way to help them develop therapeutic presence. Through thematic analysis of their journal entries a variety of themes emerged, including the effects of meditation practice, the ability to be present, balancing being and doing modes in therapy, and the development of acceptance and compassion for themselves and for their clients. Our findings suggest that mindfulness meditation may be a useful addition to clinical training.", "title": "" }, { "docid": "3dd732828151a63d090a2633e3e48fac", "text": "This article shows the potential for convex optimization methods to be much more widely used in signal processing. In particular, automatic code generation makes it easier to create convex optimization solvers that are made much faster by being designed for a specific problem family. 
The disciplined convex programming framework that has been shown useful in transforming problems to a standard form may be extended to create solvers themselves. Much work remains to be done in exploring the capabilities and limitations of automatic code generation. As computing power increases, and as automatic code generation improves, the authors expect convex optimization solvers to be found more and more often in real-time signal processing applications.", "title": "" }, { "docid": "ff826e50f789d4e47f30ec22396c365d", "text": "In present Scenario of the world, Internet has almost reached to every aspect of our lives. Due to this, most of the information sharing and communication is carried out using web. With such rapid development of Internet technology, a big issue arises of unauthorized access to confidential data, which leads to utmost need of information security while transmission. Cryptography and Steganography are two of the popular techniques used for secure transmission. Steganography is more reliable over cryptography as it embeds secret data within some cover material. Unlike cryptography, Steganography is not for keeping message hidden from intruders but it does not allow anyone to know that hidden information even exist in communicated material, as the transmitted material looks like any normal message which seem to be of no use for intruders. Although, Steganography covers many types of covers to hide data like text, image, audio, video and protocols but recent developments focuses on Image Steganography due to its large data hiding capacity and difficult identification, also due to their greater scope and bulk sharing within social networks. A large number of techniques are available to hide secret data within digital images such as LSB, ISB, and MLSB etc. In this paper, a detailed review will be presented on Image Steganography and also different data hiding and security techniques using digital images with their scope and features.", "title": "" }, { "docid": "339efad8a055a90b43abebd9a4884baa", "text": "The paper presents an investigation into the role of virtual reality and web technologies in the field of distance education. Within this frame, special emphasis is given on the building of web-based virtual learning environments so as to successfully fulfill their educational objectives. In particular, basic pedagogical methods are studied, focusing mainly on the efficient preparation, approach and presentation of learning content, and specific designing rules are presented considering the hypermedia, virtual and educational nature of this kind of applications. The paper also aims to highlight the educational benefits arising from the use of virtual reality technology in medicine and study the emerging area of web-based medical simulations. Finally, an innovative virtual reality environment for distance education in medicine is demonstrated. The proposed environment reproduces conditions of the real learning process and enhances learning through a real-time interactive simulator. Keywords—Distance education, medicine, virtual reality, web.", "title": "" }, { "docid": "095f4ea337421d6e1310acf73977fdaa", "text": "We consider the problem of autonomous robotic laundry folding, and propose a solution to the perception and manipulation challenges inherent to the task. 
At the core of our approach is a quasi-static cloth model which allows us to neglect the complex dynamics of cloth under significant parts of the state space, allowing us to reason instead in terms of simple geometry. We present an algorithm which, given a 2D cloth polygon and a desired sequence of folds, outputs a motion plan for executing the corresponding manipulations, deemed g-folds, on a minimal number of robot grippers. We define parametrized fold sequences for four clothing categories: towels, pants, short-sleeved shirts, and long-sleeved shirts, each represented as polygons. We then devise a model-based optimization approach for visually inferring the class and pose of a spread-out or folded clothing article from a single image, such that the resulting polygon provides a parse suitable for these folding primitives. We test the manipulation and perception tasks individually, and combine them to implement an autonomous folding system on the Willow Garage PR2. This enables the PR2 to identify a clothing article spread out on a table, execute the computed folding sequence, and visually track its progress over successive folds.", "title": "" }, { "docid": "b4e9696cc1804bb5bcf2006ef2705b11", "text": "The conductivity of a thermal-barrier coating composed of atmospheric plasma sprayed 8 mass percent yttria partially stabilized zirconia has been measured. This coating was sprayed on a substrate of 410 stainless steel. An absolute, steady-state measurement method was used to measure thermal conductivity from 400 to 800 K. The thermal conductivity of the coating is 0.62 W/(m⋅K). This measurement has shown to be temperature independent.", "title": "" }, { "docid": "e0ba4e4b7af3cba6bed51f2f697ebe5e", "text": "In this paper, we argue that instead of solely focusing on developing efficient architectures to accelerate well-known low-precision CNNs, we should also seek to modify the network to suit the FPGA. We develop a fully automative toolflow which focuses on modifying the network through filter pruning, such that it efficiently utilizes the FPGA hardware whilst satisfying a predefined accuracy threshold. Although fewer weights are re-moved in comparison to traditional pruning techniques designed for software implementations, the overall model complexity and feature map storage is greatly reduced. We implement the AlexNet and TinyYolo networks on the large-scale ImageNet and PascalVOC datasets, to demonstrate up to roughly 2× speedup in frames per second and 2× reduction in resource requirements over the original network, with equal or improved accuracy.", "title": "" }, { "docid": "4f0274c2303560867fb1f4fe922db86f", "text": "Cerebral activation was measured with positron emission tomography in ten human volunteers. The primary auditory cortex showed increased activity in response to noise bursts, whereas acoustically matched speech syllables activated secondary auditory cortices bilaterally. Instructions to make judgments about different attributes of the same speech signal resulted in activation of specific lateralized neural systems. Discrimination of phonetic structure led to increased activity in part of Broca's area of the left hemisphere, suggesting a role for articulatory recoding in phonetic perception. 
Processing changes in pitch produced activation of the right prefrontal cortex, consistent with the importance of right-hemisphere mechanisms in pitch perception.", "title": "" }, { "docid": "c0d646e248f240681e36113bf0ea41a3", "text": "Existing methods for multi-domain image-to-image translation (or generation) attempt to directly map an input image (or a random vector) to an image in one of the output domains. However, most existing methods have limited scalability and robustness, since they require building independent models for each pair of domains in question. This leads to two significant shortcomings: (1) the need to train exponential number of pairwise models, and (2) the inability to leverage data from other domains when training a particular pairwise mapping. Inspired by recent work on module networks [2], this paper proposes ModularGAN for multi-domain image generation and image-to-image translation. ModularGAN consists of several reusable and composable modules that carry on different functions (e.g., encoding, decoding, transformations). These modules can be trained simultaneously, leveraging data from all domains, and then combined to construct specific GAN networks at test time, according to the specific image translation task. This leads to ModularGAN’s superior flexibility of generating (or translating to) an image in any desired domain. Experimental results demonstrate that our model not only presents compelling perceptual results but also outperforms state-of-the-art methods on multi-domain facial attribute transfer.", "title": "" }, { "docid": "99c1b5ed924012118e72475dee609b3d", "text": "Lack of trust in online transactions has been cited, by past scholars, as the main reason for the abhorrence of online shopping. In this paper we proposed a model and provided empirical evidence on the impact of the website characteristics on trust in online transactions in Indian context. In the first phase, we identified and empirically verified the relative importance of the website factors that develop online trust in India. In the next phase, we have tested the mediator effect of trust in the relationship between the website factors and purchase intention (and perceived risk). The present study for the first time provided empirical evidence on the mediating role of trust in online shopping among Indian customers.", "title": "" }, { "docid": "a10b5e26b695b704f2329ff7995d099e", "text": "I draw the reader’s attention to machine teaching, the problem of finding an optimal training set given a machine learning algorithm and a target model. In addition to generating fascinating mathematical questions for computer scientists to ponder, machine teaching holds the promise of enhancing education and personnel training. The Socratic dialogue style aims to stimulate critical thinking.", "title": "" }, { "docid": "8c80b8b0e00fa6163d945f7b1b8f63e5", "text": "In this paper, we propose an architecture model called Design Rule Space (DRSpace). We model the architecture of a software system as multiple overlapping DRSpaces, reflecting the fact that any complex software system must contain multiple aspects, features, patterns, etc. We show that this model provides new ways to analyze software quality. In particular, we introduce an Architecture Root detection algorithm that captures DRSpaces containing large numbers of a project’s bug-prone files, which are called Architecture Roots (ArchRoots). 
After investigating ArchRoots calculated from 15 open source projects, the following observations become clear: from 35% to 91% of a project’s most bug-prone files can be captured by just 5 ArchRoots, meaning that bug-prone files are likely to be architecturally connected. Furthermore, these ArchRoots tend to live in the system for significant periods of time, serving as the major source of bug-proneness and high maintainability costs. Moreover, each ArchRoot reveals multiple architectural flaws that propagate bugs among files and this will incur high maintenance costs over time. The implication of our study is that the quality, in terms of bug-proneness, of a large, complex software project cannot be fundamentally improved without first fixing its architectural flaws.", "title": "" }, { "docid": "82985f584f51a5e103b29265878335e5", "text": "Orthodontic management for patients with single or bilateral congenitally missing permanent lateral incisors is a challenge to effective treatment planning. Over the last several decades, dentistry has focused on several treatment modalities for replacement of missing teeth. The two major alternative treatment options are orthodontic space closure or space opening for prosthetic replacements. For patients with high aesthetic expectations implants are one of the treatment of choices, especially when it comes to replacement of missing maxillary lateral incisors and mandibular incisors. Edentulous areas where the available bone is compromised to use conventional implants with 2,5 mm or more in diameter, narrow diameter implants with less than 2,5 mm diameter can be successfully used. This case report deals with managing a compromised situation in the region of maxillary lateral incisor using a narrow diameter implant.", "title": "" }, { "docid": "e856bca86bb757d11b30f3a3916fa06c", "text": "A X-band reconfigurable active phased array antenna system is presented. The phased array system consists of interconnected tile modules of which number can be flexibly changed depending on system requirements. The PCB integrated tile module assembles 4×4 patch antennas and flip-chipped phased array 0.13-μm SiGe BiCMOS ICs with 5-bit IF phase shifters. The concept of scalable phased array is verified by narrowing beamwidth in beamforming pattern and improving SNR in data transmission of 64-QAM OFDM signal as increasing the number of antenna elements.", "title": "" }, { "docid": "66acaa4909502a8d7213366e0667c3c2", "text": "Facial rejuvenation, particularly lip augmentation, has gained widespread popularity. An appreciation of perioral anatomy as well as the structural characteristics that define the aging face is critical to achieve optimal patient outcomes. Although techniques and technology evolve continuously, hyaluronic acid (HA) dermal fillers continue to dominate aesthetic practice. A combination approach including neurotoxin and volume restoration demonstrates superior results in select settings.", "title": "" }, { "docid": "669de02f4c87c2a67e776410f70bf801", "text": "Repeating an item in a list benefits recall performance, and this benefit increases when the repetitions are spaced apart (Madigan, 1969; Melton, 1970). Retrieved context theory incorporates 2 mechanisms that account for these effects: contextual variability and study-phase retrieval. 
Specifically, if an item presented at position i is repeated at position j, this leads to retrieval of its context from its initial presentation at i (study-phase retrieval), and this retrieved context will be used to update the current state of context (contextual variability). Here we consider predictions of a computational model that embodies retrieved context theory, the context maintenance and retrieval model (CMR; Polyn, Norman, & Kahana, 2009). CMR makes the novel prediction that subjects are more likely to successively recall items that follow a shared repeated item (e.g., i + 1, j + 1) because both items are associated with the context of the repeated item presented at i and j. CMR also predicts that the probability of recalling at least 1 of 2 studied items should increase with the items' spacing (Lohnas, Polyn, & Kahana, 2011). We tested these predictions in a new experiment, and CMR's predictions were upheld. These findings suggest that retrieved context theory offers an integrated explanation for repetition and spacing effects in free recall tasks.", "title": "" } ]
scidocsrr
eb97c4e814cfff02c7fc273eab5218f0
3D region segmentation using topological persistence
[ { "docid": "6ed624fa056d1f92cc8e58401ab3036e", "text": "In this paper, we present an approach to segment 3D point cloud data using ideas from persistent homology theory. The proposed algorithms first generate a simplicial complex representation of the point cloud dataset. Next, we compute the zeroth homology group of the complex which corresponds to the number of connected components. Finally, we extract the clusters of each connected component in the dataset. We show that this technique has several advantages over state of the art methods such as the ability to provide a stable segmentation of point cloud data under noisy or poor sampling conditions and its independence of a fixed distance metric.", "title": "" } ]
[ { "docid": "548ca7ecd778bc64e4a3812acd73dcfb", "text": "Inference algorithms of latent Dirichlet allocation (LDA), either for small or big data, can be broadly categorized into expectation-maximization (EM), variational Bayes (VB) and collapsed Gibbs sampling (GS). Looking for a unified understanding of these different inference algorithms is currently an important open problem. In this paper, we revisit these three algorithms from the entropy perspective, and show that EM can achieve the best predictive perplexity (a standard performance metric for LDA accuracy) by minimizing directly the cross entropy between the observed word distribution and LDA's predictive distribution. Moreover, EM can change the entropy of LDA's predictive distribution through tuning priors of LDA, such as the Dirichlet hyperparameters and the number of topics, to minimize the cross entropy with the observed word distribution. Finally, we propose the adaptive EM (AEM) algorithm that converges faster and more accurate than the current state-of-the-art SparseLDA [20] and AliasLDA [12] from small to big data and LDA models. The core idea is that the number of active topics, measured by the residuals between E-steps at successive iterations, decreases significantly, leading to the amortized σ(1) time complexity in terms of the number of topics. The open source code of AEM is available at GitHub.", "title": "" }, { "docid": "40128351f90abde13925799756dc1511", "text": "A new field of forensic accounting has emerged as current practices have been changed in electronic business environment and rapidly increasing fraudulent activities. Despite taking many forms, the fraud is usually theft of funds and information or misuse of someone's information assets. As financial frauds prevail in digital environment, accountants are the most helpful people to investigate them. However, forensic accountants in digital environment, usually called fraud investigators or fraud examiners, must be specially trained to investigate and report digital evidences in the courtroom. In this paper, the authors researched the case of financial fraud forensic analysis of the Microsoft Excel file, as it is very often used in financial reporting. We outlined some of the well-known difficulties involved in tracing the fraudster activities throughout extracted Excel file metadata, and applied a different approach from that well-described in classic postmortem computer system forensic analysis or in data mining techniques application. In the forensic examination steps we used open source code, Deft 7.1 (Digital evidence & forensic toolkit) and verified results by the other forensic tools, Meld a visual diff and merge tool to compare files and directories and KDiff tool, too. We proposed an integrated forensic accounting, functional model as a combined accounting, auditing and digital forensic investigative process. Before this approach can be properly validated some future work needs to be done, too.", "title": "" }, { "docid": "e2302f7cd00b4c832a6a708dc6775739", "text": "This article provides theoretically and practically grounded assistance to companies that are today engaged primarily in non‐digital industries in the development and implementation of business models that use the Internet of Things. To that end, we investigate the role of the Internet in business models in general in the first section. 
We conclude that the significance of the Internet in business model innovation has increased steadily since the 1990s, that each new Internet wave has given rise to new digital business model patterns, and that the biggest breakthroughs to date have been made in digital industries. In the second section, we show that digital business model patterns have now become relevant in physical industries as well. The separation between physical and digital industries is now consigned to the past. The key to this transformation is the Internet of Things which makes possible hybrid solutions that merge physical products and digital services. From this, we derive very general business model logic for the Internet of Things and some specific components and patterns for business models. Finally we sketch out the central challenges faced in implementing such hybrid business models and point to possible solutions.", "title": "" }, { "docid": "6b4a4e5271f5a33d3f30053fc6c1a4ff", "text": "Based on environmental, legal, social, and economic factors, reverse logistics and closed-loop supply chain issues have attracted attention among both academia and practitioners. This attention is evident by the vast number of publications in scientific journals which have been published in recent years. Hence, a comprehensive literature review of recent and state-of-the-art papers is vital to draw a framework of the past, and to shed light on future directions. The aim of this paper is to review recently published papers in reverse logistic and closed-loop supply chain in scientific journals. A total of 382 papers published between January 2007 and March 2013 are selected and reviewed. The papers are then analyzed and categorized to construct a useful foundation of past research. Finally, gaps in the literature are identified to clarify and to suggest future research opportunities.", "title": "" }, { "docid": "0b18f7966a57e266487023d3a2f3549d", "text": "A clear and powerful formalism for describing languages, both natural and artificial, follows from a method for expressing grammars in logic due to Colmerauer and Kowalski. This formalism, which is a natural extension of context-free grammars, we call \"definite clause grammars\" (DCGs). A DCG provides not only a description of a language, but also an effective means for analysing strings of that language, since the DCG, as it stands, is an executable program of the programming language Prolog. Using a standard Prolog compiler, the DCG can be compiled into efficient code, making it feasible to implement practical language analysers directly as DCGs. This paper compares DCGs with the successful and widely used augmented transition network (ATN) formalism, and indicates how ATNs can be translated into DCGs. It is argued that DCGs can be at least as efficient as ATNs, whilst the DCG formalism is clearer, more concise and in practice more powerful.", "title": "" }, { "docid": "ae167d6e1ff2b1ee3bd23e3e02444d0", "text": "The aim of this paper is to improve the classification performance based on the multiclass imbalanced datasets. In this paper, we introduce a new resampling approach based on Clustering with sampling for Multiclass Imbalanced classification using Ensemble (C-MIEN). C-MIEN uses the clustering approach to create a new training set for each cluster. The new training sets consist of the new label of instances with similar characteristics.
This step is applied to reduce the number of classes then the complexity problem can be easily solved by C-MIEN. After that, we apply two resampling techniques (oversampling and undersampling) to rebalance the class distribution. Finally, the class distribution of each training set is balanced and ensemble approaches are used to combine the models obtained with the proposed method through majority vote. Moreover, we carefully design the experiments and analyze the behavior of C-MIEN with different parameters (imbalance ratio and number of classifiers). The experimental results show that C-MIEN achieved higher performance than state-of-the-art methods.", "title": "" }, { "docid": "3bb905351ce1ea2150f37059ed256a90", "text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.", "title": "" }, { "docid": "b4c395b97f0482f3c1224ed6c8623ac2", "text": "The Scientific Computation Language (SCL) was designed mainly for developing computational models in education and research. This paper presents the justification for such a language, its relevant features, and a case study of a computational model implemented with the SCL.\n Development of the SCL language is part of the OOPsim project, which has had partial NSF support (CPATH). One of the goals of this project is to develop tools and approaches for designing and implementing computational models, emphasizing multi-disciplinary teams in the development process.\n A computational model is a computer implementation of the solution to a (scientific) problem for which a mathematical representation has been formulated. Developing a computational model consists of applying Computer Science concepts, principles and methods.\n The language syntax is defined at a higher level of abstraction than C, and includes language statements for improving program readability, debugging, maintenance, and correctness. The language design was influenced by Ada, Pascal, Eiffel, Java, C, and C++.\n The keywords have been added to maintain full compatibility with C. The SCL language translator is an executable program that is implemented as a one-pass language processor that generates C source code. The generated code can be integrated conveniently with any C and/or C++ library, on Linux and Windows (and MacOS). 
The semantics of SCL is informally defined to be the same C semantics.", "title": "" }, { "docid": "2f4a4c223c13c4a779ddb546b3e3518c", "text": "Machine learning systems trained on user-provided data are susceptible to data poisoning attacks, whereby malicious users inject false training data with the aim of corrupting the learned model. While recent work has proposed a number of attacks and defenses, little is understood about the worst-case loss of a defense in the face of a determined attacker. We address this by constructing approximate upper bounds on the loss across a broad family of attacks, for defenders that first perform outlier removal followed by empirical risk minimization. Our approximation relies on two assumptions: (1) that the dataset is large enough for statistical concentration between train and test error to hold, and (2) that outliers within the clean (nonpoisoned) data do not have a strong effect on the model. Our bound comes paired with a candidate attack that often nearly matches the upper bound, giving us a powerful tool for quickly assessing defenses on a given dataset. Empirically, we find that even under a simple defense, the MNIST-1-7 and Dogfish datasets are resilient to attack, while in contrast the IMDB sentiment dataset can be driven from 12% to 23% test error by adding only 3% poisoned data.", "title": "" }, { "docid": "81a45cb4ca02c38839a81ad567eb1491", "text": "Big data is often mined using clustering algorithms. Density-Based Spatial Clustering of Applications with Noise (DBSCAN) is a popular spatial clustering algorithm. However, it is computationally expensive and thus for clustering big data, parallel processing is required. The two prevalent paradigms for parallel processing are High-Performance Computing (HPC) based on Message Passing Interface (MPI) or Open Multi-Processing (OpenMP) and the newer big data frameworks such as Apache Spark or Hadoop. This report surveys for these two different paradigms publicly available implementations that aim at parallelizing DBSCAN and compares their performance. As a result, it is found that the big data implementations are not yet mature and in particular for skewed data, the implementation’s decomposition of the input data into parallel tasks has a huge influence on the performance in terms of running time.", "title": "" }, { "docid": "8aadc690d86ad4c015a4a82a32336336", "text": "The complexities of various search algorithms are considered in terms of time, space, and cost of the solution paths. • Brute-force search . Breadth-first search (BFS) . Depth-first search (DFS) . Depth-first Iterative-deepening (DFID) . Bi-directional search • Heuristic search: best-first search . A∗ . IDA∗ The issue of storing information in DISK instead of main memory. Solving 15-puzzle. TCG: DFID, 20121120, Tsan-sheng Hsu c © 2", "title": "" }, { "docid": "e740e5ff2989ce414836c422c45570a9", "text": "Many organizations desired to operate their businesses, works and services in a mobile (i.e. just in time and anywhere), dynamic, and knowledge-oriented fashion. Activities like e-learning, environmental learning, remote inspection, health-care, home security and safety mechanisms etc. requires a special infrastructure that might provide continuous, secured, reliable and mobile data with proper information/ knowledge management system in context to their confined environment and its users. 
An indefinite number of sensor networks for numerous healthcare applications has been designed and implemented but they all lacking extensibility, fault-tolerance, mobility, reliability and openness. Thus, an open, flexible and rearrangeable infrastructure is proposed for healthcare monitoring applications. Where physical sensors are virtualized as virtual sensors on cloud computing by this infrastructure and virtual sensors are provisioned automatically to end users whenever they required. In this paper we reviewed some approaches to hasten the service creations in field of healthcare and other applications with Cloud-Sensor architecture. This architecture provides services to end users without being worried about its implementation details. The architecture allows the service requesters to use the virtual sensors by themselves or they may create other new services by extending virtual sensors.", "title": "" }, { "docid": "459de602bf6e46ad4b752f2e51c81ffa", "text": "Self-adaptation is an essential feature of natural evolution. However, in the context of function optimization, self-adaptation features of evolutionary search algorithms have been explored mainly with evolution strategy (ES) and evolutionary programming (EP). In this paper, we demonstrate the self-adaptive feature of real-parameter genetic algorithms (GAs) using a simulated binary crossover (SBX) operator and without any mutation operator. The connection between the working of self-adaptive ESs and real-parameter GAs with the SBX operator is also discussed. Thereafter, the self-adaptive behavior of real-parameter GAs is demonstrated on a number of test problems commonly used in the ES literature. The remarkable similarity in the working principle of real-parameter GAs and self-adaptive ESs shown in this study suggests the need for emphasizing further studies on self-adaptive GAs.", "title": "" }, { "docid": "587f7821fc7ecfe5b0bbbd3b08b9afe2", "text": "The most commonly used method for cuffless blood pressure (BP) measurement is using pulse transit time (PTT), which is based on Moens-Korteweg (M-K) equation underlying the assumption that arterial geometries such as the arterial diameter keep unchanged. However, the arterial diameter is dynamic which varies over the cardiac cycle, and it is regulated through the contraction or relaxation of the vascular smooth muscle innervated primarily by the sympathetic nervous system. This may be one of the main reasons that impair the BP estimation accuracy. In this paper, we propose a novel indicator, the photoplethysmogram (PPG) intensity ratio (PIR), to evaluate the arterial diameter change. The deep breathing (DB) maneuver and Valsalva maneuver (VM) were performed on five healthy subjects for assessing parasympathetic and sympathetic nervous activities, respectively. Heart rate (HR), PTT, PIR and BP were measured from the simultaneously recorded electrocardiogram (ECG), PPG, and continuous BP. It was found that PIR increased significantly from inspiration to expiration during DB, whilst BP dipped correspondingly. Nevertheless, PIR changed positively with BP during VM. In addition, the spectral analysis revealed that the dominant frequency component of PIR, HR and SBP, shifted significantly from high frequency (HF) to low frequency (LF), but not obvious in that of PTT. These results demonstrated that PIR can be potentially used to evaluate the smooth muscle tone which modulates arterial BP in the LF range. 
The PTT-based BP measurement that take into account the PIR could therefore improve its estimation accuracy.", "title": "" }, { "docid": "ba4600c9c8e4c1bfcec9fa8fcde0f05c", "text": "While things (i.e., technologies) play a crucial role in creating and shaping meaningful, positive experiences, their true value lies only in the resulting experiences. It is about what we can do and experience with a thing, about the stories unfolding through using a technology, not about its styling, material, or impressive list of features. This paper explores the notion of \"experiences\" further: from the link between experiences, well-being, and people's developing post-materialistic stance to the challenges of the experience market and the experience-driven design of technology.", "title": "" }, { "docid": "1ed9f257129a45388fcf976b87e37364", "text": "Mobile cloud computing is an extension of cloud computing that allow the users to access the cloud service via their mobile devices. Although mobile cloud computing is convenient and easy to use, the security challenges are increasing significantly. One of the major issues is unauthorized access. Identity Management enables to tackle this issue by protecting the identity of users and controlling access to resources. Although there are several IDM frameworks in place, they are vulnerable to attacks like timing attacks in OAuth, malicious code attack in OpenID and huge amount of information leakage when user’s identity is compromised in Single Sign-On. Our proposed framework implicitly authenticates a user based on user’s typing behavior. The authentication information is encrypted into homomorphic signature before being sent to IDM server and tokens are used to authorize users to access the cloud resources. Advantages of our proposed framework are: user’s identity protection and prevention from unauthorized access.", "title": "" }, { "docid": "03b2876a4b62a6e10e8523cccc32452a", "text": "Millions of people regularly report the details of their real-world experiences on social media. This provides an opportunity to observe the outcomes of common and critical situations. Identifying and quantifying these outcomes may provide better decision-support and goal-achievement for individuals, and help policy-makers and scientists better understand important societal phenomena. We address several open questions about using social media data for open-domain outcome identification: Are the words people are more likely to use after some experience relevant to this experience? How well do these words cover the breadth of outcomes likely to occur for an experience? What kinds of outcomes are discovered? Studying 3-months of Twitter data capturing people who experienced 39 distinct situations across a variety of domains, we find that these outcomes are generally found to be relevant (55-100% on average) and that causally related concepts are more likely to be discovered than conceptual or semantically related concepts.", "title": "" }, { "docid": "ff08d2e0d53f2d9a7d49f0fdd820ec7a", "text": "Milk contains numerous nutrients. The content of n-3 fatty acids, the n-6/n-3 ratio, and short- and medium-chain fatty acids may promote positive health effects. In Western societies, cow’s milk fat is perceived as a risk factor for health because it is a source of a high fraction of saturated fatty acids. Recently, there has been increasing interest in donkey’s milk. 
In this work, the fat and energetic value and acidic composition of donkey’s milk, with reference to human nutrition, and their variations during lactation, were investigated. We also discuss the implications of the acidic profile of donkey’s milk on human nutrition. Individual milk samples from lactating jennies were collected 15, 30, 45, 60, 90, 120, 150, 180 and 210 days after foaling, for the analysis of fat, proteins and lactose, which was achieved using an infrared milk analyser, and fatty acids composition by gas chromatography. The donkey’s milk was characterised by low fat and energetic (1719.2 kJ·kg-1) values, a high polyunsaturated fatty acids (PUFA) content of mainly α-linolenic acid (ALA) and linoleic acid (LA), a low n-6 to n-3 FA ratio or LA/ALA ratio, and advantageous values of atherogenic and thrombogenic indices. Among the minor PUFA, docosahexaenoic (DHA), eicosapentaenoic (EPA), and arachidonic (AA) acids were present in very small amounts (<1%). In addition, the AA/EPA ratio was low (0.18). The fat and energetic values decreased (P < 0.01) during lactation. The fatty acid patterns were affected by the lactation stage and showed a decrease (P < 0.01) in saturated fatty acids content and an increase (P < 0.01) in the unsaturated fatty acids content. The n-6 to n-3 ratio and the LA/ALA ratio were approximately 2:1, with values <1 during the last period of lactation, suggesting the more optimal use of milk during this period. The high level of unsaturated/saturated fatty acids and PUFA-n3 content and the low n-6/n-3 ratio suggest the use of donkey’s milk as a functional food for human nutrition and its potential utilisation for infant nutrition as well as adult diets, particularly for the elderly.", "title": "" }, { "docid": "8cbd4a4adf82c385a6c821fde08d16e9", "text": "The Internet of Things (IoT) is the new revolution of the Internet after PC and server-client communication; now sensors, smart objects, wearable devices, and smart phones are able to communicate. Everything surrounding us can talk to each other. Life will be easier and smarter with smart environments, smart homes, smart cities and intelligent transport and healthcare. Billions of devices communicating wirelessly pose a huge challenge to our security and privacy. IoT requires efficient and effective security solutions which satisfy IoT requirements: low power, small memory and limited computational capabilities. This paper addresses various standards, protocols and technologies of IoT and different security attacks which may compromise IoT security and privacy.", "title": "" }, { "docid": "de4d14afaf6a24fcd831e2a293c30fc3", "text": "Artistic style transfer can be thought of as a process to generate different versions of abstraction of the original image. However, most artistic style transfer operators are not optimized for human faces and thus mainly suffer from two undesirable features when applied to selfies. First, the edges of human faces may unpleasantly deviate from the ones in the original image. Second, the skin color is far from faithful to the original one, which is usually problematic in producing quality selfies. In this paper, we take a different approach and formulate this abstraction process as a gradient domain learning problem. We aim to learn a type of abstraction which not only achieves the specified artistic style but also circumvents the two aforementioned drawbacks, and is thus highly applicable to selfie photography.
We also show that our method can be directly generalized to videos with high inter-frame consistency. Our method is also robust to non-selfie images, and the generalization to various kinds of real-life scenes is discussed. We will make our code publicly available.", "title": "" } ]
scidocsrr
1344b287c0ab3d80c035ac740d55dd32
Broadband microstrip-line-fed circularly-polarized circular slot antenna
[ { "docid": "fb3e9503a9f4575f5ecdbfaaa80638d0", "text": "This paper presents a new wideband circularly polarized square slot antenna (CPSSA) with a coplanar waveguide (CPW) feed. The proposed antenna features two inverted-L grounded strips around two opposite corners of the slot and a widened tuning stub protruded into the slot from the signal strip of the CPW. Broadside circular-polarization (CP) radiation can be easily obtained using a simple design procedure. For the optimized antenna prototype, the measured bandwidth with an axial ratio (AR) of less than 3 dB is larger than 25% and the measured VSWR les 2 impedance bandwidth is as large as 52%.", "title": "" } ]
[ { "docid": "281eb03143a40df5b0267ac45bbd4f3e", "text": "The biology of fracture healing is a complex biological process that follows specific regenerative patterns and involves changes in the expression of several thousand genes. Although there is still much to be learned to fully comprehend the pathways of bone regeneration, the over-all pathways of both the anatomical and biochemical events have been thoroughly investigated. These efforts have provided a general understanding of how fracture healing occurs. Following the initial trauma, bone heals by either direct intramembranous or indirect fracture healing, which consists of both intramembranous and endochondral bone formation. The most common pathway is indirect healing, since direct bone healing requires an anatomical reduction and rigidly stable conditions, commonly only obtained by open reduction and internal fixation. However, when such conditions are achieved, the direct healing cascade allows the bone structure to immediately regenerate anatomical lamellar bone and the Haversian systems without any remodelling steps necessary. In all other non-stable conditions, bone healing follows a specific biological pathway. It involves an acute inflammatory response including the production and release of several important molecules, and the recruitment of mesenchymal stem cells in order to generate a primary cartilaginous callus. This primary callus later undergoes revascularisation and calcification, and is finally remodelled to fully restore a normal bone structure. In this article we summarise the basic biology of fracture healing.", "title": "" }, { "docid": "fcd320ce68efa45dace6b798aa64dacd", "text": "We focus on two leading state-of-the-art approaches to grammatical error correction – machine learning classification and machine translation. Based on the comparative study of the two learning frameworks and through error analysis of the output of the state-of-the-art systems, we identify key strengths and weaknesses of each of these approaches and demonstrate their complementarity. In particular, the machine translation method learns from parallel data without requiring further linguistic input and is better at correcting complex mistakes. The classification approach possesses other desirable characteristics, such as the ability to easily generalize beyond what was seen in training, the ability to train without human-annotated data, and the flexibility to adjust knowledge sources for individual error types. Based on this analysis, we develop an algorithmic approach that combines the strengths of both methods. We present several systems based on resources used in previous work with a relative improvement of over 20% (and 7.4 F score points) over the previous state-of-the-art.", "title": "" }, { "docid": "ec48c3ba506409be7219320fe8e263ca", "text": "Cyber scanning refers to the task of probing enterprise networks or Internet wide services, searching for vulnerabilities or ways to infiltrate IT assets. This misdemeanor is often the primarily methodology that is adopted by attackers prior to launching a targeted cyber attack. Hence, it is of paramount importance to research and adopt methods for the detection and attribution of cyber scanning. Nevertheless, with the surge of complex offered services from one side and the proliferation of hackers' refined, advanced, and sophisticated techniques from the other side, the task of containing cyber scanning poses serious issues and challenges. 
Furthermore recently, there has been a flourishing of a cyber phenomenon dubbed as cyber scanning campaigns - scanning techniques that are highly distributed, possess composite stealth capabilities and high coordination - rendering almost all current detection techniques unfeasible. This paper presents a comprehensive survey of the entire cyber scanning topic. It categorizes cyber scanning by elaborating on its nature, strategies and approaches. It also provides the reader with a classification and an exhaustive review of its techniques. Moreover, it offers a taxonomy of the current literature by focusing on distributed cyber scanning detection methods. To tackle cyber scanning campaigns, this paper uniquely reports on the analysis of two recent cyber scanning incidents. Finally, several concluding remarks are discussed.", "title": "" }, { "docid": "2f08b35bb6f4f9d44d1225e2d26b5395", "text": "An efficient disparity estimation and occlusion detection algorithm for multiocular systems is presented. A dynamic programming algorithm, using a multiview matching cost as well as pure geometrical constraints, is used to estimate disparity and to identify the occluded areas in the extreme left and right views. A significant advantage of the proposed approach is that the exact number of views in which each point appears (is not occluded) can be determined. The disparity and occlusion information obtained may then be used to create virtual images from intermediate viewpoints. Furthermore, techniques are developed for the coding of occlusion and disparity information, which is needed at the receiver for the reproduction of a multiview sequence using the two encoded extreme views. Experimental results illustrate the performance of the proposed techniques.", "title": "" }, { "docid": "5e5b2f8a3cc512ee2db165013a5a4782", "text": "The purpose of this project was to develop a bidimensional measure of mindfulness to assess its two key components: present-moment awareness and acceptance. The development and psychometric validation of the Philadelphia Mindfulness Scale is described, and data are reported from expert raters, two nonclinical samples (n = 204 and 559), and three clinical samples including mixed psychiatric outpatients (n = 52), eating disorder inpatients (n = 30), and student counseling center outpatients (n = 78). Exploratory and confirmatory factor analyses support a two-factor solution, corresponding to the two constituent components of the construct. Good internal consistency was demonstrated, and relationships with other constructs were largely as expected. As predicted, significant differences were found between the nonclinical and clinical samples in levels of awareness and acceptance. The awareness and acceptance subscales were not correlated, suggesting that these two constructs can be examined independently. Potential theoretical and applied uses of the measure are discussed.", "title": "" }, { "docid": "3f48327ca2125df3a6da0c1e54131013", "text": "Background: We investigated the value of magnetic resonance imaging (MRI) in the evaluation of sex-reassignment surgery in male-to-female transsexual patients. Methods: Ten male-to-female transsexual patients who underwent sex-reassignment surgery with inversion of combined penile and scrotal skin flaps for vaginoplasty were examined after surgery with MRI. Turbo spin-echo T2-weighted and spin-echo T1-weighted images were obtained in sagittal, coronal, and axial planes with a 1.5-T superconductive magnet. 
Images were acquired with and without an inflatable silicon vaginal tutor. The following parameters were evaluated: neovaginal depth, neovaginal inclination in the sagittal plane, presence of remnants of the corpus spongiosum and corpora cavernosa, and thickness of the rectovaginal septum. Results: The average neovaginal depth was 7.9 cm (range = 5–10 cm). The neovagina had a correct oblique inclination in the sagittal plane in four patients, no inclination in five, and an incorrect inclination in one. In seven patients, MRI showed remnants of the corpora cavernosa and/or of the corpus spongiosum; in three patients, no remnants were detected. The average thickness of the rectovaginal septum was 4 mm (range = 3–6 mm). Conclusion: MRI allows a detailed assessment of the pelvic anatomy after genital reconfiguration and provides information that can help the surgeon to adopt the most correct surgical approach.", "title": "" }, { "docid": "78a8eb1c05d8af52ca32ba29b3fcf89b", "text": "Pediatric firearm-related deaths and injuries are a national public health crisis. In this Special Review Article, we characterize the epidemiology of firearm-related injuries in the United States and discuss public health programs, the role of pediatricians, and legislative efforts to address this health crisis. Firearm-related injuries are leading causes of unintentional injury deaths in children and adolescents. Children are more likely to be victims of unintentional injuries, the majority of which occur in the home, and adolescents are more likely to suffer from intentional injuries due to either assault or suicide attempts. Guns are present in 18% to 64% of US households, with significant variability by geographic region. Almost 40% of parents erroneously believe their children are unaware of the storage location of household guns, and 22% of parents wrongly believe that their children have never handled household guns. Public health interventions to increase firearm safety have demonstrated varying results, but the most effective programs have provided free gun safety devices to families. Pediatricians should continue working to reduce gun violence by asking patients and their families about firearm access, encouraging safe storage, and supporting firearm-related injury prevention research. Pediatricians should also play a role in educating trainees about gun violence. From a legislative perspective, universal background checks have been shown to decrease firearm homicides across all ages, and child safety laws have been shown to decrease unintentional firearm deaths and suicide deaths in youth. A collective, data-driven public health approach is crucial to halt the epidemic of pediatric firearm-related injury.", "title": "" }, { "docid": "84195c27330dad460b00494ead1654c8", "text": "We present a unified framework for the computational implementation of syntactic, semantic, pragmatic and even \"stylistic\" constraints on anaphora. We build on our BUILDRS implementation of Discourse Representation (DR) Theory and Lexical Functional Grammar (LFG) discussed in Wada & Asher (1986). 
We develop and argue for a semantically based processing model for anaphora resolution that exploits a number of desirable features: (1) the partial semantics provided by the discourse representation structures (DRSs) of DR theory, (2) the use of syntactic and lexical features to filter out unacceptable potential anaphoric antecedents from the set of logically possible antecedents determined by the logical structure of the DRS, (3) the use of pragmatic or discourse constraints, noted by those working on focus, to impose a salience ordering on the set of grammatically acceptable potential antecedents. Only where there is a marked difference in the degree of salience among the possible antecedents does the salience ranking allow us to make predictions on preferred readings. In cases where the difference is extreme, we predict the discourse to be infelicitous if, because of other constraints, one of the markedly less salient antecedents must be linked with the pronoun. We also briefly consider the applications of our processing model to other definite noun phrases besides anaphoric pronouns.", "title": "" }, { "docid": "77d0845463db0f4e61864b37ec1259b7", "text": "A new form of the variational autoencoder (VAE) is proposed, based on the symmetric Kullback-Leibler divergence. It is demonstrated that learning of the resulting symmetric VAE (sVAE) has close connections to previously developed adversarial-learning methods. This relationship helps unify the previously distinct techniques of VAE and adversarial learning, and provides insights that allow us to ameliorate shortcomings with some previously developed adversarial methods. In addition to an analysis that motivates and explains the sVAE, an extensive set of experiments validates the utility of the approach.", "title": "" }, { "docid": "bf04d5a87fbac1157261fac7652b9177", "text": "We consider the partitioning of a society into coalitions in purely hedonic settings; i.e., where each player's payoff is completely determined by the identity of other members of her coalition. We first discuss how hedonic and non-hedonic settings differ and some sufficient conditions for the existence of core stable coalition partitions in hedonic settings. We then focus on a weaker stability condition: individual stability, where no player can benefit from moving to another coalition while not hurting the members of that new coalition. We show that if coalitions can be ordered according to some characteristic over which players have single-peaked preferences, or where players have symmetric and additively separable preferences, then there exists an individually stable coalition partition. Examples show that without these conditions, individually stable coalition partitions may not exist. We also discuss some other stability concepts, and the incompatibility of stability with other normative properties.", "title": "" }, { "docid": "97c81cfa85ff61b999ae8e565297a16e", "text": "This paper describes the complete implementation of a blind image denoising algorithm that takes any digital image as input. In a first step the algorithm estimates a Signal and Frequency Dependent (SFD) noise model. In a second step, the image is denoised by a multiscale adaptation of the Non-local Bayes denoising method. We focus here on a careful analysis of the denoising step and present a detailed discussion of the influence of its parameters. Extensive commented tests of the blind denoising algorithm are presented, on real JPEG images and on scans of old photographs. 
Source Code The source code (ANSI C), its documentation, and the online demo are accessible at the IPOL web page of this article1.", "title": "" }, { "docid": "d43f56f13fee5b45cb31233e61aa20d0", "text": "An automated brain tumor segmentation method was developed and validated against manual segmentation with three-dimensional magnetic resonance images in 20 patients with meningiomas and low-grade gliomas. The automated method (operator time, 5-10 minutes) allowed rapid identification of brain and tumor tissue with an accuracy and reproducibility comparable to those of manual segmentation (operator time, 3-5 hours), making automated segmentation practical for low-grade gliomas and meningiomas.", "title": "" }, { "docid": "cf62cb1e0b3cac894a277762808c68e0", "text": "-Most educational institutions’ administrators are concerned about student irregular attendance. Truancies can affect student overall academic performance. The conventional method of taking attendance by calling names or signing on paper is very time consuming and insecure, hence inefficient. Therefore, computer based student attendance management system is required to assist the faculty and the lecturer for this time-provide much convenient method to take attendance, but some prerequisites has to be done before start using the program. Although the use of RFID systems in educational institutions is not new, it is intended to show how the use of it came to solve daily problems in our university. The system has been built using the web-based applications such as ASP.NET and IIS server to cater the recording and reporting of the students’ attendances The system can be easily accessed by the lecturers via the web and most importantly, the reports can be generated in real-time processing, thus, providing valuable information about the students’.", "title": "" }, { "docid": "cd16afd19a0ac72cd3453a7b59aad42b", "text": "BACKGROUND\nIncreased flexibility is often desirable immediately prior to sports performance. Static stretching (SS) has historically been the main method for increasing joint range-of-motion (ROM) acutely. However, SS is associated with acute reductions in performance. Foam rolling (FR) is a form of self-myofascial release (SMR) that also increases joint ROM acutely but does not seem to reduce force production. However, FR has never previously been studied in resistance-trained athletes, in adolescents, or in individuals accustomed to SMR.\n\n\nOBJECTIVE\nTo compare the effects of SS and FR and a combination of both (FR+SS) of the plantarflexors on passive ankle dorsiflexion ROM in resistance-trained, adolescent athletes with at least six months of FR experience.\n\n\nMETHODS\nEleven resistance-trained, adolescent athletes with at least six months of both resistance-training and FR experience were tested on three separate occasions in a randomized cross-over design. The subjects were assessed for passive ankle dorsiflexion ROM after a period of passive rest pre-intervention, immediately post-intervention and after 10, 15, and 20 minutes of passive rest. Following the pre-intervention test, the subjects randomly performed either SS, FR or FR+SS. SS and FR each comprised 3 sets of 30 seconds of the intervention with 10 seconds of inter-set rest. FR+SS comprised the protocol from the FR condition followed by the protocol from the SS condition in sequence.\n\n\nRESULTS\nA significant effect of time was found for SS, FR and FR+SS. 
Post hoc testing revealed increases in ROM between baseline and post-intervention by 6.2% for SS (p < 0.05) and 9.1% for FR+SS (p < 0.05) but not for FR alone. Post hoc testing did not reveal any other significant differences between baseline and any other time point for any condition. A significant effect of condition was observed immediately post-intervention. Post hoc testing revealed that FR+SS was superior to FR (p < 0.05) for increasing ROM.\n\n\nCONCLUSIONS\nFR, SS and FR+SS all lead to acute increases in flexibility and FR+SS appears to have an additive effect in comparison with FR alone. All three interventions (FR, SS and FR+SS) have time courses that lasted less than 10 minutes.\n\n\nLEVEL OF EVIDENCE\n2c.", "title": "" }, { "docid": "700a6c2741affdbdc2a5dd692130ebb0", "text": "Automated tools for understanding application behavior and its changes during the application lifecycle are essential for many performance analysis and debugging tasks. Application performance issues have an immediate impact on customer experience and satisfaction. A sudden slowdown of enterprise-wide application can effect a large population of customers, lead to delayed projects, and ultimately can result in company financial loss. Significantly shortened time between new software releases further exacerbates the problem of thoroughly evaluating the performance of an updated application. Our thesis is that online performance modeling should be a part of routine application monitoring. Early, informative warnings on significant changes in application performance should help service providers to timely identify and prevent performance problems and their negative impact on the service. We propose a novel framework for automated anomaly detection and application change analysis. It is based on integration of two complementary techniques: (i) a regression-based transaction model that reflects a resource consumption model of the application, and (ii) an application performance signature that provides a compact model of runtime behavior of the application. The proposed integrated framework provides a simple and powerful solution for anomaly detection and analysis of essential performance changes in application behavior. An additional benefit of the proposed approach is its simplicity: It is not intrusive and is based on monitoring data that is typically available in enterprise production environments. The introduced solution further enables the automation of capacity planning and resource provisioning tasks of multitier applications in rapidly evolving IT environments.", "title": "" }, { "docid": "77f60100af0c9556e5345ee1b04d8171", "text": "SDNET2018 is an annotated image dataset for training, validation, and benchmarking of artificial intelligence based crack detection algorithms for concrete. SDNET2018 contains over 56,000 images of cracked and non-cracked concrete bridge decks, walls, and pavements. The dataset includes cracks as narrow as 0.06 mm and as wide as 25 mm. The dataset also includes images with a variety of obstructions, including shadows, surface roughness, scaling, edges, holes, and background debris. SDNET2018 will be useful for the continued development of concrete crack detection algorithms based on deep convolutional neural networks (DCNNs), which are a subject of continued research in the field of structural health monitoring. The authors present benchmark results for crack detection using SDNET2018 and a crack detection algorithm based on the AlexNet DCNN architecture. 
SDNET2018 is freely available at https://doi.org/10.15142/T3TD19.", "title": "" }, { "docid": "b6ceacf3ad3773acddc3452933b57a0f", "text": "The growing interest in robots that interact safely with humans and surroundings have prompted the need for soft structural embodiments including soft actuators. This paper explores a class of soft actuators inspired in design and construction by Pneumatic Artificial Muscles (PAMs) or McKibben Actuators. These bio-inspired actuators consist of fluid-filled elastomeric enclosures that are reinforced with fibers along a specified orientation and are in general referred to as Fiber-Reinforced Elastomeric Enclosures (FREEs). Several recent efforts have mapped the fiber configurations to instantaneous deformation, forces, and moments generated by these actuators upon pressurization with fluid. However most of the actuators, when deployed undergo large deformations and large overall motions thus necessitating the study of their large-deformation kinematics. This paper analyzes the large deformation kinematics of FREEs. A concept called configuration memory effect is proposed to explain the smart nature of these actuators. This behavior is tested with experiments and finite element modeling for a small sample of actuators. The paper also describes different possibilities and design implications of the large deformation behavior of FREEs in successful creation of soft robots.", "title": "" }, { "docid": "eccbc87e4b5ce2fe28308fd9f2a7baf3", "text": "3", "title": "" }, { "docid": "18498166845b27890110c3ca0cd43d86", "text": "Raine Mäntysalo The purpose of this article is to make an overview of postWWII urban planning theories from the point of view of participation. How have the ideas of public accountability, deliberative democracy and involvement of special interests developed from one theory to another? The urban planning theories examined are rational-comprehensive planning theory, advocacy planning theory, incrementalist planning theory and the two branches of communicative planning theory: planning as consensus-seeking and planning as management of conflicts.", "title": "" } ]
scidocsrr
0e64848e074e909fa708e882acdc40ce
Weighted color and texture sample selection for image matting
[ { "docid": "d4aaea0107cbebd7896f4cb57fa39c05", "text": "A novel method is proposed for performing multilabel, interactive image segmentation. Given a small number of pixels with user-defined (or predefined) labels, one can analytically and quickly determine the probability that a random walker starting at each unlabeled pixel will first reach one of the prelabeled pixels. By assigning each pixel to the label for which the greatest probability is calculated, a high-quality image segmentation may be obtained. Theoretical properties of this algorithm are developed along with the corresponding connections to discrete potential theory and electrical circuits. This algorithm is formulated in discrete space (i.e., on a graph) using combinatorial analogues of standard operators and principles from continuous potential theory, allowing it to be applied in arbitrary dimension on arbitrary graphs.", "title": "" }, { "docid": "8076620d4905b087d10ee7fba14bd2ec", "text": "Image matting aims at extracting foreground elements from an image by means of color and opacity (alpha) estimation. While a lot of progress has been made in recent years on improving the accuracy of matting techniques, one common problem persisted: the low speed of matte computation. We present the first real-time matting technique for natural images and videos. Our technique is based on the observation that, for small neighborhoods, pixels tend to share similar attributes. Therefore, independently treating each pixel in the unknown regions of a trimap results in a lot of redundant work. We show how this computation can be significantly and safely reduced by means of a careful selection of pairs of background and foreground samples. Our technique achieves speedups of up to two orders of magnitude compared to previous ones, while producing high-quality alpha mattes. The quality of our results has been verified through an independent benchmark. The speed of our technique enables, for the first time, real-time alpha matting of videos, and has the potential to enable a new class of exciting applications.", "title": "" } ]
[ { "docid": "0b6ac11cb84a573e55cb75f0bc342d72", "text": "This paper develops and tests algorithms for predicting the end-to-end route of a vehicle based on GPS observations of the vehicle’s past trips. We show that a large portion of a typical driver’s trips are repeated. Our algorithms exploit this fact for prediction by matching the first part of a driver’s current trip with one of the set of previously observed trips. Rather than predicting upcoming road segments, our focus is on making long term predictions of the route. We evaluate our algorithms using a large corpus of real world GPS driving data acquired from observing over 250 drivers for an average of 15.1 days per subject. Our results show how often and how accurately we can predict a driver’s route as a function of the distance already driven.", "title": "" }, { "docid": "d58c81bf22cdad5c1a669dd9b9a77fbd", "text": "The rapid increase in healthcare demand has seen novel developments in health monitoring technologies, such as the body area networks (BAN) paradigm. BAN technology envisions a network of continuously operating sensors, which measure critical physical and physiological parameters e.g., mobility, heart rate, and glucose levels. Wireless connectivity in BAN technology is key to its success as it grants portability and flexibility to the user. While radio frequency (RF) wireless technology has been successfully deployed in most BAN implementations, they consume a lot of battery power, are susceptible to electromagnetic interference and have security issues. Intrabody communication (IBC) is an alternative wireless communication technology which uses the human body as the signal propagation medium. IBC has characteristics that could naturally address the issues with RF for BAN technology. This survey examines the on-going research in this area and highlights IBC core fundamentals, current mathematical models of the human body, IBC transceiver designs, and the remaining research challenges to be addressed. IBC has exciting prospects for making BAN technologies more practical in the future.", "title": "" }, { "docid": "8eb51537b051bbf78d87a0cd48e9d90c", "text": "One of the important techniques of Data mining is Classification. Many real world problems in various fields such as business, science, industry and medicine can be solved by using classification approach. Neural Networks have emerged as an important tool for classification. The advantages of Neural Networks helps for efficient classification of given data. In this study a Heart diseases dataset is analyzed using Neural Network approach. To increase the efficiency of the classification process parallel approach is also adopted in the training phase.", "title": "" }, { "docid": "afe4c8e46449bfa37a04e67595d4537b", "text": "Gamification is the use of game design elements in non-game settings to engage participants and encourage desired behaviors. It has been identified as a promising technique to improve students' engagement which could have a positive impact on learning. This study evaluated the learning effectiveness and engagement appeal of a gamified learning activity targeted at the learning of C-programming language. Furthermore, the study inquired into which gamified learning activities were more appealing to students. The study was conducted using the mixed-method sequential explanatory protocol. The data collected and analysed included logs, questionnaires, and pre- and post-tests. 
The results of the evaluation show positive effects on the engagement of students toward the gamified learning activities and a moderate improvement in learning outcomes. Students reported different motivations for continuing and stopping activities once they completed the mandatory assignment. The preferences for different gamified activities were also conditioned by academic milestones.", "title": "" }, { "docid": "9fc2d92c42400a45cb7bf6c998dc9236", "text": "This paper presents a new probabilistic model of information retrieval. The most important modeling assumption made is that documents and queries are defined by an ordered sequence of single terms. This assumption is not made in well-known existing models of information retrieval, but is essential in the field of statistical natural language processing. Advances already made in statistical natural language processing will be used in this paper to formulate a probabilistic justification for using tf×idf term weighting. The paper shows that the new probabilistic interpretation of tf×idf term weighting might lead to better understanding of statistical ranking mechanisms, for example by explaining how they relate to coordination level ranking. A pilot experiment on the TREC collection shows that the linguistically motivated weighting algorithm outperforms the popular BM25 weighting algorithm.", "title": "" }, { "docid": "40f2565bd4b167954450c050ac3a9fd7", "text": "No-limit Texas hold’em is the most popular form of poker. Despite artificial intelligence (AI) successes in perfect-information games, the private information and massive game tree have made no-limit poker difficult to tackle. We present Libratus, an AI that, in a 120,000-hand competition, defeated four top human specialist professionals in heads-up no-limit Texas hold’em, the leading benchmark and long-standing challenge problem in imperfect-information game solving. Our game-theoretic approach features application-independent techniques: an algorithm for computing a blueprint for the overall strategy, an algorithm that fleshes out the details of the strategy for subgames that are reached during play, and a self-improver algorithm that fixes potential weaknesses that opponents have identified in the blueprint strategy.", "title": "" }, { "docid": "2d6c085f30847fe3745e0a8d7d93ea9c", "text": "Deep gated convolutional networks have been proved to be very effective in single channel speech separation. However current state-of-the-art framework often considers training the gated convolutional networks in time-frequency (TF) domain. Such an approach will result in limited perceptual score, such as signal-to-distortion ratio (SDR) upper bound of separated utterances and also fail to exploit an end-to-end framework. In this paper we present an integrated simple and effective end-to-end approach to monaural speech separation, which consists of deep gated convolutional neural networks (GCNN) that takes the mixed utterance of two speakers and maps it to two separated utterances, where each utterance contains only one speaker’s voice. In addition long shortterm memory (LSTM) is employed for long term temporal modeling. For the objective, we propose to train the network by directly optimizing utterance level SDR in a permutation invariant training (PIT) style. 
Our experiments on the public WSJ0-2mix data corpus demonstrate that this new scheme can produce more discriminative separated utterances and leading to performance improvement on the speaker separation task.", "title": "" }, { "docid": "9b06bfb67641fa009e51e1077b7a2434", "text": "This paper presents the results of an exploratory study carried out to learn about the use and impact of Information and Communication Technologies (ICT) on Small and Medium Sized Enterprises (SMEs) in Oman. The study investigates ICT infrastructure, software used, driver for ICT investment, perceptions about business benefits of ICT and outsourcing trends of SMEs. The study provides an insight on the barriers for the adoption of ICT. Data on these aspects of ICT was collected from 51 SMEs through a survey instrument. The results of the study show that only a small number of SMEs in Oman are aware of the benefits of ICT adoption. The main driving forces for ICT investment are to provide better and faster customer service and to stay ahead of the competition. A majority of surveyed SMEs have reported a positive performance and other benefits by utilizing ICT in their businesses. Majority of SMEs outsource most of their ICT activities. Lack of internal capabilities, high cost of ICT and lack of information about suitable ICT solutions and implementation were some of the major barriers in adopting ICT. These findings are consistent with other studies e.g. (Harindranath et al 2008). There is a need for more focus and concerted efforts on increasing awareness among SMEs on the benefits of ICT adoption. The results of the study recognize the need for more training facilities in ICT for SMEs, measures to provide ICT products and services at an affordable cost, and availability of free professional advice and consulting at reasonable cost to SMEs. Our findings therefore have important implication for policy aimed at ICT adoption and use by SMEs. The findings of this research will provide a foundation for future research and will help policy makers in understanding the current state of affairs of the usage and impact of ICT on SMEs in Oman.", "title": "" }, { "docid": "9faf87e51078bb92f146ba4d31f04c7f", "text": "This paper first describes the problem of goals nonreachable with obstacles nearby when using potential field methods for mobile robot path planning. Then, new repulsive potential functions are presented by taking the relative distance between the robot and the goal into consideration, which ensures that the goal position is the global minimum of the total potential.", "title": "" }, { "docid": "cfe31ce3a6a23d9148709de6032bd90b", "text": "I argue that Non-Photorealistic Rendering (NPR) research will play a key role in the scientific understanding of visual art and illustration. NPR can contribute to scientific understanding of two kinds of problems: how do artists create imagery, and how do observers respond to artistic imagery? I sketch out some of the open problems, how NPR can help, and what some possible theories might look like. Additionally, I discuss the thorny problem of how to evaluate NPR research and theories.", "title": "" }, { "docid": "ae937be677ca7c0714bde707816171ff", "text": "The authors examined how time orientation and morningness-eveningness relate to 2 forms of procrastination: indecision and avoidant forms. 
Participants were 509 adults (M age = 49.78 years, SD = 6.14) who completed measures of time orientation, morningness-eveningness, decisional procrastination (i.e., indecision), and avoidant procrastination. Results showed that morningness was negatively related to avoidant procrastination but not decisional procrastination. Overall, the results indicated different temporal profiles for indecision and avoidant procrastinations. Avoidant procrastination related to low future time orientation and low morningness, whereas indecision related to both (a) high negative and high positive past orientations and (b) low present-hedonistic and low future time orientations. The authors inferred that distinct forms of procrastination seem different on the basis of dimensions of time.", "title": "" }, { "docid": "d8d86da66ebeaae73e9aaa2a30f18bb5", "text": "In this paper, a novel approach to the characterization of structural damage in civil structures is presented. Structural damage often results in subtle changes to structural stiffness and damping properties that are manifested by changes in the location of transfer function characteristic equation roots (poles) upon the complex plane. Using structural response time-history data collected from an instrumented structure, transfer function poles can be estimated using traditional system identification methods. Comparing the location of poles corresponding to the structure in an unknown structural state to those of the undamaged structure, damage can be accurately identified. The IASC-ASCE structural health monitoring benchmark structure is used in this study to illustrate the merits of the transfer function pole migration approach to damage detection in civil structures.", "title": "" }, { "docid": "2f362f4c9b56a44af8e93dad107e3995", "text": "Microstrip filters are widely used in microwave circuit, This paper briefly describes the design principle of microstrip bandstop filter (BSF). A compact wide band high rejection BSF is presented. This filter consists of two parts: defected ground structures filter (DGS) and spurline filter. Due to the inherently compact characteristics of the spurline and DGS, the proposed filter shows a better rejection performance than open stub BSF in the same circuit size. The results of simulation and optimization given by HFSS12 prove the correctness of the design.", "title": "" }, { "docid": "45b1cb6c9393128c9a9dcf9dbeb50778", "text": "Bitcoin, a distributed, cryptographic, digital currency, gained a lot of media attention for being an anonymous e-cash system. But as all transactions in the network are stored publicly in the blockchain, allowing anyone to inspect and analyze them, the system does not provide real anonymity but pseudonymity. There have already been studies showing the possibility to deanonymize bitcoin users based on the transaction graph and publicly available data. Furthermore, users could be tracked by bitcoin exchanges or shops, where they have to provide personal information that can then be linked to their bitcoin addresses. Special bitcoin mixing services claim to obfuscate the origin of transactions and thereby increase the anonymity of its users. In this paper we evaluate three of these services – Bitcoin Fog, BitLaundry, and the Send Shared functionality of Blockchain.info – by analyzing the transaction graph. 
While Bitcoin Fog and Blockchain.info successfully mix our transaction, we are able to find a direct relation between the input and output transactions in the graph of BitLaundry.", "title": "" }, { "docid": "d46c44e5a4bc2e0dd1423394534409d3", "text": "This paper describes a heterogeneous computer cluster called Axel. Axel contains a collection of nodes; each node can include multiple types of accelerators such as FPGAs (Field Programmable Gate Arrays) and GPUs (Graphics Processing Units). A Map-Reduce framework for the Axel cluster is presented which exploits spatial and temporal locality through different types of processing elements and communication channels. The Axel system enables the first demonstration of FPGAs, GPUs and CPUs running collaboratively for N-body simulation. Performance improvement from 4.4 times to 22.7 times has been achieved using our approach, which shows that the Axel system can combine the benefits of the specialization of FPGA, the parallelism of GPU, and the scalability of computer clusters.", "title": "" }, { "docid": "28d7c171b05309d9a4ec4aa9ec4f66e1", "text": "A cost and energy efficient method of wind power generation is to connect the output of the turbine to a doubly-fed induction generator (DFIG), allowing operation at a range of variable speeds. While for electrical engineers the electromagnetic components in such a system, like the electric machine, power electronic converter and magnetic filters are of most interest, a DFIG wind turbine is a complex design involving multiple physical domains strongly interacting with each other. The electrical system, for instance, is influenced by the converter’s cooling system and mechanical components, including the rotor blades, shaft and gearbox. This means that during component selection and design of control schemes, the influence of domains on one another must be considered in order to achieve an optimized overall system performance such that the design is dynamic, efficient and cost-effective. In addition to creating an accurate model of the entire system, it is also important to model the real-world operating and fault conditions. For fast prototyping and performance prediction, computer-based simulation has been widely adopted in the engineering development process. Modeling such complex systems while including switching power electronic converters requires a powerful and robust simulation tool. Furthermore, a rapid solver is critical to allow for developing multiple iterative enhancements based on insight gained through system simulation studies.", "title": "" }, { "docid": "90b59d264de9bc4054f4905c47e22596", "text": "Bronson (1974) reviewed evidence in support of the claim that the development of visually guided behavior in the human infant over the first few months of life represents a shift from subcortical to cortical visual processing. Recently, this view has been brought into question for two reasons; first, evidence revealing apparently sophisticated perceptual abilities in the newborn, and second, increasing evidence for multiple cortica streams of visual processing. 
The present paper presents a reanalysis of the relation between the maturation of cortical pathways and the development of visually guided behavior, focusing in particular on how the maturational state of the primary visual cortex may constrain the functioning of neural pathways subserving oculomotor control.", "title": "" }, { "docid": "e8824408140898ac81fba94530f6e43e", "text": "The Bag-of-Visual-Words model has emerged as an effective approach to represent local video features for human actions classification. However, one of the major challenges in this model is the generation of the visual vocabulary. In the case of human action recognition, losing spatial-temporal relationships is one of the important reasons that provokes the low descriptive power of classic visual words. In this work we propose a three-level approach to construct visual n-grams for human action classification. First, in order to reduce the number of non-descriptive words generated by K-means clustering of the spatio-temporal interest points, we propose to apply a variant of the classsical Leader-Follower clustering algorithm to create an optimal vocabulary from a pre-established number of visual words. Second, with the aim of incorporating spatial and temporal constraints to the Bag-of-Visual-Words model, we exploit the spatio-temporal relationships between interest points to build a graphbased representation of the video. Frequent subgraphs are extracted for each action class and a visual vocabulary of n-grams is constructed from the labels (descriptors) of selected subgraphs. Finally, we build a histogram by using the frequency of each n-gram in the graph representing a video of human action. The proposed approach combines the representational power of graphs with the efficiency of the Bag-of-Visual-Words model. Extensive validation on five challenging human actions datasets demonstrates the effectiveness of the proposed model compared to state-of-the-art methods.", "title": "" }, { "docid": "3902afc560de6f0b028315977bc55976", "text": "Traffic light congestion normally occurs in urban areas where the number of vehicles is too many on the road. This problem drives the need for innovation and provide efficient solutions regardless this problem. Smart system that will monitor the congestion level at the traffic light will be a new option to replace the old system which is not practical anymore. Implementing internet of thinking (IoT) technology will provide the full advantage for monitoring and creating a congestion model based on sensor readings. Multiple sensor placements for each lane will give a huge advantage in detecting vehicle and increasing the accuracy in collecting data. To gather data from each sensor, the LoRaWAN technology is utilized where it features low power wide area network, low cost of implementation and the communication is secure bi-directional for the internet of thinking. The radio frequency used between end nodes to gateways range is estimated around 15-kilometer radius. A series of test is carried out to estimate the range of signal and it gives a positive result. The level of congestion for each lane will be displayed on Grafana dashboard and the algorithm can be calculated. 
This provides huge advantages to the implementation of this project, especially as the scope of the project focuses on urban areas where the level of congestion is severe.", "title": "" } ]
scidocsrr
da2ed32edd2a329f2cbd1aafbc314048
Synthetic Data and Artificial Neural Networks for Natural Scene Text Recognition
[ { "docid": "2e5a3cd852a53b018032804f77088d03", "text": "A general method for text localization and recognition in real-world images is presented. The proposed method is novel, as it (i) departs from a strict feed-forward pipeline and replaces it by a hypotheses-verification framework simultaneously processing multiple text line hypotheses, (ii) uses synthetic fonts to train the algorithm, eliminating the need for time-consuming acquisition and labeling of real-world training data and (iii) exploits Maximally Stable Extremal Regions (MSERs) which provides robustness to geometric and illumination conditions. The performance of the method is evaluated on two standard datasets. On the Char74k dataset, a recognition rate of 72% is achieved, 18% higher than the state-of-the-art. The paper is the first to report both text detection and recognition results on the standard and rather challenging ICDAR 2003 dataset. The text localization works for a number of alphabets and the method is easily adapted to recognition of other scripts, e.g., Cyrillic.", "title": "" }, { "docid": "7197dbee035c62044a93d4e60762e3ea", "text": "The problem of detecting and recognizing text in natural scenes has proved to be more challenging than its counterpart in documents, with most of the previous work focusing on a single part of the problem. In this work, we propose new solutions to the character and word recognition problems and then show how to combine these solutions in an end-to-end text-recognition system. We do so by leveraging the recently introduced Maxout networks along with hybrid HMM models that have proven useful for voice recognition. Using these elements, we build a tunable and highly accurate recognition system that beats state-of-the-art results on all the sub-problems for both the ICDAR 2003 and SVT benchmark datasets.", "title": "" }, { "docid": "a43e646ee162a23806c3b8f0a9d69b23", "text": "This paper describes the results of the ICDAR 2005 competition for locating text in camera captured scenes. For this we used the same data as the ICDAR 2003 competition, which has been kept private until now. This allows a direct comparison with the 2003 entries. The main result is that the leading 2005 entry has improved significantly on the leading 2003 entry, with an increase in average f-score from 0.5 to 0.62, where the f-score is the same adapted information retrieval measure used for the 2003 competition. The paper also discusses the Web-based deployment and evaluation of text locating systems, and one of the leading entries has now been deployed in this way. This mode of usage could lead to more complete and more immediate knowledge of the strengths and weaknesses of each newly developed system.", "title": "" }, { "docid": "26fc8289a213c51b43777fc909eaeb7e", "text": "This paper tackles the problem of recognizing characters in images of natural scenes. In particular, we focus on recognizing characters in situations that would traditionally not be handled well by OCR techniques. We present an annotated database of images containing English and Kannada characters. The database comprises images of street scenes taken in Bangalore, India using a standard camera. The problem is addressed in an object categorization framework based on a bag-of-visual-words representation. We assess the performance of various features based on nearest neighbour and SVM classification. 
It is demonstrated that the performance of the proposed method, using as few as 15 training images, can be far superior to that of commercial OCR systems. Furthermore, the method can benefit from synthetically generated training data, obviating the need for expensive data collection and annotation.", "title": "" } ]
[ { "docid": "59c83aa2f97662c168316f1a4525fd4d", "text": "Light field microscopy is a new technique for high-speed volumetric imaging of weakly scattering or fluorescent specimens. It employs an array of microlenses to trade off spatial resolution against angular resolution, thereby allowing a 4-D light field to be captured using a single photographic exposure without the need for scanning. The recorded light field can then be used to computationally reconstruct a full volume. In this paper, we present an optical model for light field microscopy based on wave optics, instead of previously reported ray optics models. We also present a 3-D deconvolution method for light field microscopy that is able to reconstruct volumes at higher spatial resolution, and with better optical sectioning, than previously reported. To accomplish this, we take advantage of the dense spatio-angular sampling provided by a microlens array at axial positions away from the native object plane. This dense sampling permits us to decode aliasing present in the light field to reconstruct high-frequency information. We formulate our method as an inverse problem for reconstructing the 3-D volume, which we solve using a GPU-accelerated iterative algorithm. Theoretical limits on the depth-dependent lateral resolution of the reconstructed volumes are derived. We show that these limits are in good agreement with experimental results on a standard USAF 1951 resolution target. Finally, we present 3-D reconstructions of pollen grains that demonstrate the improvements in fidelity made possible by our method.", "title": "" }, { "docid": "3ec2678c6e0b7b8eb92ab5b2fc1ca504", "text": "The current trend towards smaller and smaller mobile devices may cause considerable difficulties in using them. In this paper, we propose an interface called Anywhere Surface Touch, which allows any flat or curved surface in a real environment to be used as an input area. The interface uses only a single small camera and a contact microphone to recognize several kinds of interaction between the fingers of the user and the surface. The system recognizes which fingers are interacting and in which direction the fingers are moving. Additionally, the fusion of vision and sound allows the system to distinguish the contact conditions between the fingers and the surface. Evaluation experiments showed that users became accustomed to our system quickly, soon being able to perform input operations on various surfaces.", "title": "" }, { "docid": "244745da710e8c401173fe39359c7c49", "text": "BACKGROUND\nIntegrating information from the different senses markedly enhances the detection and identification of external stimuli. Compared with unimodal inputs, semantically and/or spatially congruent multisensory cues speed discrimination and improve reaction times. Discordant inputs have the opposite effect, reducing performance and slowing responses. These behavioural features of crossmodal processing appear to have parallels in the response properties of multisensory cells in the superior colliculi and cerebral cortex of non-human mammals. 
Although spatially concordant multisensory inputs can produce a dramatic, often multiplicative, increase in cellular activity, spatially disparate cues tend to induce a profound response depression.\n\n\nRESULTS\nUsing functional magnetic resonance imaging (fMRI), we investigated whether similar indices of crossmodal integration are detectable in human cerebral cortex, and for the synthesis of complex inputs relating to stimulus identity. Ten human subjects were exposed to varying epochs of semantically congruent and incongruent audio-visual speech and to each modality in isolation. Brain activations to matched and mismatched audio-visual inputs were contrasted with the combined response to both unimodal conditions. This strategy identified an area of heteromodal cortex in the left superior temporal sulcus that exhibited significant supra-additive response enhancement to matched audio-visual inputs and a corresponding sub-additive response to mismatched inputs.\n\n\nCONCLUSIONS\nThe data provide fMRI evidence of crossmodal binding by convergence in the human heteromodal cortex. They further suggest that response enhancement and depression may be a general property of multisensory integration operating at different levels of the neuroaxis and irrespective of the purpose for which sensory inputs are combined.", "title": "" }, { "docid": "2f7dd12e2bc56cddfa4b2dbd7e7a8c1a", "text": "Multi-dimensional arrays, or tensors, are increasingly found in fields such as signal processing and recommender systems. Real-world tensors can be enormous in size and often very sparse. There is a need for efficient, high-performance tools capable of processing the massive sparse tensors of today and the future. This paper introduces SPLATT, a C library with shared-memory parallelism for three-mode tensors. SPLATT contains algorithmic improvements over competing state of the art tools for sparse tensor factorization. SPLATT has a fast, parallel method of multiplying a matricized tensor by a Khatri-Rao product, which is a key kernel in tensor factorization methods. SPLATT uses a novel data structure that exploits the sparsity patterns of tensors. This data structure has a small memory footprint similar to competing methods and allows for the computational improvements featured in our work. We also present a method of finding cache-friendly reorderings and utilizing them with a novel form of cache tiling. To our knowledge, this is the first work to investigate reordering and cache tiling in this context. SPLATT averages almost 30x speedup compared to our baseline when using 16 threads and reaches over 80x speedup on NELL-2.", "title": "" }, { "docid": "4d8f38413169a572c0087fd180a97e44", "text": "and the Alfred P. Sloan Foundation. Appleyard received support from the National Science Foundation under Grant No. 0438736. Jon Perr and Patrick Sullivan ably assisted with the interviews of Open Source Software leaders. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the above funding sources or any other individuals or organizations. Open Innovation and Strategy", "title": "" }, { "docid": "e1f531740891d47387a2fc2ef4f71c46", "text": "As continued scaling of silicon FETs grows increasingly challenging, alternative paths for improving digital system energy efficiency are being pursued. 
These paths include replacing the transistor channel with emerging nanomaterials (such as carbon nanotubes), as well as utilizing negative capacitance effects in ferroelectric materials in the FET gate stack, e.g., to improve sub-threshold slope beyond the 60 mV/decade limit. However, which path provides the largest energy efficiency benefits—and whether these multiple paths can be combined to achieve additional energy efficiency benefits—is still unclear. Here, we experimentally demonstrate the first negative capacitance carbon nanotube FETs (CNFETs), combining the benefits of both carbon nanotube channels and negative capacitance effects. We demonstrate negative capacitance CNFETs, achieving sub-60 mV/decade sub-threshold slope with an average sub-threshold slope of 55 mV/decade at room temperature. The average ON-current (${I}_{\\mathrm{ON}}$) of these negative capacitance CNFETs improves by $2.1\\times$ versus baseline CNFETs (i.e., without negative capacitance) for the same OFF-current (${I}_{\\mathrm{OFF}}$). This work demonstrates a promising path forward for future generations of energy-efficient electronic systems.", "title": "" }, { "docid": "dd942595f8187493ce08706401350969", "text": "We study the problem of cooperative multi-agent reinforcement learning with a single joint reward signal. This class of learning problems is difficult because of the often large combined action and observation spaces. In the fully centralized and decentralized approaches, we find the problem of spurious rewards and a phenomenon we call the “lazy agent” problem, which arises due to partial observability. We address these problems by training individual agents with a novel value-decomposition network architecture, which learns to decompose the team value function into agent-wise value functions.", "title": "" }, { "docid": "b894e6a16f5082bc3c28894fedc87232", "text": "Goal: The use of an online game for learning in higher education aims to make complex theoretical knowledge more approachable. Permanent repetition will lead to a more in-depth learning. Objective: To gain insight into whether and to what extent, online games have the potential to contribute to student learning in higher education. Experimental Setting: The online game was used for the first time during a lecture on Structural Concrete at Master’s level, involving 121 seventh semester students. Methods: Pretest/posttest experimental control group design with questionnaires and an independent online evaluation. Results: The minimum learning result of playing the game was equal to that achieved with traditional methods. A factor called “joy” was introduced, according to Nielsen (2002), which was amazingly high. Conclusion: The experimental findings support the efficacy of game playing. Students enjoyed this kind of e-Learning.", "title": "" }, { "docid": "9235935bc5fdc927a88cb797d6b90ffa", "text": "The wireless sensor network \"macroscope\" offers the potential to advance science by enabling dense temporal and spatial monitoring of large physical volumes. This paper presents a case study of a wireless sensor network that recorded 44 days in the life of a 70-meter tall redwood tree, at a density of every 5 minutes in time and every 2 meters in space. 
Each node measured air temperature, relative humidity, and photosynthetically active solar radiation. The network captured a detailed picture of the complex spatial variation and temporal dynamics of the microclimate surrounding a coastal redwood tree. This paper describes the deployed network and then employs a multi-dimensional analysis methodology to reveal trends and gradients in this large and previously-unobtainable dataset. An analysis of system performance data is then performed, suggesting lessons for future deployments.", "title": "" }, { "docid": "59608978a30fcf6fc8bc0b92982abe69", "text": "The self-advocacy movement (Dybwad & Bersani, 1996) grew out of resistance to oppressive practices of institutionalization (and worse) for people with cognitive disabilities. Moving beyond the worst abuses, people with cognitive disabilities seek as full participation in society as possible.", "title": "" }, { "docid": "f8956705295a454b99eb81bd41f0e8aa", "text": "Virtual Reality systems have drawn much attention by researchers and companies in the last few years. Virtual Reality is a term that applies to computer-simulated environments that can simulate physical presence in places in the real world, as well as in imaginary worlds. Interactivity and its captivating power, contribute to the feeling of being the part of the action on the virtual safe environment, without any real danger. So, Virtual Reality has been a promising technology applicable in various domains of application such as training simulators, medical and health care, education, scientific visualization, and entertainment industry. Virtual reality can lead to state of the art technologies like Second Life, too. Like many advantageous technologies, beside opportunities of Virtual Reality and Second Life, inevitable challenges appear, too. This paper is a technical brief on Virtual Reality technology and its opportunities and challenges in different areas.", "title": "" }, { "docid": "f5e6df40898a5b84f8e39784f9b56788", "text": "OBJECTIVE\nTo determine the prevalence of anxiety and depression among medical students at Nishtar Medical College, Multan.\n\n\nMETHODS\nA cross-sectional study was carried out at Nishtar Medical College, Multan in 2008. The questionnaire was administered to 815 medical students who had spent more than 6 months in college and had no self reported physical illness. They were present at the time of distribution of the questionnaires and consented. Prevalence of anxiety and depression was assessed using a structured validated questionnaire, the Aga Khan University Anxiety and Depression Scale with a cut-off score of 19. Data Analysis was done using SPSS v. 14.\n\n\nRESULTS\nOut of 815 students, 482 completed the questionnaire with a response rate of 59.14%. The mean age of students was 20.66 +/- 1.8 years. A high prevalence of anxiety and depression (43.89%) was found amongst medical students. Prevalence of anxiety and depression among students of first, second, third, fourth and final years was 45.86%, 52.58%, 47.14%, 28.75% and 45.10% respectively. Female students were found to be more depressed than male students (OR = 2.05, 95% CI = 1.42-2.95, p = 0.0001). There was a significant association between the prevalence of anxiety and depression and the respective year of medical college (p = 0.0276). 
It was seen that age, marital status, locality and total family income did not significantly affect the prevalence of anxiety and depression.\n\n\nCONCLUSIONS\nThe results showed that medical students constitute a vulnerable group that has a high prevalence of psychiatric morbidity comprising of anxiety and depression.", "title": "" }, { "docid": "8b4ddcb98f8a5c5e51f02c23b0aee764", "text": "The problem of identifying approximately duplicate record in database is an essential step for data cleaning & data integration process. A dynamic web page is displayed to show the results as well as other relevant advertisements that seem relevant to the query. The real world entities have two or more representation in databases. When dealing with large amount of data it is important that there be a well defined and tested mechanism to filter out duplicate result. This keeps the result relevant to the queries. Duplicate record exists in the query result of many web databases especially when the duplicates are defined based on only some of the fields in a record. Using exact matching technique Records that are exactly same can be detected. The system that helps user to integrate and compares the query results returned from multiple web databases matches the different sources records that referred to the same real world entity. In this paper, we analyze the literature on duplicate record detection. We cover similarity metrics which are commonly used to detect similar field entries, and present an extensive set of duplicate detection algorithms that can detect approximately duplicate records in a database also the techniques for improving the efficiency and scalability of approximate duplicate detection algorithms are covered. We conclude with coverage of existing tools and with a brief discussion of the big open problems in the area.", "title": "" }, { "docid": "3716c5aa7139aeb5ec6db87da7f0285d", "text": "In a temporal database, time values are associated with data item to indicate their periods of validity. We propose a model for temporal databases within the framework of the classical database theory. Our model is realized as a temporal parameterization of static relations. We do not impose any restrictions upon the schemes of temporal relations. The classical concepts of normal forms and dependencies are easily extended to our model, allowing a suitable design for a database scheme. We present a relational algebra and a tuple calculus for our model and prove their equivalence. Our data model is homogeneous in the sense that the periods of validity of all the attributes in a given tuple of a temporal relation are identical. We discuss how to relax the homogeneity requirement to extend the application domain of our approach.", "title": "" }, { "docid": "0991b582ad9fcc495eb534ebffe3b5f8", "text": "A computationally cheap extension from single-microphone acoustic echo cancellation (AEC) to multi-microphone AEC is presented for the case of a single loudspeaker. It employs the idea of common-acoustical-pole and zero modeling of room transfer functions (RTFs). The RTF models used for multi-microphone AEC share a fixed common denominator polynomial, which is calculated off-line by means of a multi-channel warped linear prediction. By using the common denominator polynomial as a prefilter, only the numerator polynomial has to be estimated recursively for each microphone, hence adapting to changes in the RTFs. 
This approach allows to decrease the number of numerator coefficients by one order of magnitude for each microphone compared with all-zero modeling. In a first configuration, the prefiltering is done on the adaptive filter signal, hence achieving a pole-zero model of the RTF in the AEC. In a second configuration, the (inverse) prefiltering is done on the loudspeaker signal, hence achieving a dereverberation effect, in addition to AEC, on the microphone signals.", "title": "" }, { "docid": "8c54780de6c8d8c3fa71b31015ad044e", "text": "Integrins are cell surface receptors for extracellular matrix proteins and play a key role in cell survival, proliferation, migration and gene expression. Integrin signaling has been shown to be deregulated in several types of cancer, including prostate cancer. This review is focused on integrin signaling pathways known to be deregulated in prostate cancer and known to promote prostate cancer progression.", "title": "" }, { "docid": "296f18277958621763646519a7224193", "text": "This chapter examines health promotion and disease prevention from the perspective of social cognitive theory. This theory posits a multifaceted causal structure in which self-efficacy beliefs operate in concert with cognized goals, outcome expectations, and perceived environmental impediments and facilitators in the regulation of human motivation, action, and well-being. Perceived self-efficacy is a key factor in the causal structure because it operates on motivation and action both directly and through its impact on the other determinants. The areas of overlap of sociocognitive determinants with some of the most widely applied psychosocial models of health are identified. Social cognitive theory addresses the sociostructural determinants of health as well as the personal determinants. A comprehensive approach to health promotion requires changing the practices of social systems that have widespread detrimental effects on health rather than solely changing the habits of individuals. Further progress in this field requires building new structures for health promotion, new systems for risk reduction and greater emphasis on health policy initiatives. People's beliefs in their collective efficacy to accomplish social change, therefore, play a key role in the policy and public health perspective to health promotion and disease prevention. Bandura, A. (1998). Health promotion from the perspective of social cognitive theory. Psychology and Health, 13, 623-649.", "title": "" }, { "docid": "46714f589bdf57d734fc4eff8741d39b", "text": "As an essential operation in data cleaning, the similarity join has attracted considerable attention from the database community. In this article, we study string similarity joins with edit-distance constraints, which find similar string pairs from two large sets of strings whose edit distance is within a given threshold. Existing algorithms are efficient either for short strings or for long strings, and there is no algorithm that can efficiently and adaptively support both short strings and long strings. To address this problem, we propose a new filter, called the segment filter. We partition a string into a set of segments and use the segments as a filter to find similar string pairs. We first create inverted indices for the segments. Then for each string, we select some of its substrings, identify the selected substrings from the inverted indices, and take strings on the inverted lists of the found substrings as candidates of this string. 
Finally, we verify the candidates to generate the final answer. We devise efficient techniques to select substrings and prove that our method can minimize the number of selected substrings. We develop novel pruning techniques to efficiently verify the candidates. We also extend our techniques to support normalized edit distance. Experimental results show that our algorithms are efficient for both short strings and long strings, and outperform state-of-the-art methods on real-world datasets.", "title": "" }, { "docid": "813e41234aad749022a4d655af987ad6", "text": "Three- and four-element eyepiece designs are presented each with a different type of radial gradient-index distribution. Both quadratic and modified quadratic index profiles are shown to provide effective control of the field aberrations. In particular, the three-element design with a quadratic index profile demonstrates that the inhomogeneous power contribution can make significant contributions to the overall system performance, especially the astigmatism correction. Using gradient-index components has allowed for increased eye relief and field of view making these designs comparable with five- and six-element ones.", "title": "" }, { "docid": "febed6b06359fe35437e7fa16ed0cbfa", "text": "Videos recorded on moving cameras are often known to be shaky due to unstable carrier motion and the video stabilization problem involves inferring the intended smooth motion to keep and the unintended shaky motion to remove. However, conventional methods typically require proper, scenario-specific parameter setting, which does not generalize well across different scenarios. Moreover, we observe that a stable video should satisfy two conditions: a smooth trajectory and consistent inter-frame transition. While conventional methods only target at the former condition, we address these two issues at the same time. In this paper, we propose a homography consistency based algorithm to directly extract the optimal smooth trajectory and evenly distribute the inter-frame transition. By optimizing in the homography domain, our method does not need further matrix decomposition and parameter adjustment, automatically adapting to all possible types of motion (eg. translational or rotational) and video properties (eg. frame rates). We test our algorithm on translational videos recorded from a car and rotational videos from a hovering aerial vehicle, both of high and low frame rates. Results show our method widely applicable to different scenarios without any need of additional parameter adjustment.", "title": "" } ]
scidocsrr
b95fc68fc7586b8f0b79c21da59bdca6
Integrated Speech Enhancement Method Based on Weighted Prediction Error and DNN for Dereverberation and Denoising
[ { "docid": "413b21bece889166a385651ba5cd8512", "text": "Monaural speech separation is a fundamental problem in robust speech processing. Recently, deep neural network (DNN)-based speech separation methods, which predict either clean speech or an ideal time-frequency mask, have demonstrated remarkable performance improvement. However, a single DNN with a given window length does not leverage contextual information sufficiently, and the differences between the two optimization objectives are not well understood. In this paper, we propose a deep ensemble method, named multicontext networks, to address monaural speech separation. The first multicontext network averages the outputs of multiple DNNs whose inputs employ different window lengths. The second multicontext network is a stack of multiple DNNs. Each DNN in a module of the stack takes the concatenation of original acoustic features and expansion of the soft output of the lower module as its input, and predicts the ratio mask of the target speaker; the DNNs in the same module employ different contexts. We have conducted extensive experiments with three speech corpora. The results demonstrate the effectiveness of the proposed method. We have also compared the two optimization objectives systematically and found that predicting the ideal time-frequency mask is more efficient in utilizing clean training speech, while predicting clean speech is less sensitive to SNR variations.", "title": "" } ]
[ { "docid": "f565a815207932f6603b19fc57b02d4c", "text": "This study was aimed at extending the use of assistive technology (i.e., photocells, interface and personal computer) to support choice strategies by three girls with Rett syndrome and severe to profound developmental disabilities. A second purpose of the study was to reduce stereotypic behaviors exhibited by the participants involved (i.e., body rocking, hand washing and hand mouthing). Finally, a third goal of the study was to monitor the effects of such program on the participants' indices of happiness. The study was carried out according to a multiple probe design across responses for each participant. Results showed that the three girls increased the adaptive responses and decreased the stereotyped behaviors during intervention phases compared to baseline. Moreover, during intervention phases, the indices of happiness augmented for each girl as well. Clinical, psychological and rehabilitative implications of the findings are discussed.", "title": "" }, { "docid": "d59bd1ac3d670ef980d16cf51041849c", "text": "Mutation analysis evaluates a testing or debugging technique by measuring how well it detects mutants, which are systematically seeded, artificial faults. Mutation analysis is inherently expensive due to the large number of mutants it generates and due to the fact that many of these generated mutants are not effective; they are redundant, equivalent, or simply uninteresting and waste computational resources. A large body of research has focused on improving the scalability of mutation analysis and proposed numerous optimizations to, e.g., select effective mutants or efficiently execute a large number of tests against a large number of mutants. However, comparatively little research has focused on the costs and benefits of mutation testing, in which mutants are presented as testing goals to a developer, in the context of an industrial-scale software development process. This paper draws on an industrial application of mutation testing, involving 30,000+ developers and 1.9 million change sets, written in 4 programming languages. It shows that mutation testing with productive mutants does not add a significant overhead to the software development process and reports on mutation testing benefits perceived by developers. This paper also quantifies the costs of unproductive mutants, and the results suggest that achieving mutation adequacy is neither practical nor desirable. Finally, this paper describes lessons learned from these studies, highlights the current challenges of efficiently and effectively applying mutation testing in an industrial-scale software development process, and outlines research directions.", "title": "" }, { "docid": "b00c6771f355577437dee2cdd63604b8", "text": "A person gets frustrated when he faces slow speed as many devices are connected to the same network. As the number of people accessing wireless internet increases, it’s going to result in clogged airwaves. Li-Fi is transmission of data through illumination by taking the fiber out of fiber optics by sending data through a LED light bulb that varies in intensity faster than the human eye can follow.", "title": "" }, { "docid": "7a8619e3adf03c8b00a3e830c3f1170b", "text": "We present a robot-pose-registration algorithm, which is entirely based on large planar-surface patches extracted from point clouds sampled from a three-dimensional (3-D) sensor. 
This approach offers an alternative to the traditional point-to-point iterative-closest-point (ICP) algorithm, its point-to-plane variant, as well as newer grid-based algorithms, such as the 3-D normal distribution transform (NDT). The simpler case of known plane correspondences is tackled first by deriving expressions for least-squares pose estimation considering plane-parameter uncertainty computed during plane extraction. Closed-form expressions for covariances are also derived. To round-off the solution, we present a new algorithm, which is called minimally uncertain maximal consensus (MUMC), to determine the unknown plane correspondences by maximizing geometric consistency by minimizing the uncertainty volume in configuration space. Experimental results from three 3-D sensors, viz., Swiss-Ranger, University of South Florida Odetics Laser Detection and Ranging, and an actuated SICK S300, are given. The first two have low fields of view (FOV) and moderate ranges, while the third has a much bigger FOV and range. Experimental results show that this approach is not only more robust than point- or grid-based approaches in plane-rich environments, but it is also faster, requires significantly less memory, and offers a less-cluttered planar-patches-based visualization.", "title": "" }, { "docid": "e464e7335a4bc1af76d57b158dfcf435", "text": "An elementary way of using language is to refer to objects. Often, these objects are physically present in the shared environment and reference is done via mention of perceivable properties of the objects. This is a type of language use that is modelled well neither by logical semantics nor by distributional semantics, the former focusing on inferential relations between expressed propositions, the latter on similarity relations between words or phrases. We present an account of word and phrase meaning that is perceptually grounded, trainable, compositional, and ‘dialogueplausible’ in that it computes meanings word-by-word. We show that the approach performs well (with an accuracy of 65% on a 1-out-of-32 reference resolution task) on direct descriptions and target/landmark descriptions, even when trained with less than 800 training examples and automatically transcribed utterances.", "title": "" }, { "docid": "1e82e123cacca01a84a8ea2fef641d98", "text": "We propose a new class of convex penalty functions, called variational Gram functions (VGFs), that can promote pairwise relations, such as orthogonality, among a set of vectors in a vector space. These functions can serve as regularizers in convex optimization problems arising from hierarchical classification, multitask learning, and estimating vectors with disjoint supports, among other applications. We study necessary and sufficient conditions under which a VGF is convex, and give a characterization of its subdifferential. We show how to compute its proximal operator, and discuss efficient optimization algorithms for regularized loss minimization problems where the loss admits a simple variational representation and the regularizer is a VGF. We also establish a general representer theorem for such learning problems. Lastly, numerical experiments on a hierarchical classification problem are presented to demonstrate the effectiveness of VGFs and the associated optimization algorithms.", "title": "" }, { "docid": "a3685518bd7248602b6a3143371e4ffc", "text": "The Singular Value Decomposition (SVD) of a matrix is a linear algebra tool that has been successfully applied to a wide variety of domains. 
The present paper is concerned with the problem of estimating the Jacobian of the SVD components of a matrix with respect to the matrix itself. An exact analytic technique is developed that facilitates the estimation of the Jacobian using calculations based on simple linear algebra. Knowledge of the Jacobian of the SVD is very useful in certain applications involving multivariate regression or the computation of the uncertainty related to estimates obtained through the SVD. The usefulness and generality of the proposed technique is demonstrated by applying it to the estimation of the uncertainty for three different vision problems, namely self-calibration, epipole computation and rigid motion estimation. Key-words: Singular Value Decomposition, Jacobian, Uncertainty, Calibration, Structure from Motion. M. Lourakis was supported by the VIRGO research network (EC Contract No ERBFMRX-CT96-0049) of the TMR Programme. Computing the Jacobian of the Singular Value Decomposition: Theory and Applications. Abstract: The Singular Value Decomposition (SVD) of a matrix is an algebraic tool that has found numerous applications in computer vision. In this report, we address the problem of estimating the Jacobian of the SVD with respect to the coefficients of the initial matrix. This Jacobian is very useful for a whole range of applications involving least-squares estimation (for which the SVD is used) or the computation of the uncertainty of quantities estimated in this way. A simple analytic solution to this problem is presented. It expresses the Jacobian in terms of the SVD of the matrix using very simple linear algebra operations. The usefulness and generality of the technique is demonstrated by applying it to three vision problems: self-calibration, epipole computation and rigid motion estimation. Keywords: Singular value decomposition, Jacobian, Uncertainty, Calibration, Structure from motion.", "title": "" }, { "docid": "f456edd4d56dab8f0a60a3cef87f6cdb", "text": "In this paper, we propose Sequential Grouping Networks (SGN) to tackle the problem of object instance segmentation. SGNs employ a sequence of neural networks, each solving a sub-grouping problem of increasing semantic complexity in order to gradually compose objects out of pixels. In particular, the first network aims to group pixels along each image row and column by predicting horizontal and vertical object breakpoints. These breakpoints are then used to create line segments. By exploiting two-directional information, the second network groups horizontal and vertical lines into connected components. Finally, the third network groups the connected components into object instances. Our experiments show that our SGN significantly outperforms state-of-the-art approaches on both the Cityscapes dataset and PASCAL VOC.", "title": "" }, { "docid": "90b913e3857625f3237ff7a47f675fbb", "text": "A new approach for the design of UWB hairpin-comb filters is presented. The filters can be designed to possess broad upper stopband characteristics by controlling the overall size of their resonators. 
The measured frequency characteristics of implemented UWB filters show potential first spurious passbands centered at about six times the fundamental passband center frequencies.", "title": "" }, { "docid": "2d718fdaecb286ef437b81d2a31383dd", "text": "In this paper, we present a novel non-parametric polygonal approximation algorithm for digital planar curves. The proposed algorithm first selects a set of points (called cut-points) on the contour which are of very ‘high’ curvature. An optimization procedure is then applied to find adaptively the best fitting polygonal approximations for the different segments of the contour as defined by the cut-points. The optimization procedure uses one of the efficiency measures for polygonal approximation algorithms as the objective function. Our algorithm adaptively locates segments of the contour with different levels of details. The proposed algorithm follows the contour more closely where the level of details on the curve is high, while addressing noise by using suppression techniques. This makes the algorithm very robust for noisy, real-life contours having different levels of details. The proposed algorithm performs favorably when compared with other polygonal approximation algorithms using the popular shapes. In addition, the effectiveness of the algorithm is shown by measuring its performance over a large set of handwritten Arabic characters and MPEG7 CE Shape-1 Part B database. Experimental results demonstrate that the proposed algorithm is very stable and robust compared with other algorithms. 2010 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "e3d0d40a685d5224084bf350dfb3b59b", "text": "This review analyzes the methods being used and developed in global environmental governance (GEG), an applied field that employs insights and tools from a variety of disciplines both to understand pressing environmental problems and to determine how to address them collectively. We find that methods are often underspecified in GEG research. We undertake a critical review of data collection and analysis in three categories: qualitative, quantitative, and modeling and scenario building. We include examples and references from recent studies to show when and how best to utilize these different methods to conduct problem-driven research. GEG problems are often characterized by institutional and issue complexity, linkages, and multiscalarity that pose challenges for many conventional methodological approaches. As a result, given the large methodological toolbox available to applied researchers, we recommend they adopt a reflective, pluralist, and often collaborative approach when choosing methods appropriate to these challenges.", "title": "" }, { "docid": "6f5a3f7ddb99eee445d342e6235280c3", "text": "Although aesthetic experiences are frequent in modern life, there is as of yet no scientifically comprehensive theory that explains what psychologically constitutes such experiences. These experiences are particularly interesting because of their hedonic properties and the possibility to provide self-rewarding cognitive operations. We shall explain why modern art's large number of individualized styles, innovativeness and conceptuality offer positive aesthetic experiences. 
Moreover, the challenge of art is mainly driven by a need for understanding. Cognitive challenges of both abstract art and other conceptual, complex and multidimensional stimuli require an extension of previous approaches to empirical aesthetics. We present an information-processing stage model of aesthetic processing. According to the model, aesthetic experiences involve five stages: perception, explicit classification, implicit classification, cognitive mastering and evaluation. The model differentiates between aesthetic emotion and aesthetic judgments as two types of output.", "title": "" }, { "docid": "c536e79078d7d5778895e5ac7f02c95e", "text": "Block-based programming languages like Scratch, Alice and Blockly are becoming increasingly common as introductory languages in programming education. There is substantial research showing that these visual programming environments are suitable for teaching programming concepts. But, what do people do when they use Scratch? In this paper we explore the characteristics of Scratch programs. To this end we have scraped the Scratch public repository and retrieved 250,000 projects. We present an analysis of these projects in three different dimensions. Initially, we look at the types of blocks used and the size of the projects. We then investigate complexity, used abstractions and programming concepts. Finally we detect code smells such as large scripts, dead code and duplicated code blocks. Our results show that 1) most Scratch programs are small, however Scratch programs consisting of over 100 sprites exist, 2) programming abstraction concepts like procedures are not commonly used and 3) Scratch programs do suffer from code smells including large scripts and unmatched broadcast signals.", "title": "" }, { "docid": "a43a0f828859cc6f24881d26dacb63e6", "text": "The emergence in the field of fingerprint recognition witness several efficient techniques that propose matching and recognition in less time. The latent fingerprints posed a challenge for such efficient techniques that may deviates results from ideal to worse. The minutiae are considered as a discriminative feature of finger patterns which is assessed in almost every technique for recognition purpose. But in latent patterns such minutiae may be missed or may have contaminated noise. In this paper, we presents such work that demonstrate the solution for latent fingerprints recognition but in ideal time. We also gathered the description about the techniques that have been evaluated on standard NIST Special Dataset (SD)27 of latent fingerprint.", "title": "" }, { "docid": "34508dac189b31c210d461682fed9f67", "text": "Life is more than cat pictures. There are tough days, heartbreak, and hugs. Under what contexts do people share these feelings online, and how do their friends respond? Using millions of de-identified Facebook status updates with poster-annotated feelings (e.g., “feeling thankful” or “feeling worried”), we examine the magnitude and circumstances in which people share positive or negative feelings and characterize the nature of the responses they receive. We find that people share greater proportions of both positive and negative emotions when their friend networks are smaller and denser. Consistent with social sharing theory, hearing about a friend’s troubles on Facebook causes friends to reply with more emotional and supportive comments. Friends’ comments are also more numerous and longer. 
Posts with positive feelings, on the other hand, receive more likes, and their comments have more positive language. Feelings that relate to the poster’s self worth, such as “feeling defeated,” “feeling unloved,” or “feeling accomplished” amplify these effects.", "title": "" }, { "docid": "31d66211511ae35d71c7055a2abf2801", "text": "BACKGROUND\nPrevious evidence points to a causal link between playing action video games and enhanced cognition and perception. However, benefits of playing other video games are under-investigated. We examined whether playing non-action games also improves cognition. Hence, we compared transfer effects of an action and other non-action types that required different cognitive demands.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nWe instructed 5 groups of non-gamer participants to play one game each on a mobile device (iPhone/iPod Touch) for one hour a day/five days a week over four weeks (20 hours). Games included action, spatial memory, match-3, hidden- object, and an agent-based life simulation. Participants performed four behavioral tasks before and after video game training to assess for transfer effects. Tasks included an attentional blink task, a spatial memory and visual search dual task, a visual filter memory task to assess for multiple object tracking and cognitive control, as well as a complex verbal span task. Action game playing eliminated attentional blink and improved cognitive control and multiple-object tracking. Match-3, spatial memory and hidden object games improved visual search performance while the latter two also improved spatial working memory. Complex verbal span improved after match-3 and action game training.\n\n\nCONCLUSION/SIGNIFICANCE\nCognitive improvements were not limited to action game training alone and different games enhanced different aspects of cognition. We conclude that training specific cognitive abilities frequently in a video game improves performance in tasks that share common underlying demands. Overall, these results suggest that many video game-related cognitive improvements may not be due to training of general broad cognitive systems such as executive attentional control, but instead due to frequent utilization of specific cognitive processes during game play. Thus, many video game training related improvements to cognition may be attributed to near-transfer effects.", "title": "" }, { "docid": "3361e6c7a448e69a73e8b3e879815386", "text": "The neck is not only the first anatomical area to show aging but also contributes to the persona of the individual. The understanding the aging process of the neck is essential for neck rejuvenation. Multiple neck rejuvenation techniques have been reported in the literature. In 1974, Skoog [1] described the anatomy of the superficial musculoaponeurotic system (SMAS) and its role in the aging of the neck. Recently, many patients have expressed interest in minimally invasive surgery with a low risk of complications and short recovery period. The use of thread for neck rejuvenation and the concept of the suture suspension neck lift have become widespread as a convenient and effective procedure; nevertheless, complications have also been reported such as recurrence, inadequate correction, and palpability of the sutures. In this study, we analyzed a new type of thread lift: elastic lift that uses elastic thread (Elasticum; Korpo SRL, Genova, Italy). We already use this new technique for the midface lift and can confirm its efficacy and safety in that context. 
The purpose of this study was to evaluate the outcomes and safety of the elastic lift technique for neck region lifting.", "title": "" }, { "docid": "fc9b4cb8c37ffefde9d4a7fa819b9417", "text": "Automatic neural architecture design has shown its potential in discovering powerful neural network architectures. Existing methods, no matter based on reinforcement learning or evolutionary algorithms (EA), conduct architecture search in a discrete space, which is highly inefficient. In this paper, we propose a simple and efficient method to automatic neural architecture design based on continuous optimization. We call this new approach neural architecture optimization (NAO). There are three key components in our proposed approach: (1) An encoder embeds/maps neural network architectures into a continuous space. (2) A predictor takes the continuous representation of a network as input and predicts its accuracy. (3) A decoder maps a continuous representation of a network back to its architecture. The performance predictor and the encoder enable us to perform gradient based optimization in the continuous space to find the embedding of a new architecture with potentially better accuracy. Such a better embedding is then decoded to a network by the decoder. Experiments show that the architecture discovered by our method is very competitive for image classification task on CIFAR-10 and language modeling task on PTB, outperforming or on par with the best results of previous architecture search methods with a significantly reduction of computational resources. Specifically we obtain 2.11% test set error rate for CIFAR-10 image classification task and 56.0 test set perplexity of PTB language modeling task. The best discovered architectures on both tasks are successfully transferred to other tasks such as CIFAR-100 and WikiText-2. Furthermore, combined with the recent proposed weight sharing mechanism, we discover powerful architecture on CIFAR-10 (with error rate 3.53%) and on PTB (with test set perplexity 56.6), with very limited computational resources (less than 10 GPU hours) for both tasks.", "title": "" }, { "docid": "a9f2acbe4bd04abc678316970828ef6d", "text": "— Choosing a university is one of the most important decisions that affects future of young student. This decision requires considering a number of criteria not only numerical but also linguistic. Istanbul is the first alternative for young students' university choice in Turkey. As well as the state universities, the private universities are also so popular in this city. In this paper, a ranking method that manages to choice of university selection is created by using technique for order preference by similarity to ideal solution (TOPSIS) method based on type-2 fuzzy set. This method has been used for ranking private universities in Istanbul.", "title": "" }, { "docid": "78a38e1bdb15fc57d94a1d8ddd330459", "text": "One of the most powerful aspects of biological inquiry using model organisms is the ability to control gene expression. A holy grail is both temporal and spatial control of the expression of specific gene products - that is, the ability to express or withhold the activity of genes or their products in specific cells at specific times. Ideally such a method would also regulate the precise levels of gene activity, and alterations would be reversible. The related goal of controlled or purposefully randomized expression of visible markers is also tremendously powerful. 
While not all of these feats have been accomplished in Caenorhabditis elegans to date, much progress has been made, and recent technologies put these goals within closer reach. Here, I present published examples of successful two-component site-specific recombination in C. elegans. These technologies are based on the principle of controlled intra-molecular excision or inversion of DNA sequences between defined sites, as driven by FLP or Cre recombinases. I discuss several prospects for future applications of this technology.", "title": "" } ]
scidocsrr
81d208da1f8bc86a369e5608a8e6dd6b
Automated Attack Planning
[ { "docid": "822c41ec0b2da978233d59c8fd871936", "text": "We present a novel POMDP planning algorithm called heuristic search value iteration (HSVI). HSVI is an anytime algorithm that returns a policy and a provable bound on its regret with respect to the optimal policy. HSVI gets its power by combining two well-known techniques: attention-focusing search heuristics and piecewise linear convex representations of the value function. HSVI’s soundness and convergence have been proven. On some benchmark problems from the literature, HSVI displays speedups of greater than 100 with respect to other state-of-the-art POMDP value iteration algorithms. We also apply HSVI to a new rover exploration problem 10 times larger than most POMDP problems in the literature.", "title": "" } ]
[ { "docid": "9b2e025c6bb8461ddb076301003df0e4", "text": "People are sharing their opinions, stories and reviews through online video sharing websites every day. Studying sentiment and subjectivity in these opinion videos is experiencing a growing attention from academia and industry. While sentiment analysis has been successful for text, it is an understudied research question for videos and multimedia content. The biggest setbacks for studies in this direction are lack of a proper dataset, methodology, baselines and statistical analysis of how information from different modality sources relate to each other. This paper introduces to the scientific community the first opinion-level annotated corpus of sentiment and subjectivity analysis in online videos called Multimodal Opinionlevel Sentiment Intensity dataset (MOSI). The dataset is rigorously annotated with labels for subjectivity, sentiment intensity, per-frame and per-opinion annotated visual features, and per-milliseconds annotated audio features. Furthermore, we present baselines for future studies in this direction as well as a new multimodal fusion approach that jointly models spoken words and visual gestures.", "title": "" }, { "docid": "f8a89a023629fa9bcb2c3566b6817b0c", "text": "In this paper, we propose a robust on-the-fly estimator initialization algorithm to provide high-quality initial states for monocular visual-inertial systems (VINS). Due to the non-linearity of VINS, a poor initialization can severely impact the performance of either filtering-based or graph-based methods. Our approach starts with a vision-only structure from motion (SfM) to build the up-to-scale structure of camera poses and feature positions. By loosely aligning this structure with pre-integrated IMU measurements, our approach recovers the metric scale, velocity, gravity vector, and gyroscope bias, which are treated as initial values to bootstrap the nonlinear tightly-coupled optimization framework. We highlight that our approach can perform on-the-fly initialization in various scenarios without using any prior information about system states and movement. The performance of the proposed approach is verified through the public UAV dataset and real-time onboard experiment. We make our implementation open source, which is the initialization part integrated in the VINS-Mono1.", "title": "" }, { "docid": "d4f4939967b69eec9af8252759074820", "text": "Kernel methods are ubiquitous tools in machine learning. However, there is often little reason for the common practice of selecting a kernel a priori. Even if a universal approximating kernel is selected, the quality of the finite sample estimator may be greatly affected by the choice of kernel. Furthermore, when directly applying kernel methods, one typically needs to compute a N×N Gram matrix of pairwise kernel evaluations to work with a dataset of N instances. The computation of this Gram matrix precludes the direct application of kernel methods on large datasets, and makes kernel learning especially difficult. In this paper we introduce Bayesian nonparmetric kernel-learning (BaNK), a generic, data-driven framework for scalable learning of kernels. BaNK places a nonparametric prior on the spectral distribution of random frequencies allowing it to both learn kernels and scale to large datasets. We show that this framework can be used for large scale regression and classification tasks. 
Furthermore, we show that BaNK outperforms several other scalable approaches for kernel learning on a variety of real world datasets.", "title": "" }, { "docid": "a1e5885f0bc2feda1454f34efbcbedb2", "text": "tronomy. It is common practice for manufacturers of image acquisition devices to include dedicated image processing software, but these programs are usually not very flexible and/or do not allow more complex image manipulations. Image processing programs also are available by themselves. ImageJ holds a unique position because T he advances of the medical and biological sciences over recent years, and the growing importance of determining the relationships between structure and function, have made imaging an increasingly important discipline. The ubiquitousness of digital technology — from banal digital cameras to highly specific micro-CT scanners — has made images an essential part of a number of reAs the popularity of the ImageJ open-source, Java-based imaging program grows, its capabilities increase, too. It is now being used for imaging applications ranging from skin analysis to neuroscience. by Dr. Michael D. Abràmoff, University of Iowa Hospitals and Clinics; Dr. Paulo J. Magalhães, University of Padua; and Dr. Sunanda J. Ram, Louisiana State University Health Sciences Center Image Processing with ImageJ", "title": "" }, { "docid": "6543f2be14582b0c4d3fbd3185bc7771", "text": "Vital signs (i.e., heartbeat and respiration) are crucial physiological signals that are useful in numerous medical applications. The process of measuring these signals should be simple, reliable, and comfortable for patients. In this paper, a noncontact self-calibrating vital signs monitoring system based on the Doppler radar is presented. The system hardware and software were designed with a four-tiered layer structure. To enable accurate vital signs measurement, baseband signals in the radar sensor were modeled and a framework for signal demodulation was proposed. Specifically, a signal model identification method was formulated into a quadratically constrained l1 minimization problem and solved using the upper bound and linear matrix inequality (LMI) relaxations. The performance of the proposed system was comprehensively evaluated using three experimental sets, and the results indicated that this system can be used to effectively measure human vital signs.", "title": "" }, { "docid": "095f4ea337421d6e1310acf73977fdaa", "text": "We consider the problem of autonomous robotic laundry folding, and propose a solution to the perception and manipulation challenges inherent to the task. At the core of our approach is a quasi-static cloth model which allows us to neglect the complex dynamics of cloth under significant parts of the state space, allowing us to reason instead in terms of simple geometry. We present an algorithm which, given a 2D cloth polygon and a desired sequence of folds, outputs a motion plan for executing the corresponding manipulations, deemed g-folds, on a minimal number of robot grippers. We define parametrized fold sequences for four clothing categories: towels, pants, short-sleeved shirts, and long-sleeved shirts, each represented as polygons. We then devise a model-based optimization approach for visually inferring the class and pose of a spread-out or folded clothing article from a single image, such that the resulting polygon provides a parse suitable for these folding primitives. 
We test the manipulation and perception tasks individually, and combine them to implement an autonomous folding system on the Willow Garage PR2. This enables the PR2 to identify a clothing article spread out on a table, execute the computed folding sequence, and visually track its progress over successive folds.", "title": "" }, { "docid": "6a74c2d26f5125237929031cf1ccf204", "text": "Harnessing crowds can be a powerful mechanism for increasing innovation. However, current approaches to crowd innovation rely on large numbers of contributors generating ideas independently in an unstructured way. We introduce a new approach called distributed analogical idea generation, which aims to make idea generation more effective and less reliant on chance. Drawing from the literature in cognitive science on analogy and schema induction, our approach decomposes the creative process in a structured way amenable to using crowds. In three experiments we show that distributed analogical idea generation leads to better ideas than example-based approaches, and investigate the conditions under which crowds generate good schemas and ideas. Our results have implications for improving creativity and building systems for distributed crowd innovation.", "title": "" }, { "docid": "20f4bcde35458104271e9127d8b7f608", "text": "OBJECTIVES\nTo evaluate the effect of bulk-filling high C-factor posterior cavities on adhesion to cavity-bottom dentin.\n\n\nMETHODS\nA universal flowable composite (G-ænial Universal Flo, GC), a bulk-fill flowable base composite (SDR Posterior Bulk Fill Flowable Base, Dentsply) and a conventional paste-like composite (Z100, 3M ESPE) were bonded (G-ænial Bond, GC) into standardized cavities with different cavity configurations (C-factors), namely C=3.86 (Class-I cavity of 2.5mm deep, bulk-filled), C=5.57 (Class-I cavity of 4mm deep, bulk-filled), C=1.95 (Class-I cavity of 2.5mm deep, filled in three equal layers) and C=0.26 (flat surface). After one-week water storage, the restorations were sectioned in 4 rectangular micro-specimens and subjected to a micro-tensile bond strength (μTBS) test.\n\n\nRESULTS\nHighly significant differences were found between pairs of means of the experimental groups (Kruskal-Wallis, p<0.0001). Using the bulk-fill flowable base composite SDR (Dentsply), no significant differences in μTBS were measured among all cavity configurations (p>0.05). Using the universal flowable composite G-ænial Universal Flo (GC) and the conventional paste-like composite Z100 (3M ESPE), the μTBS to cavity-bottom dentin was not significantly different from that of SDR (Dentsply) when the cavities were layer-filled or the flat surface was build up in layers; it was however significantly lower when the Class-I cavities were filled in bulk, irrespective of cavity depth.\n\n\nSIGNIFICANCE\nThe filling technique and composite type may have a great impact on the adhesion of the composite, in particular in high C-factor cavities. While the bulk-fill flowable base composite provided satisfactory bond strengths regardless of filling technique and cavity depth, adhesion failed when conventional composites were used in bulk.", "title": "" }, { "docid": "531d387a14eefa6a8c45ad64039f29be", "text": "This paper presents an S-Transform based probabilistic neural network (PNN) classifier for recognition of power quality (PQ) disturbances. The proposed method requires less number of features as compared to wavelet based approach for the identification of PQ events. 
The features extracted through the S-Transform are trained by a PNN for automatic classification of the PQ events. Since the proposed methodology can reduce the features of the disturbance signal to a great extent without losing its original property, less memory space and learning PNN time are required for classification. Eleven types of disturbances are considered for the classification problem. The simulation results reveal that the combination of S-Transform and PNN can effectively detect and classify different PQ events. The classification performance of PNN is compared with a feedforward multilayer (FFML) neural network (NN) and learning vector quantization (LVQ) NN. It is found that the classification performance of PNN is better than both FFML and LVQ.", "title": "" }, { "docid": "0da1479719e63aa92d280dc627f3439d", "text": "This paper presents a low cost, precise and reliable inductive absolute position measurement system. It is suitable for rough industrial environments, offers a high inherent resolution (0.1 % to 0.01 % of antenna length), can measure target position over a wide measurement range and can potentially measure multiple target locations. The position resolution is improved by adding two additional finer pitched receive channels. The sensor works on principles similar to contactless resolvers. It consists of a rectangular antenna PCB and a passive LC resonance target. A mathematical model and the equivalent circuit of this kind of sensor is explained in detail. Such sensors suffer from transmitter to receiver coil capacitive crosstalk, which results in a phase sensitive offset. This crosstalk will be analyzed by a mathematical model and will be verified by measurements. Moreover, the mechanical transducer arrangement, the measurement setup and measured results will be presented.", "title": "" }, { "docid": "d87f336cc82cbd29df1f04095d98a7fb", "text": "The academic publishing world is changing significantly, with ever-growing numbers of publications each year and shifting publishing patterns. However, the metrics used to measure academic success, such as the number of publications, citation number, and impact factor, have not changed for decades. Moreover, recent studies indicate that these metrics have become targets and follow Goodhart’s Law, according to which “when a measure becomes a target, it ceases to be a good measure.” In this study, we analyzed over 120 million papers to examine how the academic publishing world has evolved over the last century. Our study shows that the validity of citation-based measures is being compromised and their usefulness is lessening. In particular, the number of publications has ceased to be a good metric as a result of longer author lists, shorter papers, and surging publication numbers. Citation-based metrics, such citation number and h-index, are likewise affected by the flood of papers, self-citations, and lengthy reference lists. Measures such as a journal’s impact factor have also ceased to be good metrics due to the soaring numbers of papers that are published in top journals, particularly from the same pool of authors. Moreover, by analyzing properties of over 2600 research fields, we observed that citation-based metrics are not beneficial for comparing researchers in different fields, or even in the same department. Academic publishing has changed considerably; now we need to reconsider how we measure success. 
Multimedia links: interactive data visualization, code tutorials, and a fields-of-study features table.", "title": "" }, { "docid": "8f917c8bde6f775c7421e72563abc34c", "text": "Cognitive radio techniques allow secondary users (SU's) to opportunistically access underutilized primary channels that are licensed to primary users. We consider a group of SU's with limited spectrum sensing capabilities working cooperatively to find primary channel spectrum holes. The objective is to design the optimal sensing and access policies that maximize the total secondary throughput on primary channels accrued over time. Although the problem can be formulated as a Partially Observable Markov Decision Process (POMDP), the optimal solutions are intractable. Instead, we find the optimal sensing policy within the class of myopic policies. Compared to other existing approaches, our policy is more realistic because it explicitly assigns SU's to sense specific primary channels by taking into account spatial and temporal variations of primary channels. Contributions: (1) formulation of a centralized spectrum sensing/access architecture that allows exploitation of all available primary spectrum holes; and (2) proposing sub-optimal myopic sensing policies with low-complexity implementations and performance close to the myopic policy. We show that our proposed sensing/access policy is close to the optimal POMDP solution and outperforms other proposed strategies. We also propose a Hidden Markov Model based algorithm to estimate the parameters of primary channel Markov models with a linear complexity.", "title": "" }, { "docid": "a1b20560bbd6124db8fc8b418cd1342c", "text": "Feature selection is often an essential data processing step prior to applying a learning algorithm. The removal of irrelevant and redundant information often improves the performance of machine learning algorithms. There are two common approaches: a wrapper uses the intended learning algorithm itself to evaluate the usefulness of features, while a filter evaluates features according to heuristics based on general characteristics of the data. The wrapper approach is generally considered to produce better feature subsets but runs much more slowly than a filter. This paper describes a new filter approach to feature selection that uses a correlation based heuristic to evaluate the worth of feature subsets. When applied as a data preprocessing step for two common machine learning algorithms, the new method compares favourably with the wrapper but requires much less computation.", "title": "" }, { "docid": "9b3a9613406bd15cf6d14861ee67a144", "text": "Introduction. Electrical stimulation is used in experimental human pain models. The aim was to develop a model that visualizes the distribution of electrical field in the esophagus close to ring and patch electrodes mounted on an esophageal catheter and to explain the obtained sensory responses. Methods. Electrical field distribution in esophageal layers (mucosa, muscle layers, and surrounding tissue) was computed using a finite element model based on a 3D model. Each layer was assigned different electrical properties. An electrical field exceeding 20 V/m was considered to activate the esophageal afferents. Results. The model output showed homogeneous and symmetrical field surrounding ring electrodes compared to a saddle-shaped field around patch electrodes. Increasing interelectrode distance enlarged the electrical field in muscle layer. Conclusion. 
Ring electrodes with 10 mm interelectrode distance seem optimal for future catheter designs. Though the model needs further validation, the results seem useful for electrode designs and understanding of electrical stimulation patterns.", "title": "" }, { "docid": "23670ac6fb88e2f5d3a31badc6dc38f9", "text": "The purpose of this review article is to report on the recent developments and the performance level achieved in the strained-Si/SiGe material system. In the first part, the technology of the growth of a high-quality strained-Si layer on a relaxed, linear or step-graded SiGe buffer layer is reviewed. Characterization results of strained-Si films obtained with secondary ion mass spectroscopy, Rutherford backscattering spectroscopy, atomic force microscopy, spectroscopic ellipsometry and Raman spectroscopy are presented. Techniques for the determination of bandgap parameters from electrical characterization of metal–oxide–semiconductor (MOS) structures on strained-Si film are discussed. In the second part, processing issues of strained-Si films in conventional Si technology with low thermal budget are critically reviewed. Thermal and low-temperature microwave plasma oxidation and nitridation of strained-Si layers are discussed. Some recent results on contact metallization of strained-Si using Ti and Pt are presented. In the last part, device applications of strained Si with special emphasis on heterostructure metal oxide semiconductor field effect transistors and modulation-doped field effect transistors are discussed. Design aspects and simulation results of nand p-MOS devices with a strained-Si channel are presented. Possible future applications of strained-Si/SiGe in high-performance SiGe CMOS technology are indicated.", "title": "" }, { "docid": "03d02a52eb1ed03a61fe05668cfe8166", "text": "The complexity of the world around us is creating a demand for novel interfaces that will simplify and enhance the way we interact with the environment. The recently unveiled Android Wear operating system addresses this demand by providing a modern system for all those companies that are now developing wearable devices, also known as \"wearables\". Wearability of robotic devices will enable novel forms of human intention recognition through haptic signals and novel forms of communication between humans and robots. Specifically, wearable haptics will enable devices to communicate with humans during their interaction with the environment they share. Wearable haptic technology have been introduced in our everyday life by Sony. In 1997 its DualShock controller for PlayStation revolutionized the gaming industry by introducing a simple but effective vibrotactile feedback. More recently, Apple unveiled the Apple Watch, which embeds a linear actuator that can make the watch vibrate. It is used whenever the wearer receives an alert or notification, or to communicate with other Apple Watch owners.", "title": "" }, { "docid": "3db4d7a83afbbadbafe3d1c4fddf51a0", "text": "A Successive approximation analog to digital converter (ADC) for data acquisition using fully CMOS high speed self-biased comparator circuit is discussed in this paper. ASIC finds greater demand when area and speed optimization are major concern and here the entire optimized design is done in CADENCE virtuoso EDA tool in 180nm technology. Towerjazz semiconductor foundry is the base for layout design and GDSII extraction. 
Comparison of different DAC architectures is carried out and the precise architecture with minimum DNL and INL is chosen for the design procedure. This paper describes the design of a fully customized 9 bit SAR ADC with input voltage ranging from 0 to 2.5V and sampling frequency 16.67 KHz. The Hspice simulator is used for the simulations. Keywords— SAR ADC, Comparator, CADENCE, CMOS, DAC. [1] INTRODUCTION With the development of sensors, portable devices and high speed computing systems, comparable growth is seen in the optimization of Analog to digital converters (ADC) to assist in the technology growth. All the natural signals are analog and the present digital world requires the signal in digital format for storing, processing and transmitting, and thereby the ADC becomes an integral part of almost all electronic devices 8 . This leads to the need for power, area and speed optimized design of ADCs. There are different ADC architectures like Flash ADC, SAR ADC, sigma-delta ADC etc., with each having its own pros and cons. The designer selects the desired architecture according to the requirements 1 . Flash ADC is the fastest ADC structure where the output is obtained in a single cycle but requires a large number of resistors and comparators for the design. For an N-bit flash ADC 2 , 2^N resistors and 2^N-1 comparators are required, consuming a large amount of area and power. Modifications are done on the flash ADC to form the pipelined flash ADC, where the number of components can be reduced but the power consumption cannot be further reduced beyond a level. The sigma-delta ADC, or integrating type of ADC, is used when the resolution required is very high. This is the slowest architecture compared to other architectures. Design of sigma-delta requires analog design of an integrator circuit, making its design complex. The SAR ADC architecture gives the output in N cycles for an N-bit ADC. The SAR ADC, being one of the pioneer ADC architectures, is commonly used due to its good trade-off between area, power and speed, which is the required criterion for CMOS deep submicron circuits. The SAR ADC consists of a Track and Hold (TH) circuit, comparator, DAC and a SAR register and control logic. Figure 1 shows the block diagram of a SAR ADC. This paper is organized into six sections. Section II describes the analog design of TH and comparator. Section III compares the DAC architectures. Section IV explains the SAR logic. Section V gives the simulation results and section VI is the conclusion. Fig 1 Block Diagram of SAR ADC [2] ANALOG DESIGN OF TH AND COMPARATOR A. Track and Hold In general, a Sample and hold circuit or Track and Hold contains a switch and a capacitor. In the tracking mode, when the sampling signal (strobe pulse) is high and the switch is connected, it tracks the analog input signal 3 . Then, it holds the value when the sampling signal turns to low in the hold mode. In this case, the sample and hold provides a constant voltage at the input of the ADC during conversion 7 . Figure 2 shows a simple Track and hold circuit with an NMOS transistor as switch. The capacitance value is selected as 100pF and the aspect ratio of the transistor as 28 based on the design steps. Fig 2 Track and hold circuit B. Latched comparator A comparator with high resolution and high speed is the desired design criterion, and here a dynamic latched comparator topology and a self-biased open loop comparator topology are studied and implemented. From the comparison results, the best topology considering speed and better resolution is selected. Figure 3 shows a latched comparator. A static latch consumes static power, which is not attractive for low power applications. A major disadvantage of the latch is low resolution. Fig 3 Latched comparator C. Self-biased open loop comparator A self-biased open loop comparator is a differential input high gain amplifier with an output stage. A current mirror acts as the load for the differential pair and converts the double ended circuit to a single ended one. Since precise gain is not required for a comparator circuit, no compensation techniques are required 4 . Figure 4 shows a self-biased open loop comparator. The schematic of the circuit implementation and the simulation results show that the self-biased open loop comparator has better speed of operation compared to the latched comparator. The simulation results are tabulated below in Table 1. Though there are two capacitors in the open loop comparator resulting in more power consumption, its speed of operation and resolution are better compared to the latched comparator. So the open loop comparator circuit is selected for the design advancement. Both comparator designs are done based on a specific output current and slew rate. Fig 4 Self-biased open loop comparator Table 1 Comparator simulation results (conversion time, number of transistors, resolution, power): latched comparator 426.6ns, 11, 4mV, 80nW; self-biased open loop comparator 712.7ns, 10, 15mV, 58nW. [3] DAC ARCHITECTURE A. R-2R DAC The digital data bits are entered through the input lines (d0 to d(N-1)) and converted to an equivalent analog voltage (Vout) using the R/2R resistor network 5 . The R/2R network is built by a set of resistors of two values, with the values of one set being twice those of the other. Here, for simulation purposes, 1K and 2K resistors are used, thereby giving the R/2R ratio. Accuracy or precision of the DAC depends on the values of the resistors chosen; higher precision can be obtained with an exact match of the R/2R ratio. B. C-2C DAC The schematic diagram of the 3-bit C-2C ladder is shown in figure 4.3, which is similar to that of the R-2R type. The capacitor values are selected as 20 fF and 40 fF for C and 2C respectively, such that the impedance value of C is twice that of 2C. C. Charge scaling DAC The voltage division principle is the same as that of C-2C 6 . The value of the unit capacitance is selected as 20fF for the simulation purpose. In order to obtain precision between the capacitance values, parallel combinations of the unit capacitance are implemented for the binary weighted values. Compared to C-2C, the capacitance area is considerably large.", "title": "" }, { "docid": "8a33040d6464f7792b3eeee1e0760925", "text": "We live in a data abundance era. Availability of large volume of diverse multimedia data streams (ranging from video, to tweets, to activity, and to PM2.5) can now be used to solve many critical societal problems. 
Causal modeling across multimedia data streams is essential to reap the potential of this data. However, effective frameworks combining formal abstract approaches with practical computational algorithms for causal inference from such data are needed to utilize available data from diverse sensors. We propose a causal modeling framework that builds on data-driven techniques while emphasizing and including the appropriate human knowledge in causal inference. We show that this formal framework can help in designing a causal model with a systematic approach that facilitates framing sharper scientific questions, incorporating expert's knowledge as causal assumptions, and evaluating the plausibility of these assumptions. We show the applicability of the framework in a an important Asthma management application using meteorological and pollution data streams.", "title": "" }, { "docid": "390ebc9975960ff7a817efc8412bd8da", "text": "OBJECTIVE\nPhysical activity is critical for health, yet only about half of the U.S. adult population meets basic aerobic physical activity recommendations and almost a third are inactive. Mindfulness meditation is gaining attention for its potential to facilitate health-promoting behavior and may address some limitations of existing interventions for physical activity. However, little evidence exists on mindfulness meditation and physical activity. This study assessed whether mindfulness meditation is uniquely associated with physical activity in a nationally representative sample.\n\n\nMETHOD\nCross-sectional data from the adult sample (N = 34,525) of the 2012 National Health Interview Survey were analyzed. Logistic regression models tested whether past-year use of mindfulness meditation was associated with (a) inactivity and (b) meeting aerobic physical activity recommendations, after accounting for sociodemographics, another health-promoting behavior, and 2 other types of meditation. Data were weighted to represent the U.S. civilian, noninstitutionalized adult population.\n\n\nRESULTS\nAccounting for covariates, U.S. adults who practiced mindfulness meditation in the past year were less likely to be inactive and more likely to meet physical activity recommendations. Mindfulness meditation showed stronger associations with these indices of physical activity than the 2 other types of meditation.\n\n\nCONCLUSIONS\nThese results suggest that mindfulness meditation specifically, beyond meditation in general, is associated with physical activity in U.S adults. Future research should test whether intervening with mindfulness meditation-either as an adjunctive component or on its own-helps to increase or maintain physical activity. (PsycINFO Database Record", "title": "" } ]
scidocsrr
804095b9fb79beead40386361f793579
ISLES 2015 - A public evaluation benchmark for ischemic stroke lesion segmentation from multispectral MRI
[ { "docid": "15ef258e08dcc0fe0298c089fbf5ae1c", "text": "In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients - manually annotated by up to four raters - and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%-85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.", "title": "" } ]
[ { "docid": "f30caea55cb1800a569a2649d1f8e388", "text": "Naive Bayes (NB) is a popular machine learning tool for classification, due to its simplicity, high computational efficiency, and good classification accuracy, especially for high dimensional data such as texts. In reality, the pronounced advantage of NB is often challenged by the strong conditional independence assumption between attributes, which may deteriorate the classification performance. Accordingly, numerous efforts have been made to improve NB, by using approaches such as structure extension, attribute selection, attribute weighting, instance weighting, local learning and so on. In this paper, we propose a new Artificial Immune System (AIS) based self-adaptive attribute weighting method for Naive Bayes classification. The proposed method, namely AISWNB, uses immunity theory in artificial immune systems to search optimal attribute weight values, where self-adjusted weight values will alleviate the conditional independence assumption and help calculate the conditional probability in an accurate way. One noticeable advantage of AISWNB is that the unique immune system based evolutionary computation process, including initialization, clone, section, and mutation, ensures that AISWNB can adjust itself to the data without explicit specification of functional or distributional forms of the underlying model. As a result, AISWNB can obtain good attribute weight values during the learning process. Experiments and comparisons on 36 machine learning benchmark data sets and six image classification data sets demonstrate that AISWNB significantly outperforms its peers in classification accuracy, class probability estimation, and class ranking performance.", "title": "" }, { "docid": "19075b16bbae94d024e4cdeaa7f6427e", "text": "Nutrient timing is a popular nutritional strategy that involves the consumption of combinations of nutrients--primarily protein and carbohydrate--in and around an exercise session. Some have claimed that this approach can produce dramatic improvements in body composition. It has even been postulated that the timing of nutritional consumption may be more important than the absolute daily intake of nutrients. The post-exercise period is widely considered the most critical part of nutrient timing. Theoretically, consuming the proper ratio of nutrients during this time not only initiates the rebuilding of damaged muscle tissue and restoration of energy reserves, but it Does So in a supercompensated fashion that enhances both body composition and exercise performance. Several researchers have made reference to an anabolic \"window of opportunity\" whereby a limited time exists after training to optimize training-related muscular adaptations. However, the importance - and even the existence - of a post-exercise 'window' can vary according to a number of factors. Not only is nutrient timing research open to question in terms of applicability, but recent evidence has directly challenged the classical view of the relevance of post-exercise nutritional intake with respect to anabolism. 
Therefore, the purpose of this paper will be twofold: 1) to review the existing literature on the effects of nutrient timing with respect to post-exercise muscular adaptations, and; 2) to draw relevant conclusions that allow practical, evidence-based nutritional recommendations to be made for maximizing the anabolic response to exercise.", "title": "" }, { "docid": "e1b39e972eff71eb44b39f37e7a7b2f3", "text": "The maximum mean discrepancy (MMD) is a recently proposed test statistic for the two-sample test. Its quadratic time complexity, however, greatly hampers its availability to large-scale applications. To accelerate the MMD calculation, in this study we propose an efficient method called FastMMD. The core idea of FastMMD is to equivalently transform the MMD with shift-invariant kernels into the amplitude expectation of a linear combination of sinusoid components based on Bochner’s theorem and Fourier transform (Rahimi & Recht, 2007). Taking advantage of sampling the Fourier transform, FastMMD decreases the time complexity for MMD calculation from to , where N and d are the size and dimension of the sample set, respectively. Here, L is the number of basis functions for approximating kernels that determines the approximation accuracy. For kernels that are spherically invariant, the computation can be further accelerated to by using the Fastfood technique (Le, Sarlós, & Smola, 2013). The uniform convergence of our method has also been theoretically proved in both unbiased and biased estimates. We also provide a geometric explanation for our method, ensemble of circular discrepancy, which helps us understand the insight of MMD and we hope will lead to more extensive metrics for assessing the two-sample test task. Experimental results substantiate that the accuracy of FastMMD is similar to that of MMD and with faster computation and lower variance than existing MMD approximation methods.", "title": "" }, { "docid": "718e31eabfd386768353f9b75d9714eb", "text": "The mathematical structure of Sudoku puzzles is akin to hard constraint satisfaction problems lying at the basis of many applications, including protein folding and the ground-state problem of glassy spin systems. Via an exact mapping of Sudoku into a deterministic, continuous-time dynamical system, here we show that the difficulty of Sudoku translates into transient chaotic behavior exhibited by this system. We also show that the escape rate κ, an invariant of transient chaos, provides a scalar measure of the puzzle's hardness that correlates well with human difficulty ratings. Accordingly, η = -log₁₀κ can be used to define a \"Richter\"-type scale for puzzle hardness, with easy puzzles having 0 < η ≤ 1, medium ones 1 < η ≤ 2, hard with 2 < η ≤ 3 and ultra-hard with η > 3. To our best knowledge, there are no known puzzles with η > 4.", "title": "" }, { "docid": "002abd54753db9928d8e6832d3358084", "text": "State-of-the-art semantic role labelling systems require large annotated corpora to achieve full performance. Unfortunately, such corpora are expensive to produce and often do not generalize well across domains. Even in domain, errors are often made where syntactic information does not provide sufficient cues. In this paper, we mitigate both of these problems by employing distributional word representations gathered from unlabelled data. 
While straight-forward word representations of predicates and arguments improve performance, we show that further gains are achieved by composing representations that model the interaction between predicate and argument, and capture full argument spans.", "title": "" }, { "docid": "948257544ca485b689d8663aaba63c5d", "text": "This paper presents a new single-pass shadow mapping technique that achieves better quality than the approaches based on perspective warping, such as perspective, light-space, and trapezoidal shadow maps. The proposed technique is appropriate for real-time rendering of large virtual environments that include dynamic objects. By performing operations in camera space, this solution successfully handles the general and the dueling frustum cases and produces high-quality shadows even for extremely large scenes. This paper also presents a fast nonlinear projection technique for shadow map stretching that enables complete utilization of the shadow map by eliminating wastage. The application of stretching results in a significant reduction in unwanted perspective aliasing, commonly found in all shadow mapping techniques. Technique is compared with other shadow mapping techniques, and the benefits of the proposed method are presented. The proposed shadow mapping technique is simple and flexible enough to handle most of the special scenarios. An API for a generic shadow mapping solution is presented. This API simplifies the generation of fast and high-quality shadows.", "title": "" }, { "docid": "c6dd897653486add8699828a2a1f9ffb", "text": "Everyone wants to know one thing about a test suite: will it detect enough bugs? Unfortunately, in most settings that matter, answering this question directly is impractical or impossible. Software engineers and researchers therefore tend to rely on various measures of code coverage (where mutation testing is considered a form of syntactic coverage). A long line of academic research efforts have attempted to determine whether relying on coverage as a substitute for fault detection is a reasonable solution to the problems of test suite evaluation. This essay argues that the profusion of coverage-related literature is in part a sign of an underlying uncertainty as to what exactly it is that measuring coverage should achieve, as well as how we would know if it can, in fact, achieve it. We propose some solutions and mitigations, but the primary focus of this essay is to clarify the state of current confusions regarding this key problem for effective software testing.", "title": "" }, { "docid": "7462f38fa4f99595bdb04a4519f7d9e9", "text": "The use of Unmanned Aerial Vehicles (UAV) has been increasing over the last few years in many sorts of applications due mainly to the decreasing cost of this technology. One can see the use of the UAV in several civilian applications such as surveillance and search and rescue. Automatic detection of pedestrians in aerial images is a challenging task. The computing vision system must deal with many sources of variability in the aerial images captured with the UAV, e.g., low-resolution images of pedestrians, images captured at distinct angles due to the degrees of freedom that a UAV can move, the camera platform possibly experiencing some instability while the UAV flies, among others. In this work, we created and evaluated different implementations of Pattern Recognition Systems (PRS) aiming at the automatic detection of pedestrians in aerial images captured with multirotor UAV. 
The main goal is to assess the feasibility and suitability of distinct PRS implementations running on top of low-cost computing platforms, e.g., single-board computers such as the Raspberry Pi or regular laptops without a GPU. For that, we used four machine learning techniques in the feature extraction and classification steps, namely Haar cascade, LBP cascade, HOG + SVM and Convolutional Neural Networks (CNN). In order to improve the system performance (especially the processing time) and also to decrease the rate of false alarms, we applied the Saliency Map (SM) and Thermal Image Processing (TIP) within the segmentation and detection steps of the PRS. The classification results show the CNN to be the best technique with 99.7% accuracy, followed by HOG + SVM with 92.3%. In situations of partial occlusion, the CNN showed 71.1% sensitivity, which can be considered a good result in comparison with the current state-of-the-art, since part of the original image data is missing. As demonstrated in the experiments, by combining TIP with CNN, the PRS can process more than two frames per second (fps), whereas the PRS that combines TIP with HOG + SVM was able to process 100 fps. It is important to mention that our experiments show that a trade-off analysis must be performed during the design of a pedestrian detection PRS. The faster implementations lead to a decrease in the PRS accuracy. For instance, by using HOG + SVM with TIP, the PRS presented the best performance results, but the obtained accuracy was 35 percentage points lower than the CNN. The obtained results indicate that the best detection technique (i.e., the CNN) requires more computational resources to decrease the PRS computation time. Therefore, this work shows and discusses the pros/cons of each technique and trade-off situations, and hence, one can use such an analysis to improve and tailor the design of a PRS to detect pedestrians in aerial images.", "title": "" }, { "docid": "d79f92819d5485f2631897befd686416", "text": "Information visualization is meant to support the analysis and comprehension of (often large) datasets through techniques intended to show/enhance features, patterns, clusters and trends, not always visible even when using a graphical representation. During the development of information visualization techniques the designer has to take into account the users' tasks to choose the graphical metaphor as well as the interactive methods to be provided. Testing and evaluating the usability of information visualization techniques are still a research question, and methodologies based on real or experimental users often yield significant results. To be comprehensive, however, experiments with users must rely on a set of tasks that covers the situations a real user will face when using the visualization tool. The present work reports and discusses the results of three case studies conducted as Multi-dimensional In-depth Long-term Case studies. The case studies were carried out to investigate MILCs-based usability evaluation methods for visualization tools.", "title": "" }, { "docid": "9ce1401e072fc09749d12f9132aa6b1e", "text": "In many applications based on the use of unmanned aerial vehicles (UAVs), it is possible to establish a cluster of UAVs in which each UAV knows the other vehicle's position. Assuming that the common channel condition between any two nodes of UAVs is line-of-sight (LOS), the time and energy consumption for data transmission on each path that connecting two nodes may be estimated by a node itself. 
In this paper, we use a modified Bellman-Ford algorithm to find the best selection of relay nodes in order to minimize the time and energy consumption for data transmission between any UAV node in the cluster and the UAV acting as the cluster head. This algorithm is applied with a proposed cooperative MAC protocol that is compatible with the IEEE 802.11 standard. The evaluations under data saturation conditions illustrate noticeable benefits in successful packet delivery ratio, average delay, and in particular the cost of time and energy.", "title": "" }, { "docid": "55dee5bdc4ff8225ef3997616af92320", "text": "Clustered regularly interspaced short palindromic repeats (CRISPR) are hypervariable loci widely distributed in prokaryotes that provide acquired immunity against foreign genetic elements. Here, we characterize a novel Streptococcus thermophilus locus, CRISPR3, and experimentally demonstrate its ability to integrate novel spacers in response to bacteriophage. Also, we analyze CRISPR diversity and activity across three distinct CRISPR loci in several S. thermophilus strains. We show that both CRISPR repeats and cas genes are locus specific and functionally coupled. A total of 124 strains were studied, and 109 unique spacer arrangements were observed across the three CRISPR loci. Overall, 3,626 spacers were analyzed, including 2,829 for CRISPR1 (782 unique), 173 for CRISPR2 (16 unique), and 624 for CRISPR3 (154 unique). Sequence analysis of the spacers revealed homology and identity to phage sequences (77%), plasmid sequences (16%), and S. thermophilus chromosomal sequences (7%). Polymorphisms were observed for the CRISPR repeats, CRISPR spacers, cas genes, CRISPR motif, locus architecture, and specific sequence content. Interestingly, CRISPR loci evolved both via polarized addition of novel spacers after exposure to foreign genetic elements and via internal deletion of spacers. We hypothesize that the level of diversity is correlated with relative CRISPR activity and propose that the activity is highest for CRISPR1, followed by CRISPR3, while CRISPR2 may be degenerate. Globally, the dynamic nature of CRISPR loci might prove valuable for typing and comparative analyses of strains and microbial populations. Also, CRISPRs provide critical insights into the relationships between prokaryotes and their environments, notably the coevolution of host and viral genomes.", "title": "" }, { "docid": "4f6f225f978bbf00c20f80538dc12aad", "text": "A smart building is created when it is engineered, delivered and operated smart. The Internet of Things (IoT) is advancing a new breed of smart buildings enables operational systems that deliver more accurate and useful information for improving operations and providing the best experiences for tenants. Big Data Analytics framework analyze building data to uncover new insight capable of driving real value and greater performance. Internet of Things technologies enhance the situational awareness or “smartness” of service providers and consumers alike. There is a need for an integrated IoT Big Data Analytics framework to fill the research gap in the Big Data Analytics domain. This paper also presents a novel approach for mobile phone centric observation applied to indoor localization for smart buildings. 
The applicability of the framework of this paper is demonstrated with the help of a scenario involving the analysis of real-time smart building data for automatically managing the oxygen level, luminosity and smoke/hazardous gases in different parts of the smart building. Lighting control in smart buildings and homes can be automated by having computer controlled lights and blinds along with illumination sensors that are distributed in the building. This paper gives an overview of an approach that algorithmically sets up the control system that can automate any building without custom programming. The resulting system controls blinds to ensure even lighting and also adds artificial illumination to ensure light coverage remains adequate at all times of the day, adjusting for weather and seasons. The key contribution of this paper is the complex integration of Big Data Analytics and IoT for addressing the large volume and velocity challenge of real-time data in the smart building domain.", "title": "" }, { "docid": "46de8aa53a304c3f66247fdccbe9b39f", "text": "The effect of pH and electrochemical potential on copper uptake, xanthate adsorption and the hydrophobicity of sphalerite were studied from flotation practice point of view using electrochemical and micro-flotation techniques. Voltammetric studies conducted using the combination of carbon matrix composite (CMC) electrode and surface conduction (SC) electrode show that the kinetics of activation increases with decreasing activating pH. Controlling potential contact angle measurements conducted on a copper-activated SC electrode in xanthate solution with different pHs show that, xanthate adsorption occurs at acidic and alkaline pHs and renders the mineral surface hydrophobic. At near neutral pH, although xanthate adsorbs on Cu:ZnS, the mineral surface is hydrophilic. Microflotation tests confirm this finding. Cleaning reagent was used to improve the flotation response of sphalerite at near neutral pH.", "title": "" }, { "docid": "ddd4ccf3d68d12036ebb9e5b89cb49b8", "text": "This paper presents a modified FastSLAM approach for the specific application of radar sensors using the Doppler information to increase the localization and map accuracy. The developed approach is based on the FastSLAM 2.0 algorithm. It is shown how the FastSLAM 2.0 approach can be significantly improved by taking the Doppler information into account. Therefore, the modelled, so-called expected Doppler, and the measured Doppler are compared for every detection. Both, simulations and experiments on real world data show the increase in accuracy of the modified FastSLAM approach by incorporating the Doppler measurements of automotive radar sensors. The proposed algorithm is compared to the state-of-the-art FastSLAM 2.0 algorithm and the vehicle odometry, whereas profiles of an Automotive Dynamic Motion Analyzer serve as the reference.", "title": "" }, { "docid": "1e8caa9f0a189bafebd65df092f918bc", "text": "For several decades, the role of hormone-replacement therapy (HRT) has been debated. Early observational data on HRT showed many benefits, including a reduction in coronary heart disease (CHD) and mortality. More recently, randomized trials, including the Women's Health Initiative (WHI), studying mostly women many years after the the onset of menopause, showed no such benefit and, indeed, an increased risk of CHD and breast cancer, which led to an abrupt decrease in the use of HRT. 
Subsequent reanalyzes of data from the WHI with age stratification, newer randomized and observational data and several meta-analyses now consistently show reductions in CHD and mortality when HRT is initiated soon after menopause. HRT also significantly decreases the incidence of various symptoms of menopause and the risk of osteoporotic fractures, and improves quality of life. In younger healthy women (aged 50–60 years), the risk–benefit balance is positive for using HRT, with risks considered rare. As no validated primary prevention strategies are available for younger women (<60 years of age), other than lifestyle management, some consideration might be given to HRT as a prevention strategy as treatment can reduce CHD and all-cause mortality. Although HRT should be primarily oestrogen-based, no particular HRT regimen can be advocated.", "title": "" }, { "docid": "502a948fbf73036a4a1546cdd4a04833", "text": "The literature review is an established research genre in many academic disciplines, including the IS discipline. Although many scholars agree that systematic literature reviews should be rigorous, few instructional texts for compiling a solid literature review, at least with regard to the IS discipline, exist. In response to this shortage, in this tutorial, I provide practical guidance for both students and researchers in the IS community who want to methodologically conduct qualitative literature reviews. The tutorial differs from other instructional texts in two regards. First, in contrast to most textbooks, I cover not only searching and synthesizing the literature but also the challenging tasks of framing the literature review, interpreting research findings, and proposing research paths. Second, I draw on other texts that provide guidelines for writing literature reviews in the IS discipline but use many examples of published literature reviews. I use an integrated example of a literature review, which guides the reader through the overall process of compiling a literature review.", "title": "" }, { "docid": "2d0c16376e71989031b99f3e5d79025c", "text": "In this paper, we present a novel and general network structure towards accelerating the inference process of convolutional neural networks, which is more complicated in network structure yet with less inference complexity. The core idea is to equip each original convolutional layer with another low-cost collaborative layer (LCCL), and the element-wise multiplication of the ReLU outputs of these two parallel layers produces the layer-wise output. The combined layer is potentially more discriminative than the original convolutional layer, and its inference is faster for two reasons: 1) the zero cells of the LCCL feature maps will remain zero after element-wise multiplication, and thus it is safe to skip the calculation of the corresponding high-cost convolution in the original convolutional layer, 2) LCCL is very fast if it is implemented as a 1*1 convolution or only a single filter shared by all channels. Extensive experiments on the CIFAR-10, CIFAR-100 and ILSCRC-2012 benchmarks show that our proposed network structure can accelerate the inference process by 32% on average with negligible performance drop.", "title": "" }, { "docid": "408f58b7dd6cb1e6be9060f112773888", "text": "Semantic hashing has become a powerful paradigm for fast similarity search in many information retrieval systems. While fairly successful, previous techniques generally require two-stage training, and the binary constraints are handled ad-hoc. 
In this paper, we present an end-to-end Neural Architecture for Semantic Hashing (NASH), where the binary hashing codes are treated as Bernoulli latent variables. A neural variational inference framework is proposed for training, where gradients are directly backpropagated through the discrete latent variable to optimize the hash function. We also draw connections between the proposed method and rate-distortion theory, which provides a theoretical foundation for the effectiveness of the proposed framework. Experimental results on three public datasets demonstrate that our method significantly outperforms several state-of-the-art models in both unsupervised and supervised scenarios.", "title": "" }, { "docid": "a53065d1cfb1fe898182d540d65d394b", "text": "This paper presents a novel approach for detecting affine invariant interest points. Our method can deal with significant affine transformations including large scale changes. Such transformations introduce significant changes in the point location as well as in the scale and the shape of the neighbourhood of an interest point. Our approach allows us to solve these problems simultaneously. It is based on three key ideas: 1) the second moment matrix computed at a point can be used to normalize a region in an affine invariant way (skew and stretch); 2) the scale of the local structure is indicated by local extrema of normalized derivatives over scale; 3) an affine-adapted Harris detector determines the location of interest points. A multi-scale version of this detector is used for initialization. An iterative algorithm then modifies the location, scale and neighbourhood of each point and converges to affine invariant points. For matching and recognition, the image is characterized by a set of affine invariant points; the affine transformation associated with each point allows the computation of an affine invariant descriptor which is also invariant to affine illumination changes. A quantitative comparison of our detector with existing ones shows a significant improvement in the presence of large affine deformations. Experimental results for wide baseline matching show an excellent performance in the presence of large perspective transformations including significant scale changes. Results for recognition are very good for a database with more than 5000 images.", "title": "" }, { "docid": "7435d1591725bbcd86fe93c607d5683c", "text": "This study evaluated the role of breast magnetic resonance (MR) imaging in the selective study of breast implant integrity. We retrospectively analysed the signs of breast implant rupture observed at breast MR examinations of 157 implants and determined the sensitivity and specificity of the technique in diagnosing implant rupture by comparing MR data with findings at surgical explantation. The linguine and the salad-oil signs were statistically the most significant signs for diagnosing intracapsular rupture; the presence of siliconomas/seromas outside the capsule and/or in the axillary lymph nodes calls for immediate explantation. In agreement with previous reports, we found a close correlation between imaging signs and findings at explantation. Breast MR imaging can be considered the gold standard in the study of breast implants. The aim of our work was to evaluate the role of breast magnetic resonance (MR) imaging in the selective study of breast implant integrity. We performed a retrospective evaluation of the signs of rupture documented at MR examinations of 157 breast implants, in order to establish the sensitivity and specificity of the technique in the diagnosis of implant rupture, comparing the MR data with the findings observed in the operating room after removal of the implant. The linguine sign and the salad-oil sign proved to be the statistically most significant signs in the diagnosis of intracapsular implant rupture; the presence of extracapsular siliconomas/seromas and/or siliconomas/seromas in the axillary lymph nodes calls for immediate surgical removal of the ruptured implant. The data obtained demonstrate, in agreement with the literature, a correspondence between the imaging signs and the surgical findings, confirming the role of MR imaging as the gold standard in the study of breast implants.", "title": "" } ]
scidocsrr
f3b2ad7432bf90aa5661f18771fff878
A study of link farm distribution and evolution using a time series of web snapshots
[ { "docid": "880b4ce4c8fd19191cb996aceabdf5a7", "text": "The study of the web as a graph is not only fascinating in its own right, but also yields valuable insight into web algorithms for crawling, searching and community discovery, and the sociological phenomena which characterize its evolution. We report on experiments on local and global properties of the web graph using two Altavista crawls each with over 200 million pages and 1.5 billion links. Our study indicates that the macroscopic structure of the web is considerably more intricate than suggested by earlier experiments on a smaller scale.", "title": "" } ]
[ { "docid": "db31a8887bfc1b24c2d2c2177d4ef519", "text": "The equilibrium microstructure of a fluid may only be described exactly in terms of a complete set of n-body atomic distribution functions, where n is 1, 2, 3 , . . . , N, and N is the total number of particles in the system. The higher order functions, i. e. n > 2, are complex and practically inaccessible but con­ siderable qualitative information can already be derived from studies of the mean radial occupation function n(r) defined as the average number of atoms in a sphere of radius r centred on a particular atom. The function for a perfect gas of non-inter­ acting particles is", "title": "" }, { "docid": "40d9fb6ce396d3629f0406661b9bbd56", "text": "Internet traffic classification has been the subject of intensive study since the birth of the Internet itself. Indeed, the evolution of approaches for traffic classification can be associated with the evolution of the Internet itself and with the adoption of new services and the emergence of novel applications and communication paradigms. Throughout the years many approaches have been proposed for addressing technical issues imposed by such novel services. Deep-Packet Inspection (DPI) has been a very important research topic within the traffic classification field and its concept consists of the analysis of the contents of the captured packets in order to accurately and timely discriminate the traffic generated by different Internet protocols. DPI was devised as a means to address several issues associated with port-based and statistical-based classification approaches in order to achieve an accurate and timely traffic classification. Many research works proposed different DPI schemes while many open-source modules have also become available for deployment. Surveys become then valuable tools for performing an overall analysis, study and comparison between the several proposed methods. In this paper we present a survey in which a complete and thorough analysis of the most important open-source DPI modules is performed. Such analysis comprises an evaluation of the classification accuracy, through a common set of traffic traces with ground truth, and of the computational requirements. In this manner, this survey presents a technical assessment of DPI modules and the analysis of the obtained evaluation results enable the proposal of general guidelines for the design and implementation of more adequate DPI modules.", "title": "" }, { "docid": "9d241d577a06f7590af79c2444c91c9d", "text": "UNLABELLED\nResearch over the last few years has revealed significant haplotype structure in the human genome. The characterization of these patterns, particularly in the context of medical genetic association studies, is becoming a routine research activity. Haploview is a software package that provides computation of linkage disequilibrium statistics and population haplotype patterns from primary genotype data in a visually appealing and interactive interface.\n\n\nAVAILABILITY\nhttp://www.broad.mit.edu/mpg/haploview/\n\n\nCONTACT\njcbarret@broad.mit.edu", "title": "" }, { "docid": "e3f847a7c815772b909fcccbafed4af3", "text": "The contribution of tumorigenic stem cells to haematopoietic cancers has been established for some time, and cells possessing stem-cell properties have been described in several solid tumours. Although chemotherapy kills most cells in a tumour, it is believed to leave tumour stem cells behind, which might be an important mechanism of resistance. 
For example, the ATP-binding cassette (ABC) drug transporters have been shown to protect cancer stem cells from chemotherapeutic agents. Gaining a better insight into the mechanisms of stem-cell resistance to chemotherapy might therefore lead to new therapeutic targets and better anticancer strategies.", "title": "" }, { "docid": "7de99443d9d56dacb41d467609ef45cd", "text": "Aircraft detection from very high resolution (VHR) remote sensing images has been drawing increasing interest in recent years due to the successful civil and military applications. However, several challenges still exist: 1) extracting the high-level features and the hierarchical feature representations of the objects is difficult; 2) manual annotation of the objects in large image sets is generally expensive and sometimes unreliable; and 3) locating objects within such a large image is difficult and time consuming. In this paper, we propose a weakly supervised learning framework based on coupled convolutional neural networks (CNNs) for aircraft detection, which can simultaneously solve these problems. We first develop a CNN-based method to extract the high-level features and the hierarchical feature representations of the objects. We then employ an iterative weakly supervised learning framework to automatically mine and augment the training data set from the original image. We propose a coupled CNN method, which combines a candidate region proposal network and a localization network to extract the proposals and simultaneously locate the aircraft, which is more efficient and accurate, even in large-scale VHR images. In the experiments, the proposed method was applied to three challenging high-resolution data sets: the Sydney International Airport data set, the Tokyo Haneda Airport data set, and the Berlin Tegel Airport data set. The extensive experimental results confirm that the proposed method can achieve a higher detection accuracy than the other methods.", "title": "" }, { "docid": "fc94c6fb38198c726ab3b417c3fe9b44", "text": "Tremor is a rhythmical and involuntary oscillatory movement of a body part and it is one of the most common movement disorders. Orthotic devices have been under investigation as a noninvasive tremor suppression alternative to medication or surgery. The challenge in musculoskeletal tremor suppression is estimating and attenuating the tremor motion without impeding the patient's intentional motion. In this research a robust tremor suppression algorithm was derived for patients with pathological tremor in the upper limbs. First the motion in the tremor frequency range is estimated using a high-pass filter. Then, by applying the backstepping method the appropriate amount of torque is calculated to drive the output of the estimator toward zero. This is equivalent to an estimation of the tremor torque. It is shown that the arm/orthotic device control system is stable and the algorithm is robust despite inherent uncertainties in the open-loop human arm joint model. A human arm joint simulator, capable of emulating tremorous motion of a human arm joint was used to evaluate the proposed suppression algorithm experimentally for two types of tremor, Parkinson and essential. Experimental results show 30-42 dB (97.5-99.2%) suppression of tremor with minimal effect on the intentional motion.", "title": "" }, { "docid": "dd4a95cea1cdc0351276368d5228bb6e", "text": "Shape reconstruction from raw point sets is a hot research topic. 
Point sets are increasingly available as primary input source, since low-cost acquisition methods are largely accessible nowadays, and these sets are more noisy than used to be. Standard reconstruction methods rely on normals or signed distance functions, and thus many methods aim at estimating these features. Human vision can however easily discern between the inside and the outside of a dense cloud even without the support of fancy measures. We propose, here, a perceptual method for estimating an indicator function for the shape, inspired from image-based methods. The resulting function nicely approximates the shape, is robust to noise, and can be used for direct isosurface extraction or as an input for other accurate reconstruction methods.", "title": "" }, { "docid": "53e7c26ce6abc85d721b2f1661d1c3c0", "text": "For the detail mapping there are multiple methods that can be used. In Battlefield 2, a 256 m patch of the terrain could have up to six different tiling detail maps that were blended together using one or two three-component unique detail mask textures (Figure 4) that controlled the visibility of the individual detail maps. Artists would paint or generate the detail masks just as for the color map.", "title": "" }, { "docid": "dcc7f48a828556808dc435deda5c1281", "text": "Object detection and segmentation represents the basis for many tasks in computer and machine vision. In biometric recognition systems the detection of the region-of-interest (ROI) is one of the most crucial steps in the overall processing pipeline, significantly impacting the performance of the entire recognition system. Existing approaches to ear detection, for example, are commonly susceptible to the presence of severe occlusions, ear accessories or variable illumination conditions and often deteriorate in their performance if applied on ear images captured in unconstrained settings. To address these shortcomings, we present in this paper a novel ear detection technique based on convolutional encoder-decoder networks (CEDs). For our technique, we formulate the problem of ear detection as a two-class segmentation problem and train a convolutional encoder-decoder network based on the SegNet architecture to distinguish between image-pixels belonging to either the ear or the non-ear class. The output of the network is then post-processed to further refine the segmentation result and return the final locations of the ears in the input image. Different from competing techniques from the literature, our approach does not simply return a bounding box around the detected ear, but provides detailed, pixel-wise information about the location of the ears in the image. Our experiments on a dataset gathered from the web (a.k.a. in the wild) show that the proposed technique ensures good detection results in the presence of various covariate factors and significantly outperforms the existing state-of-the-art.", "title": "" }, { "docid": "3cbc035529138be1d6f8f66a637584dd", "text": "Regression models such as the Cox proportional hazards model have had increasing use in modelling and estimating the prognosis of patients with a variety of diseases. Many applications involve a large number of variables to be modelled using a relatively small patient sample. Problems of overfitting and of identifying important covariates are exacerbated in analysing prognosis because the accuracy of a model is more a function of the number of events than of the sample size. 
We used a general index of predictive discrimination to measure the ability of a model developed on training samples of varying sizes to predict survival in an independent test sample of patients suspected of having coronary artery disease. We compared three methods of model fitting: (1) standard 'step-up' variable selection, (2) incomplete principal components regression, and (3) Cox model regression after developing clinical indices from variable clusters. We found regression using principal components to offer superior predictions in the test sample, whereas regression using indices offers easily interpretable models nearly as good as the principal components models. Standard variable selection has a number of deficiencies.", "title": "" }, { "docid": "725e92f13cc7c03b890b5d2e7380b321", "text": "Developing algorithms for solving high-dimensional partial differential equations (PDEs) has been an exceedingly difficult task for a long time, due to the notoriously difficult problem known as “the curse of dimensionality”. This paper presents a deep learning-based approach that can handle general high-dimensional parabolic PDEs. To this end, the PDEs are reformulated as a control theory problem and the gradient of the unknown solution is approximated by neural networks, very much in the spirit of deep reinforcement learning with the gradient acting as the policy function. Numerical results on examples including the nonlinear Black-Scholes equation, the Hamilton-Jacobi-Bellman equation, and the Allen-Cahn equation suggest that the proposed algorithm is quite effective in high dimensions, in terms of both accuracy and speed. This opens up new possibilities in economics, finance, operational research, and physics, by considering all participating agents, assets, resources, or particles together at the same time, instead of making ad hoc assumptions on their inter-relationships.", "title": "" }, { "docid": "264d5db966f9cbed6b128087c7e3761e", "text": "We study auction mechanisms for sharing spectrum among a group of users, subject to a constraint on the interference temperature at a measurement point. The users access the channel using spread spectrum signaling and so interfere with each other. Each user receives a utility that is a function of the received signal-to-interference plus noise ratio. We propose two auction mechanisms for allocating the received power. The first is an auction in which users are charged for received SINR, which, when combined with logarithmic utilities, leads to a weighted max-min fair SINR allocation. The second is an auction in which users are charged for power, which maximizes the total utility when the bandwidth is large enough and the receivers are co-located. Both auction mechanisms are shown to be socially optimal for a limiting “large system” with co-located receivers, where bandwidth, power and the number of users are increased in fixed proportion. We also formulate an iterative and distributed bid updating algorithm, and specify conditions under which this algorithm converges globally to the Nash equilibrium of the auction.", "title": "" }, { "docid": "dc66c67cb33e405a548b0ec665df547f", "text": "This paper presents a deep learning method for faster magnetic resonance imaging (MRI) by reducing k-space data with sub-Nyquist sampling strategies and provides a rationale for why the proposed approach works well. 
Uniform subsampling is used in the time-consuming phase-encoding direction to capture high-resolution image information, while permitting the image-folding problem dictated by the Poisson summation formula. To deal with the localization uncertainty due to image folding, a small number of low-frequency k-space data are added. Training the deep learning net involves input and output images that are pairs of the Fourier transforms of the subsampled and fully sampled k-space data. Our experiments show the remarkable performance of the proposed method; only 29[Formula: see text] of the k-space data can generate images of high quality as effectively as standard MRI reconstruction with the fully sampled data.", "title": "" }, { "docid": "852391aa93e00f9aebdbc65c2e030abf", "text": "The iSTAR Micro Air Vehicle (MAV) is a unique 9-inch diameter ducted air vehicle weighing approximately 4 lb. The configuration consists of a ducted fan with control vanes at the duct exit plane. This VTOL aircraft not only hovers, but it can also fly at high forward speed by pitching over to a near horizontal attitude. The duct both increases propulsion efficiency and produces lift in horizontal flight, similar to a conventional planar wing. The vehicle is controlled using a rate based control system with piezo-electric gyroscopes. The Flight Control Computer (FCC) processes the pilot’s commands and the rate data from the gyroscopes to stabilize and control the vehicle. First flight of the iSTAR MAV was successfully accomplished in October 2000. Flight at high pitch angles and high speed took place in November 2000. This paper describes the vehicle, control system, and ground and flight-test results . Presented at the American Helicopter Society 57 Annual forum, Washington, DC, May 9-11, 2001. Copyright  2001 by the American Helicopter Society International, Inc. All rights reserved. Introduction The Micro Craft Inc. iSTAR is a Vertical Take-Off and Landing air vehicle (Figure 1) utilizing ducted fan technology to hover and fly at high forward speed. The duct both increases the propulsion efficiency and provides direct lift in forward flight similar to a conventional planar wing. However, there are many other benefits inherent in the iSTAR design. In terms of safety, the duct protects personnel from exposure to the propeller. The vehicle also has a very small footprint, essentially a circle equal to the diameter of the duct. This is beneficial for stowing, transporting, and in operations where space is critical, such as on board ships. The simplicity of the design is another major benefit. The absence of complex mechanical systems inherent in other VTOL designs (e.g., gearboxes, articulating blades, and counter-rotating propellers) benefits both reliability and cost. Figure 1: iSTAR Micro Air Vehicle The Micro Craft iSTAR VTOL aircraft is able to both hover and fly at high speed by pitching over towards a horizontal attitude (Figure 2). Although many aircraft in history have utilized ducted fans, most of these did not attempt to transition to high-speed forward flight. One of the few aircraft that did successfully transition was the Bell X-22 (Reference 1), first flown in 1965. The X-22, consisted of a fuselage and four ducted fans that rotated relative to the fuselage to transition the vehicle forward. The X-22 differed from the iSTAR in that its fuselage remained nearly level in forward flight, and the ducts rotated relative to the fuselage. 
Also planar tandem wings, not the ducts themselves, generated a large portion of the lift in forward flight. 1 Micro Craft Inc. is a division of Allied Aerospace Industry Incorporated (AAII) One of the first aircraft using an annular wing for direct lift was the French Coleoptère (Reference 1) built in the late 1950s. This vehicle successfully completed transition from hovering flight using an annular wing, however a ducted propeller was not used. Instead, a single jet engine was mounted inside the center-body for propulsion. Control was achieved by deflecting vanes inside the jet exhaust, with small external fins attached to the duct, and also with deployable strakes on the nose. Figure 2: Hover & flight at forward speed Less well-known are the General Dynamics ducted-fan Unmanned Air Vehicles, which were developed and flown starting in 1960 with the PEEK (Reference 1) aircraft. These vehicles, a precursor to the Micro Craft iSTAR, demonstrated stable hover and low speed flight in free-flight tests, and transition to forward flight in tethered ground tests. In 1999, Micro Craft acquired the patent, improved and miniaturized the design, and manufactured two 9-inch diameter flight test vehicles under DARPA funding (Reference 1). Working in conjunction with BAE systems (formerly Lockheed Sanders) and the Army/NASA Rotorcraft Division, these vehicles have recently completed a proof-ofconcept flight test program and have been demonstrated to DARPA and the US Army. Military applications of the iSTAR include intelligence, surveillance, target acquisition, and reconnaissance. Commercial applications include border patrol, bridge inspection, and police surveillance. Vehicle Description The iSTAR is composed of four major assemblies as shown in Figure 3: (1) the upper center-body, (2) the lower center body, (3) the duct, and (4) the landing ring. The majority of the vehicle’s structure is composed of Kevlar composite material resulting in a very strong and lightweight structure. Kevlar also lacks the brittleness common to other composite materials. Components that are not composite include the engine bulkhead (aluminum) and the landing ring (steel wire). The four major assemblies are described below. The upper center-body (UCB) is cylindrical in shape and contains the engine, engine controls, propeller, and payload. Three sets of hollow struts support the UCB and pass fuel and wiring to the duct. The propulsion Hover Low Speed High Speed system is a commercial-off-the-shelf (COTS) OS-32 SX single cylinder engine. This engine develops 1.2 hp and weighs approximately 250 grams (~0.5 lb.). Fuel consists of a mixture of alcohol, nitro-methane, and oil. The fixed-pitch propeller is attached directly to the engine shaft (without a gearbox). Starting the engine is accomplished by inserting a cylindrical shaft with an attached gear into the upper center-body and meshing it with a gear fit onto the propeller shaft (see Figure 4). The shaft is rotated using an off-board electric starter (Micro Craft is also investigating on-board starting systems). Figure 3: iSTAR configuration A micro video camera is mounted inside the nose cone, which is easily removable to accommodate modular payloads. The entire UCB can be removed in less than five minutes by removing eight screws securing the struts, and then disconnecting one fuel line and one electrical connector. Figure 4: Engine starting The lower center-body (LCB) is cylindrical in shape and is supported by eight stators. 
The sensor board is housed in the LCB, and contains three piezo-electric gyroscopes, three accelerometers, a voltage regulator, and amplifiers. The sensor signals are routed to the processor board in the duct via wires integrated into the stators. The duct is nine inches in diameter and contains a significant amount of volume for packaging. The fuel tank, Flight Control Computer (FCC), voltage regulator, batteries, servos, and receiver are all housed inside the duct. Fuel is contained in the leading edge of the duct. This tank is non-structural, and easily removable. It is attached to the duct with tape. Internal to the duct are eight fixed stators. The angle of the stators is set so that they produce an aerodynamic rolling moment countering the torque of the engine. Control vanes are attached to the trailing edge of the stators, providing roll, yaw, and pitch control. Four servos mounted inside the duct actuate the control vanes. Many different landing systems have been studied in the past. These trade studies have identified the landing ring as superior overall to other systems. The landing ring stabilizes the vehicle in close proximity to the ground by providing a restoring moment in dynamic situations. For example, if the vehicle were translating slowly and contacted the ground, the ring would pitch the vehicle upright. The ring also reduces blockage of the duct during landing and take-off by raising the vehicle above the ground. Blocking the duct can lead to reduced thrust and control power. Landing feet have also been considered because of their reduced weight. However, landing ‘feet’ lack the self-stabilizing characteristics of the ring in dynamic situations and tend to ‘catch’ on uneven surfaces. Electronics and Control System The Flight Control Computer (FCC) is housed in the duct (Figure 5). The computer processes the sensor output and pilot commands and generates pulse width modulated (PWM) signals to drive the servos. Pilot commands are generated using two conventional joysticks. The left joystick controls throttle position and heading. The right joystick controls pitch and yaw rate. The aircraft axis system is defined such that the longitudinal axis is coaxial with the engine shaft. Therefore, in hover the pitch attitude is 90 degrees and rolling the aircraft produces a heading change. Dedicated servos are used for pitch and yaw control. However, all control vanes are used for roll control (four quadrant roll control). The FCC provides the appropriate mixing for each servo. In each axis, the control system architecture consists of a conventional Proportional-Integral-Derivative (PID) controller with single-input and single-output. Initially, an attitude-based control system was desired; however, due to the lack of acceleration information and the high gyroscope drift rates, accurate attitudes could not be calculated. For this reason, a rate system was ultimately implemented. Three Murata micro piezo-electric gyroscopes provide rates about all three axes. These gyroscopes are approximately 0.6”x0.3”x0.15” in size and weigh 1 gram each (Figure 6). Figure 5: Flight Control Computer Four COTS servos are located in the duct to actuate the control surfaces. Each servo weighs 28 grams and is 1.3”x1.3”x0.6” in size. Relative to typical UAV servos, they can generate high rates, but have low bandwidth. 
Bandwidth is defined by how high a frequency the servo can accurately follow an input signal. For all servos, the output lags behind the input and the signal degrades in magnitude as the frequency increases. At low frequency, the iSTAR MAV servo output signal lags by approximately 30°,", "title": "" }, { "docid": "451110458791809898c854991a073119", "text": "This paper considers the problem of face detection in first attempt using haar cascade classifier from images containing simple and complex backgrounds. It is one of the best detector in terms of reliability and speed. Experiments were carried out on standard database i.e. Indian face database (IFD) and Caltech database. All images are frontal face images because side face views are harder to detect with this technique. Opencv 2.4.2 is used to implement the haar cascade classifier. We achieved 100% face detection rate on Indian database containing simple background and 93.24% detection rate on Caltech database containing complex background. Haar cascade classifier provides high accuracy even the images are highly affected by the illumination. The haar cascade classifier has shown superior performance with simple background images.", "title": "" }, { "docid": "49a13503920438f546822b344ad68d58", "text": "OBJECTIVES\nThe determination of cholinesterase activity has been commonly applied in the biomonitoring of exposure to organophosphates and carbamates and in the diagnosis of poisoning with anticholinesterase compounds. One of the groups who are at risk of pesticide intoxication are the workers engaged in the production of these chemicals.\n\n\nAIMS\nThe aim of this study was to assess the effect of pesticides on erythrocyte and serum cholinesterase activity in workers occupationally exposed to these chemicals.\n\n\nMETHODS\nThe subjects were 63 workers at a pesticide plant. Blood samples were collected before they were employed (phase I) and after 3 months of working in the plant (phase II). Cholinesterase level in erythrocytes (EChE) was determined using the modified Ellman method, and serum cholinesterase (SChE) by butyrylthiocholine substrate assay.\n\n\nRESULTS\nThe mean EChE levels were 48+/-11 IU/g Hb in phase I and 37+/-17 IU/g Hb in phase II (paired t-test, mean=-29; 95% CI=-43-14), p<0.001). The mean SChE level was 9569+/-2496 IU/l in phase I, and 7970+/-2067 IU/l in phase II (paired t-test, mean=1599; 95% CI=1140-2058, p<0.001). There was a significant increase in ALT level (p < 0.001) and a decrease in serum albumin level (p<0.001).\n\n\nCONCLUSION\nIn view of the significant decrease in EChE and SChE levels among pesticide workers, it seems that routine assessment of cholinesterase level in workers employed in such occupations and people handling pesticides should be made obligatory.", "title": "" }, { "docid": "7b215780b323aa3672d34ca243b1cf46", "text": "In this paper, we study the problem of semantic annotation on 3D models that are represented as shape graphs. A functional view is taken to represent localized information on graphs, so that annotations such as part segment or keypoint are nothing but 0-1 indicator vertex functions. Compared with images that are 2D grids, shape graphs are irregular and non-isomorphic data structures. To enable the prediction of vertex functions on them by convolutional neural networks, we resort to spectral CNN method that enables weight sharing by parametrizing kernels in the spectral domain spanned by graph Laplacian eigenbases. 
Under this setting, our network, named SyncSpecCNN, strives to overcome two key challenges: how to share coefficients and conduct multi-scale analysis in different parts of the graph for a single shape, and how to share information across related but different shapes that may be represented by very different graphs. Towards these goals, we introduce a spectral parametrization of dilated convolutional kernels and a spectral transformer network. Experimentally we tested SyncSpecCNN on various tasks, including 3D shape part segmentation and keypoint prediction. State-of-the-art performance has been achieved on all benchmark datasets.", "title": "" }, { "docid": "a7f535275801ee4ed9f83369f416c408", "text": "A recent development in text compression is a “block sorting” algorithm which permutes the input text according to a special sort procedure and then processes the permuted text with Move-to-Front and a final statistical compressor. The technique combines good speed with excellent compression performance. This paper investigates the fundamental operation of the algorithm and presents some improvements based on that analysis. Although block sorting is clearly related to previous compression techniques, it appears that it is best described by techniques derived from work by Shannon in 1951 on the prediction and entropy of English text. A simple model is developed which relates the compression to the proportion of zeros after the MTF stage. Short Title Block Sorting Text Compression Author Peter M. Fenwick Affiliation Department of Computer Science The University of Auckland Private Bag 92019 Auckland, New Zealand. Postal Address Dr P.M. Fenwick Dept of Computer Science The University of Auckland Private Bag 92019 Auckland New Zealand. E-mail p_fenwick@cs.auckland.ac.nz Telephone + 64 9 373 7599 ext 8298", "title": "" }, { "docid": "65d00120929fe519a64ad50392a23924", "text": "A compact printed UWB MIMO antenna with a 5.8 GHz band-notch is presented. The two antennas are located on the two opposite sides of a Printed-Circuits-Board (PCB), separated by a spacing of 13.2 mm and a small isolated element, which provides a good isolation. The antenna structure adopts coupled and parasitic modes to form multi-modal resonance that results in the desired ultra-wideband operation. There is a parasitic slit embedded on the main radiator and an isolated element employed between the two antennas. An excellent desired band-notched UWB characteristic was obtained by care design of the parasitic slit. The overall size of the proposed antenna is mere 40.2×54×0.8 mm; the radiation patterns of the two antennas cover the complementary space of 180o; the antenna yields peak gains varied from 5 to 8 dBi, and antenna radiation efficiency exceeding about 70~90 % over the operation band. The antenna port Envelope Correlation Coefficient (ECC) was less than about 0.07. Moreover, the antenna is easy to fabricate and suitable for any wireless modules applications at the UWB band.", "title": "" }, { "docid": "d846d16aac9067c82dc85b9bc17756e0", "text": "We present a novel solution to improve the performance of Chinese word segmentation (CWS) using a synthetic word parser. The parser analyses the internal structure of words, and attempts to convert out-of-vocabulary words (OOVs) into in-vocabulary fine-grained sub-words. We propose a pipeline CWS system that first predicts this fine-grained segmentation, then chunks the output to reconstruct the original word segmentation standard. 
We achieve competitive results on the PKU and MSR datasets, with substantial improvements in OOV recall.", "title": "" } ]
scidocsrr
44ed214d3eb52e6b51e2b434d9f918c3
A segmented topic model based on the two-parameter Poisson-Dirichlet process
[ { "docid": "53be2c41da023d9e2380e362bfbe7cce", "text": "A rich and flexible class of random probability measures, which we call stick-breaking priors, can be constructed using a sequence of independent beta random variables. Examples of random measures that have this characterization include the Dirichlet process, its two-parameter extension, the two-parameter Poisson–Dirichlet process, finite dimensional Dirichlet priors, and beta two-parameter processes. The rich nature of stick-breaking priors offers Bayesians a useful class of priors for nonparametric problems, while the similar construction used in each prior can be exploited to develop a general computational procedure for fitting them. In this article we present two general types of Gibbs samplers that can be used to fit posteriors of Bayesian hierarchical models based on stick-breaking priors. The first type of Gibbs sampler, referred to as a Pólya urn Gibbs sampler, is a generalized version of a widely used Gibbs sampling method currently employed for Dirichlet process computing. This method applies to stick-breaking priors with a known Pólya urn characterization, that is, priors with an explicit and simple prediction rule. Our second method, the blocked Gibbs sampler, is based on an entirely different approach that works by directly sampling values from the posterior of the random measure. The blocked Gibbs sampler can be viewed as a more general approach because it works without requiring an explicit prediction rule. We find that the blocked Gibbs avoids some of the limitations seen with the Pólya urn approach and should be simpler for nonexperts to use.", "title": "" } ]
[ { "docid": "f1052f4704b5ec55e2a131dc2f2d6afc", "text": "A simple control for a permanent motor drive is described which provides a wide speed range without the use of a shaft sensor. Two line-to-line voltages and two stator currents are sensed and processed in analog form to produce the stator flux linkage space vector. The angle of this vector is then used in a microcontroller to produce the appropriate stator current command signals for the hysteresis current controller of the inverter so that near unity power factor can be achieved over a wide range of torque and speed. A speed signal is derived from the rate of change of angle of the flux linkage. A drift compensation program is proposed to avoid calculation errors in the determination of angle position and speed. The control system has been implemented on a 5 kW motor using Nd-Fe-B magnets. The closed loop speed control has been shown to be effective down to a frequency of less than 1 Hz, thus providing a wide range of speed control. An open loop starting program is used to accelerate the motor up to this limit frequency with minimum speed oscillation.<<ETX>>", "title": "" }, { "docid": "fb204d2f9965d17ed87c8fe8d1f22cdd", "text": "Are metaphors departures from a norm of literalness? According to classical rhetoric and most later theories, including Gricean pragmatics, they are. No, metaphors are wholly normal, say the Romantic critics of classical rhetoric and a variety of modern scholars ranging from hard-nosed cognitive scientists to postmodern critical theorists. On the metaphor-as-normal side, there is a broad contrast between those, like the cognitive linguists Lakoff, Talmy or Fauconnier, who see metaphor as pervasive in language because it is constitutive of human thought, and those, like the psycholinguists Glucksberg or Kintsch, or relevance theorists, who describe metaphor as emerging in the process of verbal communication. 1 While metaphor cannot be both wholly normal and a departure from normal language use, there might be distinct, though related, metaphorical phenomena at the level of thought, on the one hand, and verbal communication, on the other. This possibility is being explored (for instance) in the work of Raymond Gibbs. 2 In this chapter, we focus on the relevance-theoretic approach to linguistic metaphors.", "title": "" }, { "docid": "ab0154cea907abbb26d074496c856bd7", "text": "So far, empirically grounded studies, which compare the phenomena of e-commerce and e-government, have been in short supply. However, such studies it has been argued would most likely deepen the understanding of the sector-specific similarities and differences leading to potential cross-fertilization between the two sectors as well as to the establishment of performance measures and success criteria. This paper reports on the findings of an empirical research pilot, which is the first in a series of planned exploratory and theory-testing studies on the subject", "title": "" }, { "docid": "d76246dfee7e2f3813e025ac34ffc354", "text": "Web usage mining is application of data mining techniques to discover usage patterns from web data, in order to better serve the needs of web based applications. The user access log files present very significant information about a web server. This paper is concerned with the in-depth analysis of Web Log Data of NASA website to find information about a web site, top errors, potential visitors of the site etc. 
which help system administrator and Web designer to improve their system by determining occurred systems errors, corrupted and broken links by using web using mining. The obtained results of the study will be used in the further development of the web site in order to increase its effectiveness.", "title": "" }, { "docid": "24c1b31bac3688c901c9b56ef9a331da", "text": "Advanced Persistent Threats (APTs) are a new breed of internet based smart threats, which can go undetected with the existing state of-the-art internet traffic monitoring and protection systems. With the evolution of internet and cloud computing, a new generation of smart APT attacks has also evolved and signature based threat detection systems are proving to be futile and insufficient. One of the essential strategies in detecting APTs is to continuously monitor and analyze various features of a TCP/IP connection, such as the number of transferred packets, the total count of the bytes exchanged, the duration of the TCP/IP connections, and details of the number of packet flows. The current threat detection approaches make extensive use of machine learning algorithms that utilize statistical and behavioral knowledge of the traffic. However, the performance of these algorithms is far from satisfactory in terms of reducing false negatives and false positives simultaneously. Mostly, current algorithms focus on reducing false positives, only. This paper presents a fractal based anomaly classification mechanism, with the goal of reducing both false positives and false negatives, simultaneously. A comparison of the proposed fractal based method with a traditional Euclidean based machine learning algorithm (k-NN) shows that the proposed method significantly outperforms the traditional approach by reducing false positive and false negative rates, simultaneously, while improving the overall classification rates.", "title": "" }, { "docid": "e2459b9991cfda1e81119e27927140c5", "text": "This research demo describes the implementation of a mobile AR-supported educational course application, AR Circuit, which is designed to promote the effectiveness of remote collaborative learning for physics. The application employs the TCP/IP protocol enabling multiplayer functionality in a mobile AR environment. One phone acts as the server and the other acts as the client. The server phone will capture the video frames, process the video frame, and send the current frame and the markers transformation matrices to the client phone.", "title": "" }, { "docid": "f2a2f1e8548cc6fcff6f1d565dfa26c9", "text": "Cabbage contains the glucosinolate sinigrin, which is hydrolyzed by myrosinase to allyl isothiocyanate. Isothiocyanates are thought to inhibit the development of cancer cells by a number of mechanisms. The effect of cooking cabbage on isothiocyanate production from glucosinolates during and after their ingestion was examined in human subjects. Each of 12 healthy human volunteers consumed three meals, at 48-h intervals, containing either raw cabbage, cooked cabbage, or mustard according to a cross-over design. At each meal, watercress juice, which is rich in phenethyl isothiocyanate, was also consumed to allow individual and temporal variation in postabsorptive isothiocyanate recovery to be measured. Volunteers recorded the time and volume of each urination for 24 h after each meal. Samples of each urination were analyzed for N-acetyl cysteine conjugates of isothiocyanates as a measure of entry of isothiocyanates into the peripheral circulation. 
Excretion of isothiocyanates was rapid and substantial after ingestion of mustard, a source of preformed allyl isothiocyanate. After raw cabbage consumption, allyl isothiocyanate was again rapidly excreted, although to a lesser extent than when mustard was consumed. On the cooked cabbage treatment, excretion of allyl isothiocyanate was considerably less than for raw cabbage, and the excretion was delayed. The results indicate that isothiocyanate production is more extensive after consumption of raw vegetables but that isothiocyanates still arise, albeit to a lesser degree, when cooked vegetables are consumed. The lag in excretion on the cooked cabbage treatment suggests that the colon microflora catalyze glucosinolate hydrolysis in this case.", "title": "" }, { "docid": "30fb0e394f6c4bf079642cd492229b67", "text": "Although modern communications services are susceptible to third-party eavesdropping via a wide range of possible techniques, law enforcement agencies in the US and other countries generally use one of two technologies when they conduct legally-authorized interception of telephones and other communications traffic. The most common of these, designed to comply with the 1994 Communications Assistance for Law Enforcement Act (CALEA), uses a standard interface provided in network switches.\n This paper analyzes the security properties of these interfaces. We demonstrate that the standard CALEA interfaces are vulnerable to a range of unilateral attacks by the intercept target. In particular, because of poor design choices in the interception architecture and protocols, our experiments show it is practical for a CALEA-tapped target to overwhelm the link to law enforcement with spurious signaling messages without degrading her own traffic, effectively preventing call records as well as content from being monitored or recorded. We also identify stop-gap mitigation strategies that partially mitigate some of our identified attacks.", "title": "" }, { "docid": "b882d6bc42e34506ba7ab26ed44d9265", "text": "Production datacenters operate under various uncertainties such as traffic dynamics, topology asymmetry, and failures. Therefore, datacenter load balancing schemes must be resilient to these uncertainties; i.e., they should accurately sense path conditions and react in a timely manner to mitigate the fallouts. Despite significant efforts, prior solutions have important drawbacks. On the one hand, solutions such as Presto and DRB are oblivious to path conditions and blindly reroute at fixed granularity. On the other hand, solutions such as CONGA and CLOVE can sense congestion, but they can only reroute when flowlets emerge; thus, they cannot always react to uncertainties in time. To make things worse, these solutions fail to detect/handle failures such as blackholes and random packet drops, which greatly degrades their performance. In this paper, we introduce Hermes, a datacenter load balancer that is resilient to the aforementioned uncertainties. At its heart, Hermes leverages comprehensive sensing to detect path conditions, including failures unattended before, and it reacts using timely yet cautious rerouting. Hermes is a practical edge-based solution with no switch modification. We have implemented Hermes with commodity switches and evaluated it through both testbed experiments and large-scale simulations. 
Our results show that Hermes achieves comparable performance to CONGA and Presto in normal cases, and handles uncertainties well: under asymmetries, Hermes achieves up to 10% and 20% better flow completion time (FCT) than CONGA and CLOVE; under switch failures, it outperforms all other schemes by over 32%.", "title": "" }, { "docid": "1cd860c1fd2df1a773f2324af324e72a", "text": "Network anomaly detection is an important and dynamic research area. Many network intrusion detection methods and systems (NIDS) have been proposed in the literature. In this paper, we provide a structured and comprehensive overview of various facets of network anomaly detection so that a researcher can become quickly familiar with every aspect of network anomaly detection. We present attacks normally encountered by network intrusion detection systems. We categorize existing network anomaly detection methods and systems based on the underlying computational techniques used. Within this framework, we briefly describe and compare a large number of network anomaly detection methods and systems. In addition, we also discuss tools that can be used by network defenders and datasets that researchers in network anomaly detection can use. We also highlight research directions in network anomaly detection.", "title": "" }, { "docid": "5100ef5ffa501eb7193510179039cd82", "text": "The interplay between caching and HTTP Adaptive Streaming (HAS) is known to be intricate, and possibly detrimental to QoE. In this paper, we make the case for caching-aware rate decision algorithms at the client side which do not require any collaboration with the cache or server. To this goal, we introduce an optimization model which allows us to compute the optimal rate decisions in the presence of a cache, and compare the current main representatives of HAS algorithms (RBA and BBA) to this optimal. This allows us to assess how far from the optimal these versions are, and on which to build a caching-aware rate decision algorithm.", "title": "" }, { "docid": "678bcac5e2cc072ecdd4290ad7f4d769", "text": "Health insurance companies in Brazil have their data about claims organized having the view only for providers. In this way, they lose the physician view and how they share patients. Partnership between physicians can be viewed as fruitful in most cases, but it can sometimes be a problem for health insurance companies and patients, for example a recommendation to visit another physician only because they work in the same clinic. The focus of the work is to better understand physicians' activities and how these activities are represented in the data. Our approach considers three aspects: the relationships among physicians, the relationships between physicians and patients, and the relationships between physicians and health providers. We present the results of an analysis of a claims database (detailing 18 months of activity) from a large health insurance company in Brazil. The main contribution presented in this paper is a set of models to represent: mutual referral between physicians, patient retention, and physician centrality in the health insurance network. Our results show that the proposed models, based on social network frameworks, extract surprising insights about physicians from real health insurance claims data.", "title": "" }, { "docid": "8edcb0c2c5f4732a8c06121b8d774b44", "text": "We propose a novel scene graph generation model called Graph R-CNN, which is both effective and efficient at detecting objects and their relations in images. 
Our model contains a Relation Proposal Network (RePN) that efficiently deals with the quadratic number of potential relations between objects in an image. We also propose an attentional Graph Convolutional Network (aGCN) that effectively captures contextual information between objects and relations. Finally, we introduce a new evaluation metric that is more holistic and realistic than existing metrics. We report state-of-the-art performance on scene graph generation as evaluated using both existing and our proposed metrics.", "title": "" }, { "docid": "9c510d7ddeb964c5d762d63d9e284f44", "text": "This paper explains the rationale for the development of reconfigurable manufacturing systems, which possess the advantages both of dedicated lines and of flexible systems. The paper defines the core characteristics and design principles of reconfigurable manufacturing systems (RMS) and describes the structure recommended for practical RMS with RMS core characteristics. After that, a rigorous mathematical method is introduced for designing RMS with this recommended structure. An example is provided to demonstrate how this RMS design method is used. The paper concludes with a discussion of reconfigurable assembly systems. © 2011 The Society of Manufacturing Engineers. Published by Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "404fdd6f2d7f1bf69f2f010909969fa9", "text": "Many applications in Multilingual and Multimodal Information Access involve searching large databases of high dimensional data objects with multiple (conditionally independent) views. In this work we consider the problem of learning hash functions for similarity search across the views for such applications. We propose a principled method for learning a hash function for each view given a set of multiview training data objects. The hash functions map similar objects to similar codes across the views thus enabling cross-view similarity search. We present results from an extensive empirical study of the proposed approach which demonstrate its effectiveness on Japanese language People Search and Multilingual People Search problems.", "title": "" }, { "docid": "f1132d786a6384e3c1a6db776922ee69", "text": "The analysis of forensic investigation results has generally been identified as the most complex phase of a digital forensic investigation. This phase becomes more complicated and time consuming as the storage capacity of digital devices is increasing, while at the same time the prices of those devices are decreasing. Although there are some tools and techniques that assist the investigator in the analysis of digital evidence, they do not adequately address some of the serious challenges, particularly with the time and effort required to conduct such tasks. In this paper, we consider the use of semantic web technologies and in particular the ontologies, to assist the investigator in analyzing digital evidence. A novel ontology-based framework is proposed for forensic analysis tools, which we believe has the potential to influence the development of such tools. The framework utilizes a set of ontologies to model the environment under investigation. The evidence extracted from the environment is initially annotated using the Resource Description Framework (RDF). The evidence is then merged from various sources to identify new and implicit information with the help of inference engines and classification mechanisms. 
In addition, we present the ongoing development of a forensic analysis tool to analyze content retrieved from Android smart phones. For this purpose, several ontologies have been created to model some concepts of the smart phone environment.", "title": "" }, { "docid": "38a18bfce2cb33b390dd7c7cf5a4afd1", "text": "Automatic photo assessment is a high emerging research field with wide useful ‘real-world’ applications. Due to the recent advances in deep learning, one can observe very promising approaches in the last years. However, the proposed solutions are adapted and optimized for ‘isolated’ datasets making it hard to understand the relationship between them and to benefit from the complementary information. Following a unifying approach, we propose in this paper a learning model that integrates the knowledge from different datasets. We conduct a study based on three representative benchmark datasets for photo assessment. Instead of developing for each dataset a specific model, we design and adapt sequentially a unique model which we nominate UNNA. UNNA consists of a deep convolutional neural network, that predicts for a given image three kinds of aesthetic information: technical quality, high-level semantical quality, and a detailed description of photographic rules. Due to the sequential adaptation that exploits the common features between the chosen datasets, UNNA has comparable performances with the state-of-the-art solutions with effectively less parameter. The final architecture of UNNA gives us some interesting indication of the kind of shared features as well as individual aspects of the considered datasets.", "title": "" }, { "docid": "033b05d21f5b8fb5ce05db33f1cedcde", "text": "Seasonal occurrence of the common cutworm Spodoptera litura (Fab.) (Lepidoptera: Noctuidae) moths captured in synthetic sex pheromone traps and associated field population of eggs and larvae in soybean were examined in India from 2009 to 2011. Male moths of S. litura first appeared in late July or early August and continued through October. Peak male trap catches occurred during the second fortnight of September, which was within soybean reproductive stages. Similarly, the first appearance of S. litura egg masses and larval populations were observed after the first appearance of male moths in early to mid-August, and were present in the growing season up to late September to mid-October. The peak appearance of egg masses and larval populations always corresponded with the peak activity of male moths recorded during mid-September in all years. Correlation studies showed that weekly mean trap catches were linearly and positively correlated with egg masses and larval populations during the entire growing season of soybean. Seasonal means of male moth catches in pheromone traps during the 2010 and 2011 seasons were significantly lower than the catches during the 2009 season. However, seasonal means of the egg masses and larval populations were not significantly different between years. Pheromone traps may be useful indicators of the onset of numbers of S. litura eggs and larvae in soybean fields.", "title": "" }, { "docid": "3eec1e9abcb677a4bc8f054fa8827f4f", "text": "We present a neural semantic parser that translates natural language questions into executable SQL queries with two key ideas. First, we develop an encoder-decoder model, where the decoder uses a simple type system of SQL to constraint the output prediction, and propose a value-based loss when copying from input tokens. 
Second, we explore using the execution semantics of SQL to repair decoded programs that result in a runtime error or return an empty result. We propose two model-agnostic repair approaches, an ensemble model and a local program repair, and demonstrate their effectiveness over the original model. We evaluate our model on the WikiSQL dataset and show that our model achieves close to state-of-the-art results with lower model complexity.", "title": "" } ]
scidocsrr
fba44c92f0153a324d800ac71a54c886
Gender Representation in Cinematic Content: A Multimodal Approach
[ { "docid": "e95541d0401a196b03b94dd51dd63a4b", "text": "In the information age, computer applications have become part of modern life and this has in turn encouraged the expectations of friendly interaction with them. Speech, as “the” communication mode, has seen the successful development of quite a number of applications using automatic speech recognition (ASR), including command and control, dictation, dialog systems for people with impairments, translation, etc. But the actual challenge goes beyond the use of speech in control applications or to access information. The goal is to use speech as an information source, competing, for example, with text online. Since the technology supporting computer applications is highly dependent on the performance of the ASR system, research into ASR is still an active topic, as is shown by the range of research directions suggested in (Baker et al., 2009a, 2009b). Automatic speech recognition – the recognition of the information embedded in a speech signal and its transcription in terms of a set of characters, (Junqua & Haton, 1996) – has been object of intensive research for more than four decades, achieving notable results. It is only to be expected that speech recognition advances make spoken language as convenient and accessible as online text when the recognizers reach error rates near zero. But while digit recognition has already reached a rate of 99.6%, (Li, 2008), the same cannot be said of phone recognition, for which the best rates are still under 80% 1,(Mohamed et al., 2011; Siniscalchi et al., 2007). Speech recognition based on phones is very attractive since it is inherently free from vocabulary limitations. Large Vocabulary ASR (LVASR) systems’ performance depends on the quality of the phone recognizer. That is why research teams continue developing phone recognizers, in order to enhance their performance as much as possible. Phone recognition is, in fact, a recurrent problem for the speech recognition community. Phone recognition can be found in a wide range of applications. In addition to typical LVASR systems like (Morris & Fosler-Lussier, 2008; Scanlon et al., 2007; Schwarz, 2008), it can be found in applications related to keyword detection, (Schwarz, 2008), language recognition, (Matejka, 2009; Schwarz, 2008), speaker identification, (Furui, 2005) and applications for music identification and translation, (Fujihara & Goto, 2008; Gruhne et al., 2007). The challenge of building robust acoustic models involves applying good training algorithms to a suitable set of data. The database defines the units that can be trained and", "title": "" }, { "docid": "9a5e04b2a6b8e81591a602b0dd81fa10", "text": "Direct content analysis reveals important details about movies including those of gender representations and potential biases. We investigate the differences between male and female character depictions in movies, based on patterns of language used. Specifically, we use an automatically generated lexicon of linguistic norms characterizing gender ladenness. We use multivariate analysis to investigate gender depictions and correlate them with elements of movie production. The proposed metric differentiates between male and female utterances and exhibits some interesting interactions with movie genres and the screenplay writer gender.", "title": "" } ]
[ { "docid": "06e3d228e9fac29dab7180e56f087b45", "text": "Curiosity is thought to be an intrinsically motivated driving force for seeking information. Thus, the opportunity for an information gain (IG) should instil curiosity in humans and result in information gathering actions. To investigate if, and how, information acts as an intrinsic reward, a search task was set in a context of blurred background images which could be revealed by iterative clicking. The search task was designed such that it prevented efficient IG about the underlying images. Participants therefore had to trade between clicking regions with high search target probability or high expected image content information. Image content IG was established from “information-maps” based on participants exploration with the intention of understanding (1) the main theme of the image and (2) how interesting the image might appear to others. Note that IG is in this thesis not identical with the information theoretic concept of information gain, the quantities are however probably related. It was hypothesised that participants would be distracted by visually informative regions and that images independently rated as more interesting would yield higher image based IG. It was also hypothesised that image based IG would increase as a function of time. Results show that participants sometimes explored images driven by curiosity, and that there was considerable individual variation in which images participants were curious about. Independent interest ratings did not account for image based IG. The level of IG increased over trials, interestingly without affecting participants’ performance on the visual search task designed to prevent IG. Results support that IG is rewarding as participants learned to optimize IG over trials without compromising performance on the extrinsically motivated search; managing to both keep the cake and eat it.", "title": "" }, { "docid": "ba590a4ae3bab635a07054860222744a", "text": "Interactive Strategy Training for Active Reading and Thinking (iSTART) is a Web-based application that provides young adolescent to college-age students with high-level reading strategy training to improve comprehension of science texts. iSTART is modeled after an effective, human-delivered intervention called self-explanation reading training (SERT), which trains readers to use active reading strategies to self-explain difficult texts more effectively. To make the training more widely available, the Web-based trainer has been developed. Transforming the training from a human-delivered application to a computer-based one has resulted in a highly interactive trainer that adapts its methods to the performance of the students. The iSTART trainer introduces the strategies in a simulated classroom setting with interaction between three animated characters-an instructor character and two student characters-and the human trainee. Thereafter, the trainee identifies the strategies in the explanations of a student character who is guided by an instructor character. Finally, the trainee practices self-explanation under the guidance of an instructor character. We describe this system and discuss how appropriate feedback is generated.", "title": "" }, { "docid": "88128ec1201e2202f13f2c09da0f07f2", "text": "A new mechanism is proposed for exciting the magnetic state of a ferromagnet. 
Assuming ballistic conditions and using WKB wave functions, we predict that a transfer of vectorial spin accompanies an electric current flowing perpendicular to two parallel magnetic films connected by a normal metallic spacer. This spin transfer drives motions of the two magnetization vectors within their instantaneously common plane. Consequent new mesoscopic precession and switching phenomena with potential applications are predicted. PACS: 75.50.Rr; 75.70.Cn A magnetic multilayer (MML) is composed of alternating ferromagnetic and paramagnetic sublayers whose thicknesses usually range between 1 and 10 nm. The discovery in 1988 of giant magnetoresistance (GMR) in such multilayers stimulates much current research [1]. Although the initial reports dealt with currents flowing in the layer planes (CIP), the magnetoresistive phenomenon is known to be even stronger for currents flowing perpendicular to the plane (CPP) [2]. We predict here that the spin-polarized nature of such a perpendicular current generally creates a mutual transference of spin angular momentum between the magnetic sublayers which is manifested in their dynamic response. This response, which occurs only for CPP geometry, we propose to characterize as spin transfer. It can dominate the Larmor response to the magnetic field induced by the current when the magnetic sublayer thickness is about 1 nm and the smaller of its other two dimensions is less than 10² to 10³ nm. On this mesoscopic scale, two new phenomena become possible: a steady precession driven by a constant current, and alternatively a novel form of switching driven by a pulsed current. Other forms of current-driven magnetic response without the use of any electromagnetically induced magnetic field are already known. Reports of both theory and experiments show how the exchange effect of external current flowing through a ferromagnetic domain wall causes it to move [3]. Even closer to the present subject is the magnetic response to tunneling current in the case of the sandwich structure ferromagnet/insulator/ferromagnet (F/I/F) predicted previously [4]. Unfortunately, theoretical relations indicated that the dissipation of energy, and therefore temperature rise, needed to produce more than barely observable spin-transfer through a tunneling barrier is prohibitively large. However, the advent of multilayers incorporating very thin paramagnetic metallic spacers, rather than a barrier, places the realization of spin transfer in a different light. In the first place, the metallic spacer implies a low resistance and therefore low Ohmic dissipation for a given current, to which spin-transfer effects are proportional. Secondly, numerous experiments [5] and theories [6] show that the fundamental interlayer exchange coupling of RKKY type diminishes in strength and varies in sign as spacer thickness increases. Indeed, there exist experimental spacers which are thick enough (e.g. 4 nm) for the exchange coupling to be negligible even though spin relaxation is too weak to significantly diminish the GMR effect which relies on preservation of spin direction during electron transit across the spacer.
Moreover, the same fact of long spin relaxation time in magnetic multilayers is illustrated on an even larger distance scale, an order of magnitude greater than the circa 10 nm electron mean free path, by spin injection experiments [7]. It follows, as we show below, that interesting current-driven spin-transfer effects are expected under laboratory conditions involving very small distance scales. We begin with simple arguments to explain current-driven spin transfer and establish its physical scale. We then sketch a detailed treatment and summarize its results. Finally, we predict two spin-transfer phenomena: steady magnetic precession driven by a constant current and a novel form of magnetic switching. We consider the five metallic regions represented schematically in Fig. 1. Layers A, B, and C are paramagnetic, whilst F1 and F2 are ferromagnetic. The instantaneous macroscopic vectors ħS1 and ħS2 forming the included angle θ represent the respective total spin momenta per unit area of the ferromagnets. Now consider a flow of electrons moving rightward through the sandwich. The works on spin injection [7] show that if the thickness of spacer B is less than the spin-diffusion length, usually at least 100 nm, then some degree of spin polarization along the instantaneous axis parallel to the vector S1 of local ferromagnetic polarization in F1 will be present in the electrons impinging on F2. This leads us to consider a three-layer (B, F2, C in Fig. 1) model in which an electron with initial spin state along the direction S1 is incident from B.", "title": "" }, { "docid": "68257960bdbc6c4f326108ee7ba3e756", "text": "In computer vision, pixelwise dense prediction is the task of predicting a label for each pixel in the image. Convolutional neural networks achieve good performance on this task, while being computationally efficient. In this paper we carry these ideas over to the problem of assigning a sequence of labels to a set of speech frames, a task commonly known as framewise classification. We show that the dense prediction view of framewise classification offers several advantages and insights, including computational efficiency and the ability to apply batch normalization. When doing dense prediction we pay specific attention to strided pooling in time and introduce an asymmetric dilated convolution, called time-dilated convolution, that allows for efficient and elegant implementation of pooling in time. We show that by using time-dilated convolutions with a very deep VGG-style CNN with batch normalization, we achieve the best published single-model accuracy result on the switchboard-2000 benchmark dataset.", "title": "" }, { "docid": "90813d00050fdb1b8ce1a9dffe858d46", "text": "Background: Diabetes mellitus is associated with biochemical and pathological alterations in the liver. The aim of this study was to investigate the effects of apple cider vinegar (ACV) on serum biochemical markers and histopathological changes in the liver of diabetic rats for 30 days. Effects were evaluated using streptozotocin (STZ)-induced diabetic rats as an experimental model. Materials and methods: Diabetes mellitus was induced by a single dose of STZ (65 mg/kg) given intraperitoneally. Thirty Wistar rats were divided into three groups: control group, STZ-treated group and STZ plus ACV treated group (2 ml/kg BW). Animals were sacrificed 30 days post treatment. 
Results: Biochemical results indicated that, ACV caused a significant decrease in glucose, TC, LDL-c and a significant increase in HDL-c. Histopathological examination of the liver sections of diabetic rats showed fatty changes in the cytoplasm of the hepatocytes in the form of accumulation of lipid droplets, lymphocytic infiltration. Electron microscopic studies revealed aggregations of polymorphic mitochondria with apparent loss of their cristae and condensed matrices. Besides, the rough endoplasmic reticulum was proliferating and fragmented into smaller stacks. The cytoplasm of the hepatocytes exhibited vacuolations and displayed a large number of lipid droplets of different sizes. On the other hand, the liver sections of diabetic rats treated with ACV showed minimal toxic effects due to streptozotocin. These ultrastructural results revealed that treatment of diabetic rats with ACV led to apparent recovery of the injured hepatocytes. In prophetic medicine, Prophet Muhammad peace is upon him strongly recommended eating vinegar in the Prophetic Hadeeth: \"vinegar is the best edible\". Conclusion: This study showed that ACV, in early stages of diabetes inductioncan decrease the destructive progress of diabetes and cause hepatoprotection against the metabolic damages resulting from streptozotocininduced diabetes mellitus.", "title": "" }, { "docid": "703696ca3af2a485ac34f88494210007", "text": "Cells navigate environments, communicate and build complex patterns by initiating gene expression in response to specific signals. Engineers seek to harness this capability to program cells to perform tasks or create chemicals and materials that match the complexity seen in nature. This Review describes new tools that aid the construction of genetic circuits. Circuit dynamics can be influenced by the choice of regulators and changed with expression 'tuning knobs'. We collate the failure modes encountered when assembling circuits, quantify their impact on performance and review mitigation efforts. Finally, we discuss the constraints that arise from circuits having to operate within a living cell. Collectively, better tools, well-characterized parts and a comprehensive understanding of how to compose circuits are leading to a breakthrough in the ability to program living cells for advanced applications, from living therapeutics to the atomic manufacturing of functional materials.", "title": "" }, { "docid": "3f0d37296258c68a20da61f34364405d", "text": "Need to develop human body's posture supervised robots, gave the push to researchers to think over dexterous design of exoskeleton robots. It requires to develop quantitative techniques to assess motor function and generate the command for the robots to act accordingly with complex human structure. In this paper, we present a new technique for the upper limb power exoskeleton robot in which load is gripped by the human subject and not by the robot while the robot assists. Main challenge is to find non-biological signal based human desired motion intention to assist as needed. For this purpose, we used newly developed Muscle Circumference Sensor (MCS) instead of electromyogram (EMG) sensors. MCS together with the force sensors is used to estimate the human interactive force from which desired human motion is extracted using adaptive Radial Basis Function Neural Network (RBFNN). Developed Upper limb power exoskeleton has seven degrees of freedom (DOF) in which five DOF are passive while two are active. 
Active joints include shoulder and elbow in Sagittal plane while abduction and adduction motion in shoulder joint is provided by the passive joints. To ensure high quality performance model reference based adaptive impedance controller is employed. Exoskeleton performance is evaluated experimentally by a neurologically intact subject which validates the effectiveness.", "title": "" }, { "docid": "3079e9dc5846c73c57f8d7fbf35d94a1", "text": "Data mining techniques is rapidly increasing in the research of educational domains. Educational data mining aims to discover hidden knowledge and patterns about student performance. This paper proposes a student performance prediction model by applying two classification algorithms: KNN and Naïve Bayes on educational data set of secondary schools, collected from the ministry of education in Gaza Strip for 2015 year. The main objective of such classification may help the ministry of education to improve the performance due to early prediction of student performance. Teachers also can take the proper evaluation to improve student learning. The experimental results show that Naïve Bayes is better than KNN by receiving the highest accuracy value of 93.6%.", "title": "" }, { "docid": "f5f70dca677752bcaa39db59988c088e", "text": "To examine how inclusive our schools are after 25 years of educational reform, students with disabilities and their parents were asked to identify current barriers and provide suggestions for removing those barriers. Based on a series of focus group meetings, 15 students with mobility limitations (9-15 years) and 12 parents identified four categories of barriers at their schools: (a) the physical environment (e.g., narrow doorways, ramps); (b) intentional attitudinal barriers (e.g., isolation, bullying); (c) unintentional attitudinal barriers (e.g., lack of knowledge, understanding, or awareness); and (d) physical limitations (e.g., difficulty with manual dexterity). Recommendations for promoting accessibility and full participation are provided and discussed in relation to inclusive education efforts. Exceptional Children", "title": "" }, { "docid": "6bfc3d00fe6e9fcdb09ad8993b733dfd", "text": "This article presents the upper-torso design issue of Affeto who can physically interact with humans, which biases the perception of affinity beyond the uncanny valley effect. First, we review the effect and hypothesize that the experience of physical interaction with Affetto decreases the effect. Then, the reality of physical existence is argued with existing platforms. Next, the design concept and a very preliminary experiment are shown. Finally, future issues are given. I. THE UNCANNY VALLEY REVISITED The term “Uncanny” is a translation of Freud’s term “Der Unheimliche” and applied to a phenomenon noted by Masahiro Mori who mentioned that the presence of movement steepens the slopes of the uncanny valley (Figure 2 in [1]). Several studies on this effect can be summarised as follows1. 1) Multimodal impressions such as visual appearance, body motion, sounds (speech and others), and tactile sensation should be congruent to decrease the valley steepness. 2) Antipathetic expressions may exaggerate the valley effect. The current technologies enable us to minimize the gap caused by mismatch among cross-modal factors. Therefore, the valley effect is expected to be reduced gradually. 
For example, facial expressions and tactile sensations of Affetto [2] are realistic and congruent due to baby-like face skin mask of urethane elastomer gel (See Figure 1). Generated facial expressions almost conquered the uncanny valley. Further, baby-like facial expressions may contribute to the reduction of the valley effect due to 2). In addition to these, we suppose that the motor experience of physical interactions with robots biases the perception of affinity as motor experiences biases the perception of movements [3]. To verify this hypothesis, Affetto needs its body which realizes physical interactions naturally. The rest of this article is organized as follows. The next section argues about the reality of physical existence with existing platforms. Then, the design concept and a very preliminary experiment are shown, and the future issues are given.", "title": "" }, { "docid": "f0365424e98ebcc0cb06ce51f65cbe7c", "text": "The most important milestone in the field of magnetic sensors was that AMR sensors started to replace Hall sensors in many application, were larger sensitivity is an advantage. GMR and SDT sensor finally found limited applications. We also review the development in miniaturization of fluxgate sensors and briefly mention SQUIDs, resonant sensors, GMIs and magnetomechanical sensors.", "title": "" }, { "docid": "316ead33d0313804b7aa95570427e375", "text": "We consider an optimal control problem with a deterministic finite horizon and state variable dynamics given by a Markovswitching jump-diffusion stochastic differential equation. Our main results extend the dynamic programming technique to this larger family of stochastic optimal control problems. More specifically, we provide a detailed proof of Bellman’s optimality principle (or dynamic programming principle) and obtain the corresponding Hamilton-Jacobi-Belman equation, which turns out to be a partial integro-differential equation due to the extra terms arising from the Lévy process and the Markov process. As an application of our results, we study a finite horizon consumptioninvestment problem for a jump-diffusion financial market consisting of one risk-free asset and one risky asset whose coefficients are assumed to depend on the state of a continuous time finite state Markov process. We provide a detailed study of the optimal strategies for this problem, for the economically relevant families of power utilities and logarithmic utilities.", "title": "" }, { "docid": "784c7c785b2e47fad138bba38b753f31", "text": "A local linear wavelet neural network (LLWNN) is presented in this paper. The difference of the network with conventional wavelet neural network (WNN) is that the connection weights between the hidden layer and output layer of conventional WNN are replaced by a local linear model. A hybrid training algorithm of particle swarm optimization (PSO) with diversity learning and gradient descent method is introduced for training the LLWNN. Simulation results for the prediction of time-series show the feasibility and effectiveness of the proposed method. r 2005 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "f1977e5f8fbc0df4df0ac6bf1715c254", "text": "Instabilities in MOS-based devices with various substrates ranging from Si, SiGe, IIIV to 2D channel materials, can be explained by defect levels in the dielectrics and non-radiative multi-phonon (NMP) barriers. 
However, recent results obtained on single defects have demonstrated that they can show a highly complex behaviour since they can transform between various states. As a consequence, detailed physical models are complicated and computationally expensive. As will be shown here, as long as only lifetime predictions for an ensemble of defects is needed, considerable simplifications are possible. We present and validate an oxide defect model that captures the essence of full physical models while reducing the complexity substantially. We apply this model to investigate the improvement in positive bias temperature instabilities due to a reliability anneal. Furthermore, we corroborate the simulated defect bands with prior defect-centric studies and perform lifetime projections.", "title": "" }, { "docid": "7303f634355e24f0dba54daa29ed2737", "text": "A power divider/combiner based on a double sided slotted waveguide geometry suitable for Ka-band applications is proposed. This structure allows up to 50% reduction of the total device length compared to previous designs of this type without compromising manufacturing complexity or combining efficiency. Efficient design guidelines based on an equivalent circuit technique are provided and the performance is demonstrated by means of a 12-way divider/combiner prototype operating in the range 29-31 GHz. Numerical simulations show that back to back insertion loss of 1.19 dB can be achieved, corresponding to a combining efficiency of 87%. The design is validated by means of manufacturing and testing an experimental prototype with measured back-to-back insertion loss of 1.83 dB with a 3 dB bandwidth of 20.8%, corresponding to a combining efficiency of 81%.", "title": "" }, { "docid": "c30f721224317a41c1e316c158549d81", "text": "The oxysterol receptor LXR is a key transcriptional regulator of lipid metabolism. LXR increases expression of SREBP-1, which in turn regulates at least 32 genes involved in lipid synthesis and transport. We recently identified 25-hydroxycholesterol-3-sulfate (25HC3S) as an important regulatory molecule in the liver. We have now studied the effects of 25HC3S and its precursor, 25-hydroxycholesterol (25HC), on lipid metabolism as mediated by the LXR/SREBP-1 signaling in macrophages. Addition of 25HC3S to human THP-1-derived macrophages markedly decreased nuclear LXR protein levels. 25HC3S administration was followed by dose- and time-dependent decreases in SREBP-1 mature protein and mRNA levels. 25HC3S decreased the expression of SREBP-1-responsive genes, acetyl-CoA carboxylase-1, and fatty acid synthase (FAS) as well as HMGR and LDLR, which are key proteins involved in lipid metabolism. Subsequently, 25HC3S decreased intracellular lipids and increased cell proliferation. In contrast to 25HC3S, 25HC acted as an LXR ligand, increasing ABCA1, ABCG1, SREBP-1, and FAS mRNA levels. In the presence of 25HC3S, 25HC, and LXR agonist T0901317, stimulation of LXR targeting gene expression was repressed. We conclude that 25HC3S acts in macrophages as a cholesterol satiety signal, downregulating cholesterol and fatty acid synthetic pathways via inhibition of LXR/SREBP signaling. A possible role of oxysterol sulfation is proposed.", "title": "" }, { "docid": "33e45b66cca92f15270500c32a1c0b94", "text": "We study a dataset of billions of program binary files that appeared on 100 million computers over the course of 12 months, discovering that 94% of these files were present on a single machine. 
Though malware polymorphism is one cause for the large number of singleton files, additional factors also contribute to polymorphism, given that the ratio of benign to malicious singleton files is 80:1. The huge number of benign singletons makes it challenging to reliably identify the minority of malicious singletons. We present a large-scale study of the properties, characteristics, and distribution of benign and malicious singleton files. We leverage the insights from this study to build a classifier based purely on static features to identify 92% of the remaining malicious singletons at a 1.4% percent false positive rate, despite heavy use of obfuscation and packing techniques by most malicious singleton files that we make no attempt to de-obfuscate. Finally, we demonstrate robustness of our classifier to important classes of automated evasion attacks.", "title": "" }, { "docid": "7e2f657115b3c9163a7fe9b34d95a314", "text": "Even though several youth fatal suicides have been linked with school victimization, there is lack of evidence on whether cyberbullying victimization causes students to adopt suicidal behaviors. To investigate this issue, I use exogenous state-year variation in cyberbullying laws and information on high school students from the Youth Risk Behavioral Survey within a bivariate probit framework, and complement these estimates with matching techniques. I find that cyberbullying has a strong impact on all suicidal behaviors: it increases suicidal thoughts by 14.5 percentage points and suicide attempts by 8.7 percentage points. Even if the focus is on statewide fatal suicide rates, cyberbullying still leads to significant increases in suicide mortality, with these effects being stronger for men than for women. Since cyberbullying laws have an effect on limiting cyberbullying, investing in cyberbullying-preventing strategies can improve individual health by decreasing suicide attempts, and increase the aggregate health stock by decreasing suicide rates.", "title": "" }, { "docid": "f636eb06a1158f4593ce8027d6f274e7", "text": "Various modifications of bagging for class imbalanced data are discussed. An experimental comparison of known bagging modifications shows that integrating with undersampling is more powerful than oversampling. We introduce Local-and-Over-All Balanced bagging where probability of sampling an example is tuned according to the class distribution inside its neighbourhood. Experiments indicate that this proposal is competitive to best undersampling bagging extensions.", "title": "" }, { "docid": "bffbc725b52468b41c53b156f6eadedb", "text": "This paper presents the design and experimental evaluation of an underwater robot that is propelled by a pair of lateral undulatory fins, inspired by the locomotion of rays and cuttlefish. Each fin mechanism is comprised of three individually actuated fin rays, which are interconnected by an elastic membrane. An on-board microcontroller generates the rays’ motion pattern that result in the fins’ undulations, through which propulsion is generated. The prototype, which is fully untethered and energetically autonomous, also integrates an Inertial Measurement Unit for navigation purposes, a wireless communication module, and a video camera for recording underwater footage. Due to its small size and low manufacturing cost, the developed prototype can also serve as an educational platform for underwater robotics.", "title": "" } ]
scidocsrr
fcbe9a04dc40f479997af388ff4cf303
A learning style classification mechanism for e-learning
[ { "docid": "9c20658d8173101492554bcf8cf89687", "text": "Students are characterized by different learning styles, focusing on different types of information and processing this information in different ways. One of the desirable characteristics of a Web-based education system is that all the students can learn despite their different learning styles. To achieve this goal we have to detect how students learn: reflecting or acting; steadily or in fits and starts; intuitively or sensitively. In this work, we evaluate Bayesian networks at detecting the learning style of a student in a Web-based education system. The Bayesian network models different aspects of a student behavior while he/she works with this system. Then, it infers his/her learning styles according to the modeled behaviors. The proposed Bayesian model was evaluated in the context of an Artificial Intelligence Web-based course. The results obtained are promising as regards the detection of students learning styles. Different levels of precision were found for the different dimensions or aspects of a learning style. 2005 Elsevier Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "40d7847859a974d2a91cccab55ba625b", "text": "Programming question and answer (Q&A) websites, such as Stack Overflow, leverage the knowledge and expertise of users to provide answers to technical questions. Over time, these websites turn into repositories of software engineering knowledge. Such knowledge repositories can be invaluable for gaining insight into the use of specific technologies and the trends of developer discussions. Previous work has focused on analyzing the user activities or the social interactions in Q&A websites. However, analyzing the actual textual content of these websites can help the software engineering community to better understand the thoughts and needs of developers. In the article, we present a methodology to analyze the textual content of Stack Overflow discussions. We use latent Dirichlet allocation (LDA), a statistical topic modeling technique, to automatically discover the main topics present in developer discussions. We analyze these discovered topics, as well as their relationships and trends over time, to gain insights into the development community. Our analysis allows us to make a number of interesting observations, including: the topics of interest to developers range widely from jobs to version control systems to C# syntax; questions in some topics lead to discussions in other topics; and the topics gaining the most popularity over time are web development (especially jQuery), mobile applications (especially Android), Git, and MySQL.", "title": "" }, { "docid": "b6508d1f2b73b90a0cfe6399f6b44421", "text": "An alternative to land spreading of manure effluents is to mass-culture algae on the N and P present in the manure and convert manure N and P into algal biomass. The objective of this study was to determine how the fatty acid (FA) content and composition of algae respond to changes in the type of manure, manure loading rate, and to whether the algae was grown with supplemental carbon dioxide. Algal biomass was harvested weekly from indoor laboratory-scale algal turf scrubber (ATS) units using different loading rates of raw and anaerobically digested dairy manure effluents and raw swine manure effluent. Manure loading rates corresponded to N loading rates of 0.2 to 1.3 g TN m−2 day−1 for raw swine manure effluent and 0.3 to 2.3 g TN m−2 day−1 for dairy manure effluents. In addition, algal biomass was harvested from outdoor pilot-scale ATS units using different loading rates of raw and anaerobically digested dairy manure effluents. Both indoor and outdoor units were dominated by Rhizoclonium sp. FA content values of the algal biomass ranged from 0.6 to 1.5% of dry weight and showed no consistent relationship to loading rate, type of manure, or to whether supplemental carbon dioxide was added to the systems. FA composition was remarkably consistent among samples and >90% of the FA content consisted of 14:0, 16:0, 16:1ω7, 16:1ω9, 18:0, 18:1ω9, 18:2 ω6, and 18:3ω3.", "title": "" }, { "docid": "97e33cc9da9cb944c27d93bb4c09ef3d", "text": "Synchrophasor devices guarantee situation awareness for real-time monitoring and operational visibility of the smart grid. With their widespread implementation, significant challenges have emerged, especially in communication, data quality and cybersecurity. The existing literature treats these challenges as separate problems, when in reality, they have a complex interplay. 
This paper conducts a comprehensive review of quality and cybersecurity challenges for synchrophasors, and identifies the interdependencies between them. It also summarizes different methods used to evaluate the dependency and surveys how quality checking methods can be used to detect potential cyberattacks. In doing so, this paper serves as a starting point for researchers entering the fields of synchrophasor data analytics and security.", "title": "" }, { "docid": "796869acd15c4c44a59b0bc139f27841", "text": "This paper presents 1-bit CMOS full adder cell using standard static CMOS logic style. The comparison is taken out using several parameters like number of transistors, delay, power dissipation and power delay product (PDP). The circuits are designed at transistor level using 180 nm and 90nm CMOS technology. Various full adders are presented in this paper like Conventional CMOS (C-CMOS), Complementary pass transistor logic FA (CPL), Double pass transistor logic FA , Transmission gate FA (TGA), Transmission function FA, New 14T,10T, Hybrid CMOS, HPSC, 24T, LPFA (CPL), LPHS, Hybrid Full Adders.", "title": "" }, { "docid": "d6c34d138692851efdbb807a89d0fcca", "text": "Vaccine hesitancy reflects concerns about the decision to vaccinate oneself or one's children. There is a broad range of factors contributing to vaccine hesitancy, including the compulsory nature of vaccines, their coincidental temporal relationships to adverse health outcomes, unfamiliarity with vaccine-preventable diseases, and lack of trust in corporations and public health agencies. Although vaccination is a norm in the U.S. and the majority of parents vaccinate their children, many do so amid concerns. The proportion of parents claiming non-medical exemptions to school immunization requirements has been increasing over the past decade. Vaccine refusal has been associated with outbreaks of invasive Haemophilus influenzae type b disease, varicella, pneumococcal disease, measles, and pertussis, resulting in the unnecessary suffering of young children and waste of limited public health resources. Vaccine hesitancy is an extremely important issue that needs to be addressed because effective control of vaccine-preventable diseases generally requires indefinite maintenance of extremely high rates of timely vaccination. The multifactorial and complex causes of vaccine hesitancy require a broad range of approaches on the individual, provider, health system, and national levels. These include standardized measurement tools to quantify and locate clustering of vaccine hesitancy and better understand issues of trust; rapid, independent, and transparent review of an enhanced and appropriately funded vaccine safety system; adequate reimbursement for vaccine risk communication in doctors' offices; and individually tailored messages for parents who have vaccine concerns, especially first-time pregnant women. The potential of vaccines to prevent illness and save lives has never been greater. Yet, that potential is directly dependent on parental acceptance of vaccines, which requires confidence in vaccines, healthcare providers who recommend and administer vaccines, and the systems to make sure vaccines are safe.", "title": "" }, { "docid": "a8688afaad32401c6827d48e25750c43", "text": "We study how to improve the accuracy and running time of top-N recommendation with collaborative filtering (CF). 
Unlike existing works that use mostly rated items (which is only a small fraction in a rating matrix), we propose the notion of pre-use preferences of users toward a vast amount of unrated items. Using this novel notion, we effectively identify uninteresting items that were not rated yet but are likely to receive very low ratings from users, and impute them as zero. This simple-yet-novel zero-injection method applied to a set of carefully-chosen uninteresting items not only addresses the sparsity problem by enriching a rating matrix but also completely prevents uninteresting items from being recommended as top-N items, thereby improving accuracy greatly. As our proposed idea is method-agnostic, it can be easily applied to a wide variety of popular CF methods. Through comprehensive experiments using the Movielens dataset and MyMediaLite implementation, we successfully demonstrate that our solution consistently and universally improves the accuracies of popular CF methods (e.g., item-based CF, SVD-based CF, and SVD++) by two to five orders of magnitude on average. Furthermore, our approach reduces the running time of those CF methods by 1.2 to 2.3 times when its setting produces the best accuracy. The datasets and codes that we used in experiments are available at: https://goo.gl/KUrmip.", "title": "" }, { "docid": "d4820344d9c229ac15d002b667c07084", "text": "In this paper, we propose to integrate semantic similarity assessment in an edit distance algorithm, seeking to amend similarity judgments when comparing XML-based legal documents[3].", "title": "" }, { "docid": "3f05325680ecc8c826a77961281b9748", "text": "The purpose of this paper is to determine which variables influence consumers’ intentions towards purchasing natural cosmetics. Several variables are included in the regression analysis such as age, gender, consumers’ purchase tendency towards organic food, consumers’ new natural cosmetics brands and consumers’ tendency towards health consciousness. The data was collected through an online survey questionnaire using the purposive sample of 204 consumers from the Dubrovnik-Neretva County in March and April of 2015. Various statistical analyses were used such as binary logistic regression and correlation analysis. Binary logistic regression results show that gender, consumers’ purchase tendency towards organic food and consumers’ purchase tendency towards new natural cosmetics brands have an influence on consumer purchase intentions. However, consumers’ tendency towards health consciousness has no influence on consumers’ intentions towards purchasing natural cosmetics. Results of the correlation analysis indicate that there is a strong positive correlation between purchase intentions towards natural cosmetics and consumer references of natural cosmetics. 
The findings may be useful to online retailers, as well as to marketers and practitioners, in recognizing and better understanding the new trends that occur in the natural cosmetics industry.", "title": "" }, { "docid": "c5e0ba5e8ceb8c684366b4aae1a43dc2", "text": "This document proposes to contribute to the conceptualization and implementation of data recovery techniques through the abstraction of recovery methodologies and of the aspects that influence the process, relating human motivation to research needs, whether these are for auditing or for computer science. This allows generating a classification of recovery techniques in the absence of the metadata provided by the filesystem; in this sense, file carving techniques have been proposed as a solution option. Finally, it is revealed that while many file carving techniques are being implemented in other tools, they are still in the research phase.", "title": "" }, { "docid": "c1a96dbed9373dddd0a7a07770395a7e", "text": "Mobile devices are increasingly the dominant Internet access technology. Nevertheless, high costs, data caps, and throttling are a source of widespread frustration, and a significant barrier to adoption in emerging markets. This paper presents Flywheel, an HTTP proxy service that extends the life of mobile data plans by compressing responses in-flight between origin servers and client browsers. Flywheel is integrated with the Chrome web browser and reduces the size of proxied web pages by 50% for a median user. We report measurement results from millions of users as well as experience gained during three years of operating and evolving the production", "title": "" }, { "docid": "848eee0774708928668d4896d321fe00", "text": "Machine learning is one of the most exciting recent technologies in Artificial Intelligence. Learning algorithms are used in many applications that we make use of daily. Every time a web search engine like Google or Bing is used to search the internet, one of the reasons it works so well is that a learning algorithm, one implemented by Google or Microsoft, has learned how to rank web pages. Every time Facebook is used and it recognizes friends' photos, that's also machine learning. Spam filters in email save the user from having to wade through tons of spam email; that's also a learning algorithm. In this paper, a brief review and future prospect of the vast applications of machine learning are presented.", "title": "" }, { "docid": "a377b31c0cb702c058f577ca9c3c5237", "text": "Problem statement: Extensive research efforts in the area of Natural Language Processing (NLP) were focused on developing reading comprehension Question Answering systems (QA) for Latin-based languages such as English, French and German. Approach: However, little effort was directed towards the development of such systems for bidirectional languages such as Arabic, Urdu and Farsi. In general, QA systems are more sophisticated and more complex than Search Engines (SE) because they seek a specific and somewhat exact answer to the query. Results: Existing Arabic QA systems, including the most recent ones described, excluded one or both types of questions (How and Why) from their work because of the difficulty of handling these questions. In this study, we present a new approach and a new question-answering system (QArabPro) for reading comprehension texts in Arabic. The overall accuracy of our system is 84%. Conclusion/Recommendations: These results are promising compared to existing systems. 
Our system handles all types of questions, including How and Why.", "title": "" }, { "docid": "52a3688f1474b824a6696b03a8b6536c", "text": "Credit scoring models have been widely studied in the areas of statistics, machine learning, and artificial intelligence (AI). Many novel approaches such as artificial neural networks (ANNs), rough sets, or decision trees have been proposed to increase the accuracy of credit scoring models. Since an improvement in accuracy of a fraction of a percent might translate into significant savings, a more sophisticated model should be proposed for significantly improving the accuracy of the credit scoring models. In this paper, two-stage genetic programming (2SGP) is proposed to deal with the credit scoring problem by incorporating the advantages of the IF–THEN rules and the discriminant function. On the basis of the numerical results, we can conclude that 2SGP can provide better accuracy than other models. 2005 Published by Elsevier Inc.", "title": "" }, { "docid": "893f631e0a0ca9851097bc54a14b1ea8", "text": "Thirteen subjects detected noise burst targets presented in a white noise background at a mean rate of 10/min. Within each session, local error rate, defined as the fraction of targets detected in a 33 sec moving window, fluctuated widely. Mean coherence between slow mean variations in EEG power and in local error rate was computed for each EEG frequency and performance cycle length, and was shown by a Monte Carlo procedure to be significant for many EEG frequencies and performance cycle lengths, particularly in 4 well-defined EEG frequency bands, near 3, 10, 13, and 19 Hz, and at higher frequencies in two cycle length ranges, one longer than 4 min and the other near 90 sec/cycle. The coherence phase plane contained a prominent phase reversal near 6 Hz. Sorting individual spectra by local error rate confirmed the close relation between performance and EEG power and its relative within-subject stability. These results show that attempts to maintain alertness in an auditory detection task result in concurrent minute and multi-minute scale fluctuations in performance and the EEG power spectrum.", "title": "" }, { "docid": "5686b87484f2e78da2c33ed03b1a536c", "text": "Although an automated flexible production cell is an intriguing prospect for small to medium enterprises (SMEs) in current global market conditions, the complexity of programming remains one of the major hurdles preventing automation using industrial robots for SMEs. This paper provides a comprehensive review of recent research progress on programming methods for industrial robots, including online programming, offline programming (OLP), and programming using Augmented Reality (AR). With the development of more powerful 3D CAD/PLM software, computer vision, sensor technology, etc., new programming methods suitable for SMEs are expected to grow in years to come. (C) 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "271639e9eea6a47f3d80214517444072", "text": "The treatment of juvenile idiopathic arthritis (JIA) is evolving. 
The growing number of effective drugs has led to successful treatment and prevention of long-term sequelae in most patients. Although patients with JIA frequently achieve lasting clinical remission, sustained remission off medication is still elusive for most. Treatment approaches vary substantially among paediatric rheumatologists owing to the inherent heterogeneity of JIA and, until recently, to the lack of accepted and well-evidenced guidelines. Furthermore, many pertinent questions related to patient management remain unanswered, in particular regarding treatment targets, and selection, intensity and sequence of initiation or withdrawal of therapy. Existing JIA guidelines and recommendations do not specify treat-to-target or tight control strategies, in contrast to adult rheumatology in which these approaches have been successful. The concepts of window of opportunity (early treatment to improve long-term outcomes) and immunological remission (abrogation of subclinical disease activity) are also fundamental when defining treatment methodologies. This Review explores the application of these concepts to JIA and their possible contribution to the development of future clinical guidelines or consensus treatment protocols. The article also discusses how diverse forms of standardized, guideline-led care and personalized treatment can be combined into a targeted, patient-centred approach to optimize management strategies for patients with JIA.", "title": "" }, { "docid": "9888ef3aefca1049307ecd49ea5a3a49", "text": "We live in a \"small world,\" where two arbitrary people are likely connected by a short chain of intermediate friends. With scant information about a target individual, people can successively forward a message along such a chain. Experimental studies have verified this property in real social networks, and theoretical models have been advanced to explain it. However, existing theoretical models have not been shown to capture behavior in real-world social networks. Here, we introduce a richer model relating geography and social-network friendship, in which the probability of befriending a particular person is inversely proportional to the number of closer people. In a large social network, we show that one-third of the friendships are independent of geography and the remainder exhibit the proposed relationship. Further, we prove analytically that short chains can be discovered in every network exhibiting the relationship.", "title": "" }, { "docid": "7af9eaf2c3bcac72049a9d4d1e6b3498", "text": "This paper proposes a fast algorithm for integrating connected-component labeling and Euler number computation. Based on graph theory, the Euler number of a binary image in the proposed algorithm is calculated by counting the occurrences of four patterns of the mask for processing foreground pixels in the first scan of a connected-component labeling process, where these four patterns can be found directly without any additional calculation; thus, connected-component labeling and Euler number computation can be integrated more efficiently. Moreover, when computing the Euler number, unlike other conventional algorithms, the proposed algorithm does not need to process background pixels. 
Experimental results demonstrate that the proposed algorithm is much more efficient than conventional algorithms either for calculating the Euler number alone or simultaneously calculating the Euler number and labeling connected components.", "title": "" }, { "docid": "dda739b8c4f645162313a2a691f48aa5", "text": "Classification of time series data is an important problem with applications in virtually every scientific endeavor. The large research community working on time series classification has typically used the UCR Archive to test their algorithms. In this work we argue that the availability of this resource has isolated much of the research community from the following reality, labeled time series data is often very difficult to obtain. The obvious solution to this problem is the application of semi-supervised learning; however, as we shall show, direct applications of off-the-shelf semi-supervised learning algorithms do not typically work well for time series. In this work we explain why semi-supervised learning algorithms typically fail for time series problems, and we introduce a simple but very effective fix. We demonstrate our ideas on diverse real word problems.", "title": "" }, { "docid": "65cc9459269fb23dd97ec25ffad4f041", "text": "Most of the existing literature on CRM value chain creation has focused on the effect of customer satisfaction and customer loyalty on customer profitability. In contrast, little has been studied about the CRM value creation chain at individual customer level and the role of self-construal (i.e., independent self-construal and interdependent self-construal) in such a chain. This research aims to construct the chain from customer value to organization value (i.e., customer satisfaction ? customer loyalty ? patronage behavior) and investigate the moderating effect of self-construal. To test the hypotheses suggested by our conceptual framework, we collected 846 data points from China in the context of mobile data services. The results show that customer’s self-construal can moderate the relationship chain from customer satisfaction to customer loyalty to relationship maintenance and development. This implies firms should tailor their customer strategies based on different self-construal features. 2014 Elsevier B.V. All rights reserved.", "title": "" } ]
scidocsrr
7ec7b9d74b2aa147339e866503787244
Wireless Sensor Networks for Early Detection of Forest Fires
[ { "docid": "8e0e77e78c33225922b5a45fee9b4242", "text": "In this paper, we address the issues of maintaining sensing coverage and connectivity by keeping a minimum number of sensor nodes in the active mode in wireless sensor networks. We investigate the relationship between coverage and connectivity by solving the following two sub-problems. First, we prove that if the radio range is at least twice the sensing range, complete coverage of a convex area implies connectivity among the working set of nodes. Second, we derive, under the ideal case in which node density is sufficiently high, a set of optimality conditions under which a subset of working sensor nodes can be chosen for complete coverage. Based on the optimality conditions, we then devise a decentralized density control algorithm, Optimal Geographical Density Control (OGDC), for density control in large scale sensor networks. The OGDC algorithm is fully localized and can maintain coverage as well as connectivity, regardless of the relationship between the radio range and the sensing range. Ns-2 simulations show that OGDC outperforms existing density control algorithms [25, 26, 29] with respect to the number of working nodes needed and network lifetime (with up to 50% improvement), and achieves almost the same coverage as the algorithm with the best result.", "title": "" } ]
[ { "docid": "dbe62d1ffe794e26ac7c8418f3908f70", "text": "Numerical differentiation in noisy environment is revised through an algebraic approach. For each given order, an explicit formula yielding a pointwise derivative estimation is derived, using elementary differential algebraic operations. These expressions are composed of iterated integrals of the noisy observation signal. We show in particular that the introduction of delayed estimates affords significant improvement. An implementation in terms of a classical finite impulse response (FIR) digital filter is given. Several simulation results are presented.", "title": "" }, { "docid": "9853f157525548a35bcbe118fdefaf33", "text": "We address the task of 6D pose estimation of known rigid objects from single input images in scenarios where the objects are partly occluded. Recent RGB-D-based methods are robust to moderate degrees of occlusion. For RGB inputs, no previous method works well for partly occluded objects. Our main contribution is to present the first deep learning-based system that estimates accurate poses for partly occluded objects from RGB-D and RGB input. We achieve this with a new instance-aware pipeline that decomposes 6D object pose estimation into a sequence of simpler steps, where each step removes specific aspects of the problem. The first step localizes all known objects in the image using an instance segmentation network, and hence eliminates surrounding clutter and occluders. The second step densely maps pixels to 3D object surface positions, so called object coordinates, using an encoder-decoder network, and hence eliminates object appearance. The third, and final, step predicts the 6D pose using geometric optimization. We demonstrate that we significantly outperform the state-of-the-art for pose estimation of partly occluded objects for both RGB and RGB-D input.", "title": "" }, { "docid": "c077231164a8a58f339f80b83e5b4025", "text": "It is widely believed that refactoring improves software quality and developer productivity. However, few empirical studies quantitatively assess refactoring benefits or investigate developers' perception towards these benefits. This paper presents a field study of refactoring benefits and challenges at Microsoft through three complementary study methods: a survey, semi-structured interviews with professional software engineers, and quantitative analysis of version history data. Our survey finds that the refactoring definition in practice is not confined to a rigorous definition of semantics-preserving code transformations and that developers perceive that refactoring involves substantial cost and risks. We also report on interviews with a designated refactoring team that has led a multi-year, centralized effort on refactoring Windows. The quantitative analysis of Windows 7 version history finds that the binary modules refactored by this team experienced significant reduction in the number of inter-module dependencies and post-release defects, indicating a visible benefit of refactoring.", "title": "" }, { "docid": "3bf954a23ea3e7d5326a7b89635f966a", "text": "The particle swarm optimizer (PSO) is a stochastic, population-based optimization technique that can be applied to a wide range of problems, including neural network training. This paper presents a variation on the traditional PSO algorithm, called the cooperative particle swarm optimizer, or CPSO, employing cooperative behavior to significantly improve the performance of the original algorithm. 
This is achieved by using multiple swarms to optimize different components of the solution vector cooperatively. Application of the new PSO algorithm on several benchmark optimization problems shows a marked improvement in performance over the traditional PSO.", "title": "" }, { "docid": "57bd8c0c2742027de4b599b129506154", "text": "Software instrumentation is a powerful and flexible technique for analyzing the dynamic behavior of programs. By inserting extra code in an application, it is possible to study the performance and correctness of programs and systems. Pin is a software system that performs run-time binary instrumentation of unmodified applications. Pin provides an API for writing custom instrumentation, enabling its use in a wide variety of performance analysis tasks such as workload characterization, program tracing, cache modeling, and simulation. Most of the prior work on instrumentation systems has focused on executing Unix applications, despite the ubiquity and importance of Windows applications. This paper identifies the Windows-specific obstacles for implementing a process-level instrumentation system, describes a comprehensive, robust solution, and discusses some of the alternatives. The challenges lie in managing the kernel/application transitions, injecting the runtime agent into the process, and isolating the instrumentation from the application. We examine Pin's overhead on typical Windows applications being instrumented with simple tools up to commercial program analysis products. The biggest factor affecting performance is the type of analysis performed by the tool. While the proprietary nature of Windows makes measurement and analysis difficult, Pin opens the door to understanding program behavior.", "title": "" }, { "docid": "8075cc962ce18cea46a8df4396512aa5", "text": "In the last few years, neural representation learning approaches have achieved very good performance on many natural language processing tasks, such as language modelling and machine translation. This suggests that neural models will also achieve good performance on information retrieval (IR) tasks, such as relevance ranking, addressing the query-document vocabulary mismatch problem by using a semantic rather than lexical matching. Although initial iterations of neural models do not outperform traditional lexical-matching baselines, the level of interest and effort in this area is increasing, potentially leading to a breakthrough. The popularity of the recent SIGIR 2016 workshop on Neural Information Retrieval provides evidence to the growing interest in neural models for IR. While recent tutorials have covered some aspects of deep learning for retrieval tasks, there is a significant scope for organizing a tutorial that focuses on the fundamentals of representation learning for text retrieval. The goal of this tutorial will be to introduce state-of-the-art neural embedding models and bridge the gap between these neural models with early representation learning approaches in IR (e.g., LSA). We will discuss some of the key challenges and insights in making these models work in practice, and demonstrate one of the toolsets available to researchers interested in this area.", "title": "" }, { "docid": "0110e37c5525520a4db4b1a775dacddd", "text": "This paper presents a study of Linux API usage across all applications and libraries in the Ubuntu Linux 15.04 distribution. We propose metrics for reasoning about the importance of various system APIs, including system calls, pseudo-files, and libc functions. 
Our metrics are designed for evaluating the relative maturity of a prototype system or compatibility layer, and this paper focuses on compatibility with Linux applications. This study uses a combination of static analysis to understand API usage and survey data to weight the relative importance of applications to end users.\n This paper yields several insights for developers and researchers, which are useful for assessing the complexity and security of Linux APIs. For example, every Ubuntu installation requires 224 system calls, 208 ioctl, fcntl, and prctl codes and hundreds of pseudo files. For each API type, a significant number of APIs are rarely used, if ever. Moreover, several security-relevant API changes, such as replacing access with faccessat, have met with slow adoption. Finally, hundreds of libc interfaces are effectively unused, yielding opportunities to improve security and efficiency by restructuring libc.", "title": "" }, { "docid": "ffd84e3418a6d1d793f36bfc2efed6be", "text": "Anterior cingulate cortex (ACC) is a part of the brain's limbic system. Classically, this region has been related to affect, on the basis of lesion studies in humans and in animals. In the late 1980s, neuroimaging research indicated that ACC was active in many studies of cognition. The findings from EEG studies of a focal area of negativity in scalp electrodes following an error response led to the idea that ACC might be the brain's error detection and correction device. In this article, these various findings are reviewed in relation to the idea that ACC is a part of a circuit involved in a form of attention that serves to regulate both cognitive and emotional processing. Neuroimaging studies showing that separate areas of ACC are involved in cognition and emotion are discussed and related to results showing that the error negativity is influenced by affect and motivation. In addition, the development of the emotional and cognitive roles of ACC are discussed, and how the success of this regulation in controlling responses might be correlated with cingulate size. Finally, some theories are considered about how the different subdivisions of ACC might interact with other cortical structures as a part of the circuits involved in the regulation of mental and emotional activity.", "title": "" }, { "docid": "c10829be320a9be6ecbc9ca751e8b56e", "text": "This article analyzes two decades of research regarding the mass media's role in shaping, perpetuating, and reducing the stigma of mental illness. It concentrates on three broad areas common in media inquiry: production, representation, and audiences. The analysis reveals that descriptions of mental illness and the mentally ill are distorted due to inaccuracies, exaggerations, or misinformation. The ill are presented not only as peculiar and different, but also as dangerous. Thus, the media perpetuate misconceptions and stigma. Especially prominent is the absence of agreed-upon definitions of \"mental illness,\" as well as the lack of research on the inter-relationships in audience studies between portrayals in the media and social perceptions. The analysis concludes with suggestions for further research on mass media's inter-relationships with mental illness.", "title": "" }, { "docid": "00c19e68020aff7fd86aa7e514cc0668", "text": "Network forensic techniques help in tracking different types of cyber attack by monitoring and inspecting network traffic. 
However, with the high speed and large sizes of current networks, and the sophisticated philosophy of attackers, in particular mimicking normal behaviour and/or erasing traces to avoid detection, investigating such crimes demands intelligent network forensic techniques. This paper suggests a real-time collaborative network Forensic scheme (RCNF) that can monitor and investigate cyber intrusions. The scheme includes three components of capturing and storing network data, selecting important network features using chi-square method and investigating abnormal events using a new technique called correntropy-variation. We provide a case study using the UNSW-NB15 dataset for evaluating the scheme, showing its high performance in terms of accuracy and false alarm rate compared with three recent state-of-the-art mechanisms.", "title": "" }, { "docid": "1b30c14536db1161b77258b1ce213fbb", "text": "Click-through rate (CTR) prediction and relevance ranking are two fundamental problems in web advertising. In this study, we address the problem of modeling the relationship between CTR and relevance for sponsored search. We used normalized relevance scores comparable across all queries to represent relevance when modeling with CTR, instead of directly using human judgment labels or relevance scores valid only within same query. We classified clicks by identifying their relevance quality using dwell time and session information, and compared all clicks versus selective clicks effects when modeling relevance.\n Our results showed that the cleaned click signal outperforms raw click signal and others we explored, in terms of relevance score fitting. The cleaned clicks include clicks with dwell time greater than 5 seconds and last clicks in session. Besides traditional thoughts that there is no linear relation between click and relevance, we showed that the cleaned click based CTR can be fitted well with the normalized relevance scores using a quadratic regression model. This relevance-click model could help to train ranking models using processed click feedback to complement expensive human editorial relevance labels, or better leverage relevance signals in CTR prediction.", "title": "" }, { "docid": "d1a94ed95234d9ea660b6e4779a6a694", "text": "This study aims to analyse the scientific literature on sustainability and innovation in the automotive sector in the last 13 years. The research is classified as descriptive and exploratory. The process presented 31 articles in line with the research topic in the Scopus database. The bibliometric analysis identified the most relevant articles, authors, keywords, countries, research centers and journals for the subject from 2004 to 2016 in the Industrial Engineering domain. We concluded, through the systemic analysis, that the automotive sector is well structured on the issue of sustainability and process innovation. Innovations in the sector are of the incremental process type, due to the lower risk, lower costs and less complexity. However, the literature also points out that radical innovations are needed in order to fit the prevailing environmental standards. The selected studies show that environmental practices employed in the automotive sector are: the minimization of greenhouse gas emissions, life-cycle assessment, cleaner production, reverse logistics and eco-innovation. 
Thus, it displays the need for empirical studies in automotive companies on the environmental practices employed and how these practices impact innovation.", "title": "" }, { "docid": "5bf0406864b500084480081d8cddcb82", "text": "Polymer scaffolds have many different functions in the field of tissue engineering. They are applied as space filling agents, as delivery vehicles for bioactive molecules, and as three-dimensional structures that organize cells and present stimuli to direct the formation of a desired tissue. Much of the success of scaffolds in these roles hinges on finding an appropriate material to address the critical physical, mass transport, and biological design variables inherent to each application. Hydrogels are an appealing scaffold material because they are structurally similar to the extracellular matrix of many tissues, can often be processed under relatively mild conditions, and may be delivered in a minimally invasive manner. Consequently, hydrogels have been utilized as scaffold materials for drug and growth factor delivery, engineering tissue replacements, and a variety of other applications.", "title": "" }, { "docid": "4a1db0cab3812817c3ebb149bd8b3021", "text": "Structural information in web text provides natural annotations for NLP problems such as word segmentation and parsing. In this paper we propose a discriminative learning algorithm to take advantage of the linguistic knowledge in large amounts of natural annotations on the Internet. It utilizes the Internet as an external corpus with massive (although slight and sparse) natural annotations, and enables a classifier to evolve on the large-scaled and real-time updated web text. With Chinese word segmentation as a case study, experiments show that the segmenter enhanced with the Chinese wikipedia achieves significant improvement on a series of testing sets from different domains, even with a single classifier and local features.", "title": "" }, { "docid": "7788cf06b7c9f09013bd15607e11cd79", "text": "Separate Cox analyses of all cause-specific hazards are the standard technique of choice to study the effect of a covariate in competing risks, but a synopsis of these results in terms of cumulative event probabilities is challenging. This difficulty has led to the development of the proportional subdistribution hazards model. If the covariate is known at baseline, the model allows for a summarizing assessment in terms of the cumulative incidence function. black Mathematically, the model also allows for including random time-dependent covariates, but practical implementation has remained unclear due to a certain risk set peculiarity. We use the intimate relationship of discrete covariates and multistate models to naturally treat time-dependent covariates within the subdistribution hazards framework. The methodology then straightforwardly translates to real-valued time-dependent covariates. As with classical survival analysis, including time-dependent covariates does not result in a model for probability functions anymore. Nevertheless, the proposed methodology provides a useful synthesis of separate cause-specific hazards analyses. We illustrate this with hospital infection data, where time-dependent covariates and competing risks are essential to the subject research question.", "title": "" }, { "docid": "f1a5a1683b6796aebb98afce2068ffff", "text": "Printed text recognition is an important problem for industrial OCR systems. Printed text is constructed in a standard procedural fashion in most settings. 
We develop a mathematical model for this process that can be applied to the backward inference problem of text recognition from an image. Through ablation experiments we show that this model is realistic and that a multi-task objective setting can help to stabilize estimation of its free parameters, enabling use of conventional deep learning methods. Furthermore, by directly modeling the geometric perturbations of text synthesis we show that our model can help recover missing characters from incomplete text regions, the bane of multicomponent OCR systems, enabling recognition even when the detection returns incomplete in-", "title": "" }, { "docid": "9b0114697dc6c260610d0badc1d7a2a4", "text": "This review captures the synthesis, assembly, properties, and applications of copper chalcogenide NCs, which have achieved significant research interest in the last decade due to their compositional and structural versatility. The outstanding functional properties of these materials stems from the relationship between their band structure and defect concentration, including charge carrier concentration and electronic conductivity character, which consequently affects their optoelectronic, optical, and plasmonic properties. This, combined with several metastable crystal phases and stoichiometries and the low energy of formation of defects, makes the reproducible synthesis of these materials, with tunable parameters, remarkable. Further to this, the review captures the progress of the hierarchical assembly of these NCs, which bridges the link between their discrete and collective properties. Their ubiquitous application set has cross-cut energy conversion (photovoltaics, photocatalysis, thermoelectrics), energy storage (lithium-ion batteries, hydrogen generation), emissive materials (plasmonics, LEDs, biolabelling), sensors (electrochemical, biochemical), biomedical devices (magnetic resonance imaging, X-ray computer tomography), and medical therapies (photochemothermal therapies, immunotherapy, radiotherapy, and drug delivery). The confluence of advances in the synthesis, assembly, and application of these NCs in the past decade has the potential to significantly impact society, both economically and environmentally.", "title": "" }, { "docid": "7025d357898c5997e225299f398c42f0", "text": "UNLABELLED\nAnnotating genetic variants, especially non-coding variants, for the purpose of identifying pathogenic variants remains a challenge. Combined annotation-dependent depletion (CADD) is an algorithm designed to annotate both coding and non-coding variants, and has been shown to outperform other annotation algorithms. CADD trains a linear kernel support vector machine (SVM) to differentiate evolutionarily derived, likely benign, alleles from simulated, likely deleterious, variants. However, SVMs cannot capture non-linear relationships among the features, which can limit performance. To address this issue, we have developed DANN. DANN uses the same feature set and training data as CADD to train a deep neural network (DNN). DNNs can capture non-linear relationships among features and are better suited than SVMs for problems with a large number of samples and features. We exploit Compute Unified Device Architecture-compatible graphics processing units and deep learning techniques such as dropout and momentum training to accelerate the DNN training. 
DANN achieves about a 19% relative reduction in the error rate and about a 14% relative increase in the area under the curve (AUC) metric over CADD's SVM methodology.\n\n\nAVAILABILITY AND IMPLEMENTATION\nAll data and source code are available at https://cbcl.ics.uci.edu/public_data/DANN/.", "title": "" }, { "docid": "7b6c039783091260cee03704ce9748d8", "text": "We describe Algorithm 2 in detail. Algorithm 2 takes as input the sample set S, the query sequence F, the sensitivity of query ∆, the threshold τ, and the stop parameter s. Algorithm 2 outputs the result of each comparison with the threshold. In Algorithm 2, each noisy query output is compared with a noisy threshold at line 4, and the result of the comparison is output. Let ⊤ mean that f_k(S) > τ. Algorithm 2 is terminated if it outputs ⊤ s times.", "title": "" }, { "docid": "7131f6062fcb4fd1d532516499105b02", "text": "Markov influence diagrams (MIDs) are a new type of probabilistic graphical model that extends influence diagrams in the same way that Markov decision trees extend decision trees. They have been designed to build state-transition models, mainly in medicine, and perform cost-effectiveness analyses. Using a causal graph that may contain several variables per cycle, MIDs can model various patient characteristics without multiplying the number of states; in particular, they can represent the history of the patient without using tunnel states. OpenMarkov, an open-source tool, allows the decision analyst to build and evaluate MIDs, including cost-effectiveness analysis and several types of deterministic and probabilistic sensitivity analysis, with a graphical user interface, without writing any code. This way, MIDs can be used to easily build and evaluate complex models whose implementation as spreadsheets or decision trees would be cumbersome or unfeasible in practice. Furthermore, many problems that previously required discrete event simulation can be solved with MIDs; i.e., within the paradigm of state-transition models, in which many health economists feel more comfortable.", "title": "" } ]
scidocsrr
535ca445e0bf8921707453ff120bd059
Transforming Experience: The Potential of Augmented Reality and Virtual Reality for Enhancing Personal and Clinical Change
[ { "docid": "fb5a38c1dbbc7416f9b15ee19be9cc06", "text": "This study uses a body motion interactive game developed in Scratch 2.0 to enhance the body strength of children with disabilities. Scratch 2.0, using an augmented-reality function on a program platform, creates real world and virtual reality displays at the same time. This study uses a webcam integration that tracks movements and allows participants to interact physically with the project, to enhance the motivation of children with developmental disabilities to perform physical activities. This study follows a single-case research using an ABAB structure, in which A is the baseline and B is the intervention. The experimental period was 2 months. The experimental results demonstrated that the scores for 3 children with developmental disabilities increased considerably during the intervention phrases. The developmental applications of these results are also discussed.", "title": "" }, { "docid": "3da0597ce369afdec1716b1fedbce7d1", "text": "We describe a theoretical model of the neurocognitive mechanisms underlying conscious presence and its disturbances. The model is based on interoceptive prediction error and is informed by predictive models of agency, general models of hierarchical predictive coding and dopaminergic signaling in cortex, the role of the anterior insular cortex (AIC) in interoception and emotion, and cognitive neuroscience evidence from studies of virtual reality and of psychiatric disorders of presence, specifically depersonalization/derealization disorder. The model associates presence with successful suppression by top-down predictions of informative interoceptive signals evoked by autonomic control signals and, indirectly, by visceral responses to afferent sensory signals. The model connects presence to agency by allowing that predicted interoceptive signals will depend on whether afferent sensory signals are determined, by a parallel predictive-coding mechanism, to be self-generated or externally caused. Anatomically, we identify the AIC as the likely locus of key neural comparator mechanisms. Our model integrates a broad range of previously disparate evidence, makes predictions for conjoint manipulations of agency and presence, offers a new view of emotion as interoceptive inference, and represents a step toward a mechanistic account of a fundamental phenomenological property of consciousness.", "title": "" } ]
[ { "docid": "ae393c8f1afc39d6f4ad7ce4b5640034", "text": "Generative adversarial networks have gained a lot of attention in general computer vision community due to their capability of data generation without explicitly modelling the probability density function and robustness to overfitting. The adversarial loss brought by the discriminator provides a clever way of incorporating unlabeled samples into the training and imposing higher order consistency that is proven to be useful in many cases, such as in domain adaptation, data augmentation, and image-to-image translation. These nice properties have attracted researcher in the medical imaging community and we have seen quick adoptions in many traditional tasks and some novel applications. This trend will continue to grow based on our observation therefore we conducted a review of the recent advances in medical imaging using the adversarial training scheme in the hope of benefiting researchers that are interested in this technique.", "title": "" }, { "docid": "f38ad855c66a43529d268b81c9ea4c69", "text": "In the recent years, countless security concerns related to automotive systems were revealed either by academic research or real life attacks. While current attention was largely focused on passenger cars, due to their ubiquity, the reported bus-related vulnerabilities are applicable to all industry sectors where the same bus technology is deployed, i.e., the CAN bus. The SAE J1939 specification extends and standardizes the use of CAN to commercial vehicles where security plays an even higher role. In contrast to empirical results that attest such vulnerabilities in commercial vehicles by practical experiments, here, we determine that existing shortcomings in the SAE J1939 specifications open road to several new attacks, e.g., impersonation, denial of service (DoS), distributed DoS, etc. Taking the advantage of an industry-standard CANoe based simulation, we demonstrate attacks with potential safety critical effects that are mounted while still conforming to the SAE J1939 standard specification. We discuss countermeasures and security enhancements by including message authentication mechanisms. Finally, we evaluate and discuss the impact of employing these mechanisms on the overall network communication.", "title": "" }, { "docid": "4c8eaddb55bda61bd92b1f474e0be8b6", "text": "This article discusses varied ideas on games, learning, and digital literacy for 21st-century education as theorized and practiced by the author and James Paul Gee, and their colleagues. With attention to games as means for learning, the author links Gee’s theories to the learning sciences tradition (particularly those of the MIT Constructionists) and extending game media literacy to encompass “writing” (producing) as well as “reading” (playing) games. If game-playing is like reading and game-making is like writing, then we must introduce learners to both from a young age. The imagining and writing of web-games fosters the development of many essential skill-sets needed for creativity and innovation, providing an appealing new way for a global computing education, STEM education, for closing achievement gaps. Gee and the author reveal a shared aim to encourage researchers and theorists, as well as policymakers, to investigate gaming with regard to epistemology and cognition. DOI: 10.4018/jgcms.2010010101 2 International Journal of Gaming and Computer-Mediated Simulations, 2(1), 1-16, January-March 2010 Copyright © 2010, IGI Global. 
ing tools; 2) videogames that teach educational content; 3) games and sims that involve modding and design as a learning environment; 4) game-making systems like GameStar Mechanics, Game Maker, Scratch; and 5) widely-used professional software programming tools like Java or Flash ActionScript. This AERA session was intended to be a field-building session—a step toward a much larger conversation about the meaning and value of various kinds of game practices and literacies. We sought to shed light on why today’s students should become game-literate, and to demonstrate a variety of possible routes that lead to game literacy. We also discussed the role of utilizing games and creating game-media in the learning and cognitive development of today’s generation of students and educators. Multiple Traditions for Initiating and Interpreting Gaming Practices for Learning Game literacy is a multidimensional combination of varied practices (e.g., reading, writing, and calculating; textual, visual, and spatial cognition; interactive design, programming, and engineering; multitasking and system understanding; meaning making, storytelling, role playing, perspective taking, and exercising judgment; etc.). Different gaming practices form a whole that has roots in both traditional literacy theories and Constructionist digital literacy. Though seemingly disparate, both traditions attempt to develop methods for describing how players/learners learn and how they construct knowledge in gaming contexts. Both traditions focus on the processes of learning rather than the product (winning the game or the actual game created by a learner/designer). Both traditions struggle with the difficulties of capturing the process of learning (an intersection of individual, context and activity over time within a situated perspective) as a unit of analysis. Despite the challenges that persist in such a dynamic and distributed object of study, educators and researchers continue to explore and refine innovative methodological approaches that capture and track learning as it flourishes within the rich environments of various gaming practices so as to inform instructional practice and design (also known as design-based research, e.g., Brown, 1996; Dede, 2005). Research into Playing Videogames The fascination with and research on the cognitive and learning processes that occurs during videogame play is becoming increasingly prominent—so much so, that a national conference dedicated entirely to this topic was launched by Dr. James Paul Gee in 2004 as a venue for scholarly discourse (Games, Learning and Society, GLS, www.glsconference.org). In this growing field of gaming research, scholars are addressing the nature of cognitive and emotional development, literacy practices, and thinking and learning during gameplay in a range of gaming environments and genres (Barab, 2009; Gee, 2003, 2007; Shaffer, 2006; Squire, 2002, 2006, 2009; Steinkuehler, 2007, 2009a, 2009b). This line of research focuses on assessing different kinds of learning while playing games released commercially for entertainment (e.g., World of Warcraft, Grand Theft Auto, Zelda, Quake, Dance Dance Revolution, Guitar Hero, Rock Band), or edutainment games (e.g., Civilization, Quest Atlantis) in various contexts (mostly out of school, in homes, clubs and afterschool programs). 
These scholars claim that videogame players are learning—they do not just click the controller or mouse mindlessly or move around randomly. Indeed, players are found to engage in unlocking rich storylines, employing complex problem-solving strategies and mastering the underlying systems of any given game or level. Researchers offer solid evidence that children learn important content, perspectives, and vital 21st-century skills from playing digital games (e.g., Salen, 2007; Lenhart, Kahne, Mid", "title": "" }, { "docid": "ddf197aa8b545181ea409d0ee28b52a6", "text": "We address the problem of instance-level semantic segmentation, which aims at jointly detecting, segmenting and classifying every individual object in an image. In this context, existing methods typically propose candidate objects, usually as bounding boxes, and directly predict a binary mask within each such proposal. As a consequence, they cannot recover from errors in the object candidate generation process, such as too small or shifted boxes. In this paper, we introduce a novel object segment representation based on the distance transform of the object masks. We then design an object mask network (OMN) with a new residual-deconvolution architecture that infers such a representation and decodes it into the final binary object mask. This allows us to predict masks that go beyond the scope of the bounding boxes and are thus robust to inaccurate object candidates. We integrate our OMN into a Multitask Network Cascade framework, and learn the resulting boundary-aware instance segmentation (BAIS) network in an end-to-end manner. Our experiments on the PASCAL VOC 2012 and the Cityscapes datasets demonstrate the benefits of our approach, which outperforms the state-of-the-art in both object proposal generation and instance segmentation.", "title": "" }, { "docid": "7fc92ce3f51a0ad3e300474e23cf7401", "text": "Dependency parsers are critical components within many NLP systems. However, currently available dependency parsers each exhibit at least one of several weaknesses, including high running time, limited accuracy, vague dependency labels, and lack of nonprojectivity support. Furthermore, no commonly used parser provides additional shallow semantic interpretation, such as preposition sense disambiguation and noun compound interpretation. In this paper, we present a new dependency-tree conversion of the Penn Treebank along with its associated fine-grain dependency labels and a fast, accurate parser trained on it. We explain how a non-projective extension to shift-reduce parsing can be incorporated into non-directional easy-first parsing. 
The parser performs well when evaluated on the standard test section of the Penn Treebank, outperforming several popular open source dependency parsers; it is, to the best of our knowledge, the first dependency parser capable of parsing more than 75 sentences per second at over 93% accuracy.", "title": "" }, { "docid": "e0f7f087a4d8a33c1260d4ed0558edc3", "text": "In this review paper, it is intended to summarize and compare the methods of automatic detection of microcalcifications in digitized mammograms used in various stages of the Computer Aided Detection systems (CAD). In particular, the pre processing and enhancement, bilateral subtraction techniques, segmentation algorithms, feature extraction, selection and classification, classifiers, Receiver Operating Characteristic (ROC); Free-response Receiver Operating Characteristic (FROC) analysis and their performances are studied and compared.", "title": "" }, { "docid": "36a9f1c016d0e2540460e28c4c846e9a", "text": "Nowadays PDF documents have become a dominating knowledge repository for both the academia and industry largely because they are very convenient to print and exchange. However, the methods of automated structure information extraction are yet to be fully explored and the lack of effective methods hinders the information reuse of the PDF documents. To enhance the usability for PDF-formatted electronic books, we propose a novel computational framework to analyze the underlying physical structure and logical structure. The analysis is conducted at both page level and document level, including global typographies, reading order, logical elements, chapter/section hierarchy and metadata. Moreover, two characteristics of PDF-based books, i.e., style consistency in the whole book document and natural rendering order of PDF files, are fully exploited in this paper to improve the conventional image-based structure extraction methods. This paper employs the bipartite graph as a common structure for modeling various tasks, including reading order recovery, figure and caption association, and metadata extraction. Based on the graph representation, the optimal matching (OM) method is utilized to find the global optima in those tasks. Extensive benchmarking using real-world data validates the high efficiency and discrimination ability of the proposed method.", "title": "" }, { "docid": "179c5bc5044d85c2597d41b1bd5658b3", "text": "Embedding models typically associate each word with a single real-valued vector, representing its different properties. Evaluation methods, therefore, need to analyze the accuracy and completeness of these properties in embeddings. This requires fine-grained analysis of embedding subspaces. Multi-label classification is an appropriate way to do so. We propose a new evaluation method for word embeddings based on multi-label classification given a word embedding. The task we use is finegrained name typing: given a large corpus, find all types that a name can refer to based on the name embedding. Given the scale of entities in knowledge bases, we can build datasets for this task that are complementary to the current embedding evaluation datasets in: they are very large, contain fine-grained classes, and allow the direct evaluation of embeddings without confounding factors like sentence context.", "title": "" }, { "docid": "2a8f2e8e4897f03c89d9e8a6bf8270f3", "text": "BACKGROUND\nThe aging of the population is an inexorable change that challenges governments and societies in every developed country. 
Based on clinical and empirical data, social isolation is found to be prevalent among elderly people, and it has negative consequences on the elderly's psychological and physical health. Targeting social isolation has become a focus area for policy and practice. Evidence indicates that contemporary information and communication technologies (ICT) have the potential to prevent or reduce the social isolation of elderly people via various mechanisms.\n\n\nOBJECTIVE\nThis systematic review explored the effects of ICT interventions on reducing social isolation of the elderly.\n\n\nMETHODS\nRelevant electronic databases (PsycINFO, PubMed, MEDLINE, EBSCO, SSCI, Communication Studies: a SAGE Full-Text Collection, Communication & Mass Media Complete, Association for Computing Machinery (ACM) Digital Library, and IEEE Xplore) were systematically searched using a unified strategy to identify quantitative and qualitative studies on the effectiveness of ICT-mediated social isolation interventions for elderly people published in English between 2002 and 2015. Narrative synthesis was performed to interpret the results of the identified studies, and their quality was also appraised.\n\n\nRESULTS\nTwenty-five publications were included in the review. Four of them were evaluated as rigorous research. Most studies measured the effectiveness of ICT by measuring specific dimensions rather than social isolation in general. ICT use was consistently found to affect social support, social connectedness, and social isolation in general positively. The results for loneliness were inconclusive. Even though most were positive, some studies found a nonsignificant or negative impact. More importantly, the positive effect of ICT use on social connectedness and social support seemed to be short-term and did not last for more than six months after the intervention. The results for self-esteem and control over one's life were consistent but generally nonsignificant. ICT was found to alleviate the elderly's social isolation through four mechanisms: connecting to the outside world, gaining social support, engaging in activities of interests, and boosting self-confidence.\n\n\nCONCLUSIONS\nMore well-designed studies that contain a minimum risk of research bias are needed to draw conclusions on the effectiveness of ICT interventions for elderly people in reducing their perceived social isolation as a multidimensional concept. The results of this review suggest that ICT could be an effective tool to tackle social isolation among the elderly. However, it is not suitable for every senior alike. Future research should identify who among elderly people can most benefit from ICT use in reducing social isolation. Research on other types of ICT (eg, mobile phone-based instant messaging apps) should be conducted to promote understanding and practice of ICT-based social-isolation interventions for elderly people.", "title": "" }, { "docid": "84cf1ce60ad3eda955abc5ca0ee4fe5b", "text": "Despite its great promise, neuroimaging has yet to substantially impact clinical practice and public health. However, a developing synergy between emerging analysis techniques and data-sharing initiatives has the potential to transform the role of neuroimaging in clinical applications. We review the state of translational neuroimaging and outline an approach to developing brain signatures that can be shared, tested in multiple contexts and applied in clinical settings. 
The approach rests on three pillars: (i) the use of multivariate pattern-recognition techniques to develop brain signatures for clinical outcomes and relevant mental processes; (ii) assessment and optimization of their diagnostic value; and (iii) a program of broad exploration followed by increasingly rigorous assessment of generalizability across samples, research contexts and populations. Increasingly sophisticated models based on these principles will help to overcome some of the obstacles on the road from basic neuroscience to better health and will ultimately serve both basic and applied goals.", "title": "" }, { "docid": "8d5de5dd51d5000184702d91afec5c18", "text": "Deep networks trained on large-scale data can learn transferable features to promote learning multiple tasks. As deep features eventually transition from general to specific along deep networks, a fundamental problem is how to exploit the relationship across different tasks and improve the feature transferability in the task-specific layers. In this paper, we propose Deep Relationship Networks (DRN) that discover the task relationship based on novel tensor normal priors over the parameter tensors of multiple task-specific layers in deep convolutional networks. By jointly learning transferable features and task relationships, DRN is able to alleviate the dilemma of negative-transfer in the feature layers and under-transfer in the classifier layer. Extensive experiments show that DRN yields state-of-the-art results on standard multi-task learning benchmarks.", "title": "" }, { "docid": "f8f00576f55e24a06b6c930c0cc39a85", "text": "An integrated navigation information system must know continuously the current position with a good precision. The required performance of the positioning module is achieved by using a cluster of heterogeneous sensors whose measurements are fused. The most popular data fusion method for positioning problems is the extended Kalman filter. The extended Kalman filter is a variation of the Kalman filter used to solve non-linear problems. Recently, an improvement to the extended Kalman filter has been proposed, the unscented Kalman filter. This paper describes an empirical analysis evaluating the performances of the unscented Kalman filter and comparing them with the extended Kalman filter's performances.", "title": "" }, { "docid": "0734e55ef60e9e1ef490c03a23f017e8", "text": "High-voltage (HV) pulses are used in pulsed electric field (PEF) applications to provide an effective electroporation process, a process in which harmful microorganisms are disinfected when subjected to a PEF. Depending on the PEF application, different HV pulse specifications are required such as the pulse-waveform shape, the voltage magnitude, the pulse duration, and the pulse repetition rate. In this paper, a generic pulse-waveform generator (GPG) is proposed, and the GPG topology is based on half-bridge modular multilevel converter (HB-MMC) cells. The GPG topology is formed of four identical arms of series-connected HB-MMC cells forming an H-bridge. Unlike the conventional HB-MMC-based converters in HVdc transmission, the GPG load power flow is not continuous which leads to smaller size cell capacitors utilization; hence, smaller footprint of the GPG is achieved. The GPG topology flexibility allows the controller software to generate a basic multilevel waveform which can be manipulated to generate the commonly used PEF pulse waveforms. Therefore, the proposed topology offers modularity, redundancy, and scalability. 
The viability of the proposed GPG converter is validated by MATLAB/Simulink simulation and experimentation.", "title": "" }, { "docid": "f87e8f9d733ed60cedfda1cbfe176cbf", "text": "Image set classification finds its applications in a number of real-life scenarios such as classification from surveillance videos, multi-view camera networks and personal albums. Compared with single image based classification, it offers more promises and has therefore attracted significant research attention in recent years. Unlike many existing methods which assume images of a set to lie on a certain geometric surface, this paper introduces a deep learning framework which makes no such prior assumptions and can automatically discover the underlying geometric structure. Specifically, a Template Deep Reconstruction Model (TDRM) is defined whose parameters are initialized by performing unsupervised pre-training in a layer-wise fashion using Gaussian Restricted Boltzmann Machines (GRBMs). The initialized TDRM is then separately trained for images of each class and class-specific DRMs are learnt. Based on the minimum reconstruction errors from the learnt class-specific models, three different voting strategies are devised for classification. Extensive experiments are performed to demonstrate the efficacy of the proposed framework for the tasks of face and object recognition from image sets. Experimental results show that the proposed method consistently outperforms the existing state of the art methods.", "title": "" }, { "docid": "78cd033a67f703b9e50c75e418a8c8e7", "text": "Volatility in stock markets has been extensively studied in the applied finance literature. In this paper, Artificial Neural Network models based on various back propagation algorithms have been constructed to predict volatility in the Indian stock market through volatility of NIFTY returns and volatility of gold returns. This model considers India VIX, CBOE VIX, volatility of crude oil returns (CRUDESDR), volatility of DJIA returns (DJIASDR), volatility of DAX returns (DAXSDR), volatility of Hang Seng returns (HANGSDR) and volatility of Nikkei returns (NIKKEISDR) as predictor variables. Three sets of experiments have been performed over three time periods to judge the effectiveness of the approach.", "title": "" }, { "docid": "d94c7ff18e4ff21d15af109002ab2932", "text": "As the proliferation of technology dramatically infiltrates all aspects of modern life, in many ways the world is becoming so dynamic and complex that technological capabilities are overwhelming human capabilities to optimally interact with and leverage those technologies. Fortunately, these technological advancements have also driven an explosion of neuroscience research over the past several decades, presenting engineers with a remarkable opportunity to design and develop flexible and adaptive brain-based neurotechnologies that integrate with and capitalize on human capabilities and limitations to improve human-system interactions. Major forerunners of this conception are brain-computer interfaces (BCIs), which to this point have been largely focused on improving the quality of life for particular clinical populations and include, for example, applications for advanced communications with paralyzed or “locked in” patients as well as the direct control of prostheses and wheelchairs. Near-term applications are envisioned that are primarily task oriented and are targeted to avoid the most difficult obstacles to development. 
In the farther term, a holistic approach to BCIs will enable a broad range of task-oriented and opportunistic applications by leveraging pervasive technologies and advanced analytical approaches to sense and merge critical brain, behavioral, task, and environmental information. Communications and other applications that are envisioned to be broadly impacted by BCIs are highlighted; however, these represent just a small sample of the potential of these technologies.", "title": "" }, { "docid": "85bda0726bf53015e535738711785f20", "text": "BACKGROUND AND AIM\nThere has recently been a growing interest towards patients' affective and emotional needs, especially in relational therapies, which are considered vital as to increase the understanding of those needs and patients' well-being. In particular, we paid attention to those patients who are forced to spend the last phase of their existence in residential facilities, namely elderly people in nursing homes, who often feel marginalized, useless, depressed, unstimulated or unable to communicate. The aim of this study is to verify the effectiveness of pet therapy in improving well-being in the elderly living in a nursing home.\n\n\nMETHODS\nThis is a longitudinal study with before and after intervention variables measurement in two groups of patients of a nursing home for elderly people. One group followed an AAI intervention (experimental group) the other one did not (control group). As to perform an assessment of well-being we measured the following dimensions in patients: anxiety (HAM-A), depression (GDS), apathy (AES), loneliness (UCLA), and quality of life (QUALID). Both groups filled the questionnaires as to measure the target variables (time 0). Once finished the scheduled meetings (time 1), all the participants, including the control group, filled the same questionnaires.\n\n\nRESULTS\nIn accordance with scientific evidence the results confirmed a significant reduction of the measured variables. Especially for the quality of life, which showed a greater reduction than the other.\n\n\nCONCLUSIONS\nThe implementation and success of the Pet Therapy could have a great emotional and social impact, bringing relief to patients and their family members, but also to health professionals.", "title": "" }, { "docid": "c5efce1facffb845b175018c29fef49a", "text": "0957-4174/$ see front matter 2013 Elsevier Ltd. A http://dx.doi.org/10.1016/j.eswa.2013.02.007 ⇑ Corresponding author. Tel.: +3", "title": "" }, { "docid": "f2b3643ca7a9a1759f038f15847d7617", "text": "Despite significant advances in image segmentation techniques, evaluation of these techniques thus far has been largely subjective. Typically, the effectiveness of a new algorithm is demonstrated only by the presentation of a few segmented images and is otherwise left to subjective evaluation by the reader. Little effort has been spent on the design of perceptually correct measures to compare an automatic segmentation of an image to a set of hand-segmented examples of the same image. This paper demonstrates how a modification of the Rand index, the Normalized Probabilistic Rand (NPR) index, meets the requirements of largescale performance evaluation of image segmentation. We show that the measure has a clear probabilistic interpretation as the maximum likelihood estimator of an underlying Gibbs model, can be correctly normalized to account for the inherent similarity in a set of ground truth images, and can be computed efficiently for large datasets. 
Results are presented on images from the publicly available Berkeley Segmentation dataset.", "title": "" }, { "docid": "937bb3c066500ddffe8d3d78b3580c26", "text": "Multimodal semantic representation is an evolving area of research in natural language processing as well as computer vision. Combining or integrating perceptual information, such as visual features, with linguistic features is recently being actively studied. This paper presents a novel bimodal autoencoder model for multimodal representation learning: the autoencoder learns in order to enhance linguistic feature vectors by incorporating the corresponding visual features. During the runtime, owing to the trained neural network, visually enhanced multimodal representations can be achieved even for words for which direct visual-linguistic correspondences are not learned. The empirical results obtained with standard semantic relatedness tasks demonstrate that our approach is generally promising. We further investigate the potential efficacy of the enhanced word embeddings in discriminating antonyms and synonyms from vaguely related words.", "title": "" } ]
scidocsrr
a57d62a7e1eab77506440bedd7651e99
Generating Consistent Land Surface Temperature and Emissivity Products Between ASTER and MODIS Data for Earth Science Research
[ { "docid": "8085eb4cf8a5e9eb6f506c475b4500ba", "text": "The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) scanner on NASA’s Earth Observing System (EOS)-AM1 satellite (launch scheduled for 1998) will collect five bands of thermal infrared (TIR) data with a noise equivalent temperature difference ( NE T ) of 0.3 K to estimate surface temperatures and emissivity spectra, especially over land, where emissivities are not known in advance. Temperature/emissivity separation (TES) is difficult because there are five measurements but six unknowns. Various approaches have been used to constrain the extra degree of freedom. ASTER’s TES algorithm hybridizes three established algorithms, first estimating the normalized emissivities and then calculating emissivity band ratios. An empirical relationship predicts the minimum emissivity from the spectral contrast of the ratioed values, permitting recovery of the emissivity spectrum. TES uses an iterative approach to remove reflected sky irradiance. Based on numerical simulation, TES should be able to recover temperatures within about 1.5 K and emissivities within about 0.015. Validation using airborne simulator images taken over playas and ponds in central Nevada demonstrates that, with proper atmospheric compensation, it is possible to meet the theoretical expectations. The main sources of uncertainty in the output temperature and emissivity images are the empirical relationship between emissivity values and spectral contrast, compensation for reflected sky irradiance, and ASTER’s precision, calibration, and atmospheric compensation.", "title": "" } ]
[ { "docid": "9faec965b145160ee7f74b80a6c2d291", "text": "Several skin substitutes are available that can be used in the management of hand burns; some are intended as temporary covers to expedite healing of shallow burns and others are intended to be used in the surgical management of deep burns. An understanding of skin biology and the relative benefits of each product are needed to determine the optimal role of these products in hand burn management.", "title": "" }, { "docid": "2c9f7053d9bcd6bc421b133dd7e62d08", "text": "Recurrent neural networks (RNN) combined with attention mechanism has proved to be useful for various NLP tasks including machine translation, sequence labeling and syntactic parsing. The attention mechanism is usually applied by estimating the weights (or importance) of inputs and taking the weighted sum of inputs as derived features. Although such features have demonstrated their effectiveness, they may fail to capture the sequence information due to the simple weighted sum being used to produce them. The order of the words does matter to the meaning or the structure of the sentences, especially for syntactic parsing, which aims to recover the structure from a sequence of words. In this study, we propose an RNN-based attention to capture the relevant and sequence-preserved features from a sentence, and use the derived features to perform the dependency parsing. We evaluated the graph-based and transition-based parsing models enhanced with the RNN-based sequence-preserved attention on the both English PTB and Chinese CTB datasets. The experimental results show that the enhanced systems were improved with significant increase in parsing accuracy.", "title": "" }, { "docid": "6cba2e960c0c4f3999ce400d93e42bac", "text": "Phylodiversity measures summarise the phylogenetic diversity patterns of groups of organisms. By using branches of the tree of life, rather than its tips (e.g., species), phylodiversity measures provide important additional information about biodiversity that can improve conservation policy and outcomes. As a biodiverse nation with a strong legislative and policy framework, Australia provides an opportunity to use phylogenetic information to inform conservation decision-making. We explored the application of phylodiversity measures across Australia with a focus on two highly biodiverse regions, the south west of Western Australia (SWWA) and the South East Queensland bioregion (SEQ). We analysed seven diverse groups of organisms spanning five separate phyla on the evolutionary tree of life, the plant genera Acacia and Daviesia, mammals, hylid frogs, myobatrachid frogs, passerine birds, and camaenid land snails. We measured species richness, weighted species endemism (WE) and two phylodiversity measures, phylogenetic diversity (PD) and phylogenetic endemism (PE), as well as their respective complementarity scores (a measure of gains and losses) at 20 km resolution. Higher PD was identified within SEQ for all fauna groups, whereas more PD was found in SWWA for both plant groups. PD and PD complementarity were strongly correlated with species richness and species complementarity for most groups but less so for plants. PD and PE were found to complement traditional species-based measures for all groups studied: PD and PE follow similar spatial patterns to richness and WE, but highlighted different areas that would not be identified by conventional species-based biodiversity analyses alone. 
The application of phylodiversity measures, particularly the novel weighted complementary measures considered here, in conservation can enhance protection of the evolutionary history that contributes to present day biodiversity values of areas. Phylogenetic measures in conservation can include important elements of biodiversity in conservation planning, such as evolutionary potential and feature diversity that will improve decision-making and lead to better biodiversity conservation outcomes.", "title": "" }, { "docid": "061ac4487fba7837f44293a2d20b8dd9", "text": "This paper describes a model of cooperative behavior and describes how such a model can be applied in a natural language understanding system. We assume that agents attempt to recognize the plans of other agents and, then, use this plan when deciding what response to make. In particular, we show that, given a setting in which purposeful dialogues occur, this model can account for responses that provide more information that explicitly requested and for appropriate responses to both short sentence fragments and indirect speech acts.", "title": "" }, { "docid": "1f3e600ce5be2a55234c11e19e11cb67", "text": "In this paper, we propose a noise robust speech recognition system built using generalized distillation framework. It is assumed that during training, in addition to the training data, some kind of ”privileged” information is available and can be used to guide the training process. This allows to obtain a system which at test time outperforms those built on regular training data alone. In the case of noisy speech recognition task, the privileged information is obtained from a model, called ”teacher”, trained on clean speech only. The regular model, called ”student”, is trained on noisy utterances and uses teacher’s output for the corresponding clean utterances. Thus, for this framework a parallel clean/noisy speech data are required. We experimented on the Aurora2 database which provides such kind of data. Our system uses hybrid DNN-HMM acoustic model where neural networks provide HMM state probabilities during decoding. The teacher DNN is trained on the clean data, while the student DNN is trained using multi-condition (various SNRs) data. The student DNN loss function combines the targets obtained from forced alignment of the training data and the outputs of the teacher DNN when fed with the corresponding clean features. Experimental results clearly show that distillation framework is effective and allows to achieve significant reduction in the word error rate.", "title": "" }, { "docid": "79453a45e1376e1d4cd08002b5e61ac0", "text": "Appropriate selection of learning algorithms is essential for the success of data mining. Meta-learning is one approach to achieve this objective by identifying a mapping from data characteristics to algorithm performance. Appropriate data characterization is, thus, of vital importance for the meta-learning. To this effect, a variety of data characterization techniques, based on three strategies including simple measure, statistical measure and information theory based measure, have been developed, however, the quality of them is still needed to be improved. This paper presents new measures to characterise datasets for meta-learning based on the idea to capture the characteristics from the structural shape and size of the decision tree induced from the dataset. 
Their effectiveness is illustrated by comparing to the results obtained by the classical data characteristics techniques, including DCT that is the most wide used technique in meta-learning and Landmarking that is the most recently developed method and produced better performance comparing to DCT.", "title": "" }, { "docid": "8971e1e9bc14663c8ae50d2640140f33", "text": "Designing for reflection is becoming of increasing interest to HCI researchers, especially as digital technologies move to supporting broader professional and quality of life issues. However, the term 'reflection' is being used and designed for in diverse ways and often with little reference to vast amount of literature on the topic outside of HCI. Here we synthesize this literature into a framework, consisting of aspects such as purposes of reflection, conditions for reflection and levels of reflection (where the levels capture the behaviours and activities associated with reflection). We then show how technologies can support these different aspects and conclude with open questions that can guide a more systematic approach to how we understand and design for support of reflection.", "title": "" }, { "docid": "94a59f1c20a6476035a00d86c222a08b", "text": "Lateral transshipments within an inventory system are stock movements between locations of the same echelon. These transshipments can be conducted periodically at predetermined points in time to proactively redistribute stock, or they can be used reactively as a method of meeting demand which cannot be satisfied from stock on hand. The elements of an inventory system considered, e.g. size, cost structures and service level definition, all influence the best method of transshipping. Models of many different systems have been considered. This paper provides a literature review which categorizes the research to date on lateral transshipments, so that these differences can be understood and gaps within the literature can be identified.", "title": "" }, { "docid": "ff705a36e71e2aa898e99fbcfc9ec9d2", "text": "This paper presents a design concept for smart home automation system based on the idea of the internet of things (IoT) technology. The proposed system has two scenarios where first one is denoted as a wireless based and the second is a wire-line based scenario. Each scenario has two operational modes for manual and automatic use. In Case of the wireless scenario, Arduino-Uno single board microcontroller as a central controller for home appliances is applied. Cellular phone with Matlab-GUI platform for monitoring and controlling processes through Wi-Fi communication technology is addressed. For the wire-line scenario, field-programmable gate array (FPGA) kit as a main controller is used. Simulation and hardware realization for the proposed system show its reliability and effectiveness.", "title": "" }, { "docid": "d86633f3add015ffc7de96cb4a6e3802", "text": "Summary • Animator and model checker for B Methode • Model & constrained based checker • ProB findes correct values for operation arguments • ProB enables user to uncover errors in specifications", "title": "" }, { "docid": "7190e8e6f6c061bed8589719b7d59e0d", "text": "Image-level feature descriptors obtained from convolutional neural networks have shown powerful representation capabilities for image retrieval. In this paper, we present an unsupervised method to aggregate deep convolutional features into compact yet discriminative image vectors by simulating the dynamics of heat diffusion. 
A distinctive problem in image retrieval is that repetitive or bursty features tend to dominate feature representations, leading to less than ideal matches. We show that by considering each deep feature as a heat source, our unsupervised aggregation method is able to avoiding over-representation of bursty features. We additionally provide a practical solution for the proposed aggregation method, and further show the efficiency of our method in experimental evaluation. Finally, we extensively evaluate the proposed approach with pre-trained and fine-tuned deep networks on common public benchmarks, and show superior performance compared to previous work. Image retrieval has always been an attractive research topic in the field of computer vision. By allowing users to search similar images from a large database of digital images, it provides a natural and flexible interface for image archiving and browsing. Convolutional Neural Networks (CNNs) have shown remarkable accuracy in tasks such as image classification, and object detection. Recent research has also shown positive results of using CNNs on image retrieval (Babenko and Lempitsky 2015; Kalantidis, Mellina, and Osindero 2016; Hoang et al. 2017). However, unlike image classification approaches which often use global feature vectors produced by fully connected layers, these methods extract local features depicting image patches from the outputs of convolutional layers and aggregate these features into compact (a few hundred dimensions) image-level descriptors. Once meaningful and representative image-level descriptors are defined, visually similar images are retrieved by computing similarities between pre-computed database feature representations and query representations. In this paper we devise a method to avoid overrepresenting bursty features. Inspired by an observation of similar phenomena in textual data, Jegou et al. (Jégou, Douze, and Schmid 2009) identified burstiness as the phenomenon by which overly repetitive features within an instance tend to dominate the instance feature representation. In order to alleviate this issue, we propose a feature aggregation approach that emulates the dynamics of heat diffusion. The idea is to model feature maps as a heat system where we weight highly the features leading to low system temperatures. This is because that these features are less connected to other features, and therefore they are more distinctive. The dynamics of the temperature in such system can be estimated using the partial differential equation induced by the heat equation. Heat diffusion, and more specifically anisotropic diffusion, has been used successfully in various image processing and computer vision tasks. Ranging from the classical work of Perona and Malik (Perona and Malik 1990) to further applications in image smoothing, image regularization, image co-segmentation, and optical flow estimation (Zhang, Zheng, and Cai 2010; Tschumperle and Deriche 2005; Kim et al. 2011; Bruhn, Weickert, and Schnörr 2005). However, to our knowledge, it has not been applied to weight features from the outputs of a deep convolutional neural network. We show that by combining this classical image processing technique with a deep learning model, we are able to obtain significant gains against previous work. 
Our contributions can be summarized as follows: • By greedily considering each deep feature as a heat source and enforcing the temperature of the system be a constant within each heat source, we propose a novel efficient feature weighting approach to reduce the undesirable influence of bursty features. • We provide a practical solution to computing weights for our feature weighting method. Additionally, we conduct extensive quantitative evaluations on commonly used image retrieval benchmarks, and demonstrate substantial performance improvement over existing unsupervised methods for feature aggregation.", "title": "" }, { "docid": "04c367bfe113af139c30e167f393acec", "text": "A novel planar magic-T using an E-plane substrate integrate waveguide (SIW) power divider and a SIW-slotline transition is proposed in this letter. Due to the metal ground between the two input/output ports, the E-plane SIW power divider has a 180° reverse phase characteristic. A SIW-slotline transition is utilized to realize the H-plane input/output port of the magic-T. Good agreement between the measured and simulated results indicate that the planar magic-T has a fractional bandwidth (FBW) of 18% (13.2-15.8 GHz), and the amplitude and phase imbalances are less than 0.24 dB and 1.5°, respectively.", "title": "" }, { "docid": "f5cb684cfff16812bafd83286a51b71f", "text": "OBJECTIVES\nTo assess the factors, motivations, and nonacademic influences that affected the choice of major among pharmacy and nonpharmacy undergraduate students.\n\n\nMETHODS\nA survey was administered to 618 pharmacy and nonpharmacy majors to assess background and motivational factors that may have influenced their choice of major. The sample consisted of freshman and sophomore students enrolled in a required speech course.\n\n\nRESULTS\nAfrican-American and Hispanic students were less likely to choose pharmacy as a major than Caucasians, whereas Asian-Americans were more likely to choose pharmacy as a major. Pharmacy students were more likely to be interested in science and math than nonpharmacy students.\n\n\nCONCLUSION\nStudents' self-reported racial/ethnic backgrounds influence their decision of whether to choose pharmacy as their academic major. Results of this survey provide further insight into developing effective recruiting strategies and enhancing the marketing efforts of academic institutions.", "title": "" }, { "docid": "b6c85badcc58249dffbbd3cebf2edd75", "text": "INTRODUCTION\nWith the continued expansion of robotically assisted procedures, general surgery residents continue to receive more exposure to this new technology as part of their training. There are currently no guidelines or standardized training requirements for robot-assisted procedures during general surgical residency. The aim of this study was to assess the effect of this new technology on general surgery training from the residents' perspective.\n\n\nMETHODS\nAn anonymous, national, web-based survey was conducted on residents enrolled in general surgery training in 2013. The survey was sent to 240 Accreditation Council for Graduate Medical Education-approved general surgery training programs.\n\n\nRESULTS\nOverall, 64% of the responding residents were men and had an average age of 29 years. Half of the responses were from postgraduate year 1 (PGY1) and PGY2 residents, and the remainder was from the PGY3 level and above. Overall, 50% of the responses were from university training programs, 32% from university-affiliated programs, and 18% from community-based programs. 
More than 96% of residents noted the availability of the surgical robot system at their training institution. Overall, 63% of residents indicated that they had participated in robotic surgical cases. Most responded that they had assisted in 10 or fewer robotic cases with the most frequent activities being assisting with robotic trocar placement and docking and undocking the robot. Only 18% reported experience with operating the robotic console. More senior residents (PGY3 and above) were involved in robotic cases compared with junior residents (78% vs 48%, p < 0.001). Overall, 60% of residents indicated that they received no prior education or training before their first robotic case. Approximately 64% of residents reported that formal training in robotic surgery was important in residency training and 46% of residents indicated that robotic-assisted cases interfered with resident learning. Only 11% felt that robotic-assisted cases would replace conventional laparoscopic surgery in the future.\n\n\nCONCLUSIONS\nThis study illustrates that although the most residents have a robot at their institution and have participated in robotic surgery cases, very few residents received formal training before participating in a robotic case.", "title": "" }, { "docid": "4b96679173c825db7bc334449b6c4b83", "text": "This article provides the first survey of computational models of emotion in reinforcement learning (RL) agents. The survey focuses on agent/robot emotions, and mostly ignores human user emotions. Emotions are recognized as functional in decision-making by influencing motivation and action selection. Therefore, computational emotion models are usually grounded in the agent’s decision making architecture, of which RL is an important subclass. Studying emotions in RL-based agents is useful for three research fields. For machine learning (ML) researchers, emotion models may improve learning efficiency. For the interactive ML and human–robot interaction community, emotions can communicate state and enhance user investment. Lastly, it allows affective modelling researchers to investigate their emotion theories in a successful AI agent class. This survey provides background on emotion theory and RL. It systematically addresses (1) from what underlying dimensions (e.g. homeostasis, appraisal) emotions can be derived and how these can be modelled in RL-agents, (2) what types of emotions have been derived from these dimensions, and (3) how these emotions may either influence the learning efficiency of the agent or be useful as social signals. We also systematically compare evaluation criteria, and draw connections to important RL sub-domains like (intrinsic) motivation and model-based RL. In short, this survey provides both a practical overview for engineers wanting to implement emotions in their RL agents, and identifies challenges and directions for future emotion-RL research.", "title": "" }, { "docid": "0e679dfd2ff8ced7c1391486d4329253", "text": "A significant portion of information needs in web search target entities. These may come in different forms or flavours, ranging from short keyword queries to more verbose requests, expressed in natural language. We address the task of automatically annotating queries with target types from an ontology. The identified types can subsequently be used, e.g., for creating semantically more informed query and retrieval models, filtering results, or directing the requests to specific verticals. Our study makes the following contributions. 
First, we formalise the task of hierarchical target type identification, argue that it is best viewed as a ranking problem, and propose multiple evaluation metrics. Second, we develop a purpose-built test collection by hand-annotating over 300 queries, from various recent entity search benchmarking campaigns, with target types from the DBpedia ontology. Finally, we introduce and examine two baseline models, inspired by federated search techniques. We show that these methods perform surprisingly well when target types are limited to a flat list of top level categories; finding the right level of granularity in the hierarchy, however, is particularly challenging and requires further investigation.", "title": "" }, { "docid": "2e0f71364c4733c90d463579916f122c", "text": "The History of HCI is briefly reviewed together with three HCI models and structure including CSCW, CSCL and CSCR. It is shown that a number of authorities consider HCI to be a fragmented discipline with no agreed set of unifying design principles. An analysis of usability criteria based upon citation frequency of authors is performed in order to discover the eight most recognised HCI principles.", "title": "" }, { "docid": "8c6622b02eb7e4e11ec684d860456056", "text": "It is the purpose of this viewpoint article to delineate the regulatory network of growth hormone (GH), insulin, and insulin-like growth factor-1 (IGF-1) signalling during puberty, associated hormonal changes in adrenal and gonadal androgen metabolism, and the impact of dietary factors and smoking involved in the pathogenesis of acne. The key regulator IGF-1 rises during puberty by the action of increased GH secretion and correlates well with the clinical course of acne. In acne patients, associations between serum levels of IGF-1, dehydroepiandrosterone sulphate, dihydrotestosterone, acne lesion counts and facial sebum secretion rate have been reported. IGF-1 stimulates 5alpha-reductase, adrenal and gonadal androgen synthesis, androgen receptor signal transduction, sebocyte proliferation and lipogenesis. Milk consumption results in a significant increase in insulin and IGF-1 serum levels comparable with high glycaemic food. Insulin induces hepatic IGF-1 secretion, and both hormones amplify the stimulatory effect of GH on sebocytes and augment mitogenic downstream signalling pathways of insulin receptors, IGF-1 receptor and fibroblast growth factor receptor-2b. Acne is proposed to be an IGF-1-mediated disease, modified by diets and smoking increasing insulin/IGF1-signalling. Metformin treatment, and diets low in milk protein content and glycaemic index reduce increased IGF-1 signalling. Persistent acne in adulthood with high IGF-1 levels may be considered as an indicator for increased risk of cancer, which may require appropriate dietary intervention as well as treatment with insulin-sensitizing agents.", "title": "" }, { "docid": "8b3557219674c8441e63e9b0ab459c29", "text": "his paper is focused on comparison of various decision tree classification algorithms using WEKA tool. Data mining tools such as classification, clustering, association and neural network solve large amount of problem. These are all open source tools, we directly communicate with each tool or by java code. In this paper we discuss on classification technique of data mining. In classification, various techniques are present such as bayes, functions, lazy, rules and tree etc. . Decision tree is one of the most frequently used classification algorithm. 
Decision tree classification with Waikato Environment for Knowledge Analysis (WEKA) is the simplest way to mining information from huge database. This work shows the process of WEKA analysis of file converts, step by step process of weka execution, selection of attributes to be mined and comparison with Knowledge Extraction of Evolutionary Learning . I took database [1] and execute in weka software. The conclusion of the paper shows the comparison among all type of decision tree algorithms by weka tool.", "title": "" }, { "docid": "b76af76207fa3ef07e8f2fbe6436dca0", "text": "Face recognition applications for airport security and surveillance can benefit from the collaborative coupling of mobile and cloud computing as they become widely available today. This paper discusses our work with the design and implementation of face recognition applications using our mobile-cloudlet-cloud architecture named MOCHA and its initial performance results. The challenge lies with how to perform task partitioning from mobile devices to cloud and distribute compute load among cloud servers (cloudlet) to minimize the response time given diverse communication latencies and server compute powers. Our preliminary simulation results show that optimal task partitioning algorithms significantly affect response time with heterogeneous latencies and compute powers. Motivated by these results, we design, implement, and validate the basic functionalities of MOCHA as a proof-of-concept, and develop algorithms that minimize the overall response time for face recognition. Our experimental results demonstrate that high-powered cloudlets are technically feasible and indeed help reduce overall processing time when face recognition applications run on mobile devices using the cloud as the backend servers.", "title": "" } ]
scidocsrr
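Note: the WEKA passage in the record above compares several decision tree learners on a shared dataset. As a rough, non-authoritative illustration of that kind of comparison outside WEKA, the Python sketch below uses scikit-learn; the file name data.csv, its "label" column, and the two chosen learners are placeholder assumptions, not details taken from the record.

# Illustrative sketch only: cross-validates two tree-based classifiers,
# loosely mirroring the WEKA comparison described in the abstract above.
# "data.csv" and its "label" column are assumed placeholders.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv("data.csv")
X, y = df.drop(columns=["label"]), df["label"]

models = {
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10)  # 10-fold cross-validation
    print(f"{name}: mean accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")

Running it prints one cross-validated accuracy line per model, which is roughly the kind of comparison table the abstract describes.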
d4306bb0059d1418f0cb09241742f867
Enterprise Architecture Management Patterns for Enterprise Architecture Visioning
[ { "docid": "73fdbdbff06b57195cde51ab5135ccbe", "text": "1 Abstract This paper describes five widely-applicable business strategy patterns. The initiate patterns where inspired Michael Porter's work on competitive strategy (1980). By applying the pattern form we are able to explore the strategies and consequences in a fresh light. The patterns form part of a larger endeavour to apply pattern thinking to the business domain. This endeavour seeks to map the business domain in patterns, this involves develop patterns, possibly based on existing literature, and mapping existing patterns into a coherent model of the business domain. If you find the paper interesting you might be interested in some more patterns that are currently (May 2005) in development. These describe in more detail how these strategies can be implemented: This paper is one of the most downloaded pieces on my website. I'd be interested to know more about who is downloading the paper, what use your making of it and any comments you have on it-allan@allankelly.net. Cost Leadership Build an organization that can produce your chosen product more cheaply than anyone else. You can then choose to undercut the opposition (and sell more) or sell at the same price (and make more profit per unit.) Differentiated Product Build a product that fulfils the same functions as your competitors but is clearly different, e.g. it is better quality, novel design, or carries a brand name. Customer will be prepared to pay more for your product than the competition. Market Focus You can't compete directly on cost or differentiation with the market leader; so, focus on a niche in the market. The niche will be smaller than the overall market (so sales will be lower) but the customer requirements will be different, serve these customers requirements better then the mass market and they will buy from you again and again. Sweet Spot Customers don't always want the best or the cheapest, so, produce a product that combines elements of differentiation with reasonable cost so you offer superior value. However, be careful, customer tastes", "title": "" } ]
[ { "docid": "3129b636e3739281ba59721765eeccb9", "text": "Despite the rapid adoption of Facebook as a means of photo sharing, minimal research has been conducted to understand user gratification behind this activity. In order to address this gap, the current study examines users’ gratifications in sharing photos on Facebook by applying Uses and Gratification (U&G) theory. An online survey completed by 368 respondents identified six different gratifications, namely, affection, attention seeking, disclosure, habit, information sharing, and social influence, behind sharing digital photos on Facebook. Some of the study’s prominent findings were: age was in positive correlation with disclosure and social influence gratifications; gender differences were identified among habit and disclosure gratifications; number of photos shared was negatively correlated with habit and information sharing gratifications. The study’s implications can be utilized to refine existing and develop new features and services bridging digital photos and social networking services.", "title": "" }, { "docid": "ae73f7c35c34050b87d8bf2bee81b620", "text": "D esigning a complex Web site so that it readily yields its information is a difficult task. The designer must anticipate the users' needs and structure the site accordingly. Yet users may have vastly differing views of the site's information, their needs may change over time, and their usage patterns may violate the designer's initial expectations. As a result, Web sites are all too often fossils cast in HTML, while user navigation is idiosyncratic and evolving. Understanding user needs requires understanding how users view the data available and how they actually use the site. For a complex site this can be difficult since user tests are expensive and time-consuming, and the site's server logs contain massive amounts of data. We propose a Web management assistant: a system that can process massive amounts of data about site usage Examining the potential use of automated adaptation to improve Web sites for visitors.", "title": "" }, { "docid": "251a47eb1a5307c5eba7372ce09ea641", "text": "A new class of target link flooding attacks (LFA) can cut off the Internet connections of a target area without being detected because they employ legitimate flows to congest selected links. Although new mechanisms for defending against LFA have been proposed, the deployment issues limit their usages since they require modifying routers. In this paper, we propose LinkScope, a novel system that employs both the end-to-end and the hopby-hop network measurement techniques to capture abnormal path performance degradation for detecting LFA and then correlate the performance data and traceroute data to infer the target links or areas. Although the idea is simple, we tackle a number of challenging issues, such as conducting large-scale Internet measurement through noncooperative measurement, assessing the performance on asymmetric Internet paths, and detecting LFA. We have implemented LinkScope with 7174 lines of C codes and the extensive evaluation in a testbed and the Internet show that LinkScope can quickly detect LFA with high accuracy and low false positive rate.", "title": "" }, { "docid": "33cab03ab9773efe22ba07dd461811ef", "text": "This paper describes a real-time feature-based stereo SLAM system that is robust and accurate in a wide variety of conditions –indoors, outdoors, with dynamic objects, changing light conditions, fast robot motions and large-scale loops. 
Our system follows a parallel-tracking-and-mapping strategy: a tracking thread estimates the camera pose at frame rate; and a mapping thread updates a keyframe-based map at a lower frequency. The stereo constraints of our system allow a robust initialization –avoiding the well-known bootstrapping problem in monocular systems– and the recovery of the real scale. Both aspects are essential for its practical use in real robotic systems that interact with the physical world. In this paper we provide the implementation details, an exhaustive evaluation of the system in public datasets and a comparison of most state-of-the-art feature detectors and descriptors on the presented system. For the benefit of the community, its code for ROS (Robot Operating System) has been released.", "title": "" }, { "docid": "815fe60934f0313c56e631d73b998c95", "text": "The scientific credibility of findings from clinical trials can be undermined by a range of problems including missing data, endpoint switching, data dredging, and selective publication. Together, these issues have contributed to systematically distorted perceptions regarding the benefits and risks of treatments. While these issues have been well documented and widely discussed within the profession, legislative intervention has seen limited success. Recently, a method was described for using a blockchain to prove the existence of documents describing pre-specified endpoints in clinical trials. Here, we extend the idea by using smart contracts - code, and data, that resides at a specific address in a blockchain, and whose execution is cryptographically validated by the network - to demonstrate how trust in clinical trials can be enforced and data manipulation eliminated. We show that blockchain smart contracts provide a novel technological solution to the data manipulation problem, by acting as trusted administrators and providing an immutable record of trial history.", "title": "" }, { "docid": "0a340a2dc4d9a6acd90d3bedad07f84a", "text": "BACKGROUND\nKhat (Catha edulis) contains a psychoactive substance, cathinone, which produces central nervous system stimulation analogous to amphetamine. It is believed that khat chewing has a negative impact on the physical and mental health of individuals as well as the socioeconomic condition of the family and the society at large. There is lack of community based studies regarding the link between khat use and poor mental health. The objective of this study was to evaluate the association between khat use and mental distress and to determine the prevalence of mental distress and khat use in Jimma City.\n\n\nMETHODS\nA cross-sectional community-based study was conducted in Jimma City from October 15 to November 15, 2009. The study used a structured questionnaire and Self Reporting Questionnaire-20 designed by WHO and which has been translated into Amharic and validated in Ethiopia. By multi stage sampling, 1200 individuals were included in the study. Data analysis was done using SPSS for window version 13.\n\n\nRESULTS\nThe Khat use prevalence was found to be 37.8% during the study period. Majority of the khat users were males (73.5%), age group 18-24 (41.1%), Muslims (46.6%), Oromo Ethnic group (47.2%), single (51.4%), high school students (46.8%) and employed (80%). Using cut-off point 7 out of 20 on the Self Reporting Questionnaire-20, 25.8% of the study population was found to have mental distress. 
Males (26.6%), persons older than 55 years (36.4%), Orthodox Christians (28.4%), Kefficho Ethnic groups (36.4%), widowed (44.8%), illiterates (43.8%) and farmers (40.0%) had higher rates of mental distress. We found that mental distress and khat use have significant association (34.7% Vs 20.5%, P<0.001). There was also significant association between mental distress and frequency of khat use (41% Vs 31.1%, P<0.001)\n\n\nCONCLUSION\nThe high rate of khat use among the young persons calls for public intervention to prevent more serious forms of substance use disorders. Our findings suggest that persons who use khat suffer from higher rates of mental distress. However, causal association could not be established due to cross-sectional study design.", "title": "" }, { "docid": "e914a66fc4c5b35e3fd24427ffdcbd96", "text": "This paper proposes two control algorithms for a sensorless speed control of a PMSM. One is a new low pass filter. This filter is designed to have the variable cutoff frequency according to the rotor speed. And the phase delay angle is so small as to be ignored not only in the low speed region but also in the high speed region including the field weakening region. Sensorless control of a PMSM can be guaranteed without any delay angle by using the proposed low pass filter. The other is a new iterative sliding mode observer (I-SMO). Generally the sliding mode observer (SMO) has the attractive features of the robustness to disturbances, and parameter variations. In the high speed region the switching gain of SMO must be large enough to operate the sliding mode stably. But the estimated currents and back EMF can not help having much ripple or chattering components especially in the high speed region including the flux weakening region. Using I-SMO can reduce chattering components of the estimated currents and back EMF in all speed regions without any help of the expensive hardware such as the high performance DSP and A/D converter. Experimental results show the usefulness of the proposed two algorithms for the sensorless drive system of a PMSM.", "title": "" }, { "docid": "70a94ef8bf6750cdb4603b34f0f1f005", "text": "What does this paper demonstrate. We show that a very simple 2D architecture (in the sense that it does not make any assumption or reasoning about the 3D information of the object) generally used for object classification, if properly adapted to the specific task, can provide top performance also for pose estimation. More specifically, we demonstrate how a 1-vs-all classification framework based on a Fisher Vector (FV) [1] pyramid or convolutional neural network (CNN) based features [2] can be used for pose estimation. In addition, suppressing neighboring viewpoints during training seems key to get good results.", "title": "" }, { "docid": "1bb694f68643eaf70e09ce086a77ea34", "text": "If you get the printed book in on-line book store, you may also find the same problem. So, you must move store to store and search for the available there. But, it will not happen here. The book that we will offer right here is the soft file concept. This is what make you can easily find and get this information security principles and practice by reading this site. We offer you the best product, always and always.", "title": "" }, { "docid": "d4a96cc393a3f1ca3bca94a57e07941e", "text": "With the increasing number of scientific publications, research paper recommendation has become increasingly important for scientists. 
Most researchers rely on keyword-based search or following citations in other papers, in order to find relevant research articles. And usually they spend a lot of time without getting satisfactory results. This study aims to propose a personalized research paper recommendation system, that facilitate this task by recommending papers based on users' explicit and implicit feedback. The users will be allowed to explicitly specify the papers of interest. In addition, user activities (e.g., viewing abstracts or full-texts) will be analyzed in order to enhance users' profiles. Most of the current research paper recommendation and information retrieval systems use the classical bag-of-words methods, which don't consider the context of the words and the semantic similarity between the articles. This study will use Recurrent Neural Networks (RNNs) to discover continuous and latent semantic features of the papers, in order to improve the recommendation quality. The proposed approach utilizes PubMed so far, since it is frequently used by physicians and scientists, but it can easily incorporate other datasets in the future.", "title": "" }, { "docid": "619165e7f74baf2a09271da789e724df", "text": "MOST verbal communication occurs in contexts where the listener can see the speaker as well as hear him. However, speech perception is normally regarded as a purely auditory process. The study reported here demonstrates a previously unrecognised influence of vision upon speech perception. It stems from an observation that, on being shown a film of a young woman's talking head, in which repeated utterances of the syllable [ba] had been dubbed on to lip movements for [ga], normal adults reported hearing [da]. With the reverse dubbing process, a majority reported hearing [bagba] or [gaba]. When these subjects listened to the soundtrack from the film, without visual input, or when they watched untreated film, they reported the syllables accurately as repetitions of [ba] or [ga]. Subsequent replications confirm the reliability of these findings; they have important implications for the understanding of speech perception.", "title": "" }, { "docid": "2e3f05ee44b276b51c1b449e4a62af94", "text": "We make some simple extensions to the Active Shape Model of Cootes et al. [4], and use it to locate features in frontal views of upright faces. We show on independent test data that with the extensions the Active Shape Model compares favorably with more sophisticated methods. The extensions are (i) fitting more landmarks than are actually needed (ii) selectively using twoinstead of one-dimensional landmark templates (iii) adding noise to the training set (iv) relaxing the shape model where advantageous (v) trimming covariance matrices by setting most entries to zero, and (vi) stacking two Active Shape Models in series.", "title": "" }, { "docid": "04384b62c17f9ff323db4d51bea86fe9", "text": "Imbalanced data widely exist in many high-impact applications. An example is in air traffic control, where among all three types of accident causes, historical accident reports with ‘personnel issues’ are much more than the other two types (‘aircraft issues’ and ‘environmental issues’) combined. Thus, the resulting data set of accident reports is highly imbalanced. On the other hand, this data set can be naturally modeled as a network, with each node representing an accident report, and each edge indicating the similarity of a pair of accident reports. 
Up until now, most existing work on imbalanced data analysis focused on the classification setting, and very little is devoted to learning the node representations for imbalanced networks. To bridge this gap, in this paper, we first propose Vertex-Diminished Random Walk (VDRW) for imbalanced network analysis. It is significantly different from the existing Vertex Reinforced Random Walk by discouraging the random particle to return to the nodes that have already been visited. This design is particularly suitable for imbalanced networks as the random particle is more likely to visit the nodes from the same class, which is a desired property for learning node representations. Furthermore, based on VDRW, we propose a semi-supervised network representation learning framework named ImVerde for imbalanced networks, where context sampling uses VDRW and the limited label information to create node-context pairs, and balanced-batch sampling adopts a simple under-sampling method to balance these pairs from different classes. Experimental results demonstrate that ImVerde based on VDRW outperforms stateof-the-art algorithms for learning network representations from imbalanced data.", "title": "" }, { "docid": "8c658d7663f9849a0759160886fc5690", "text": "The design and fabrication of a 76.5 GHz, planar, three beam antenna is presented. This antenna has greater than 31 dB of gain and sidelobes that are less than -29 dB below the main beam. This antenna demonstrates the ability to achieve very low sidelobes in a simple, compact, and planar structure. This is accomplished uniquely by feeding waveguide slots that are coupled to microstrip radiating elements. This illumination technique allows for a very low loss and highly efficient structure. Also, a novel beam-scanning concept is introduced. To orient a beam from bore sight it requires phase differences between the excitations of the successive elements. This is achieved by varying the width of the W-band waveguide. This simple, beam steering two-dimensional structure offers the advantage of easy manufacturing compared to present lens and alternative technologies.", "title": "" }, { "docid": "eb861eed8718e227fc2615bb6fcf0841", "text": "Immediate effects of verb-specific syntactic (subcategorization) information were found in a cross-modal naming experiment, a self-paced reading experiment, and an experiment in which eye movements were monitored. In the reading studies, syntactic misanalysis effects in sentence complements (e.g., \"The student forgot the solution was...\") occurred at the verb in the complement (e.g., was) for matrix verbs typically used with noun phrase complements but not for verbs typically used with sentence complements. In addition, a complementizer effect for sentence-complement-biased verbs was not due to syntactic misanalysis but was correlated with how strongly a particular verb prefers to be followed by the complementizer that. The results support models that make immediate use of lexically specific constraints, especially constraint-based models, but are problematic for lexical filtering models.", "title": "" }, { "docid": "16932e01fdea801f28ec6c4194f70352", "text": "Plum pox virus (PPV) causes the most economically-devastating viral disease in Prunus species. Unfortunately, few natural resistance genes are available for the control of PPV. Recessive resistance to some potyviruses is associated with mutations of eukaryotic translation initiation factor 4E (eIF4E) or its isoform eIF(iso)4E. 
In this study, we used an RNA silencing approach to manipulate the expression of eIF4E and eIF(iso)4E towards the development of PPV resistance in Prunus species. The eIF4E and eIF(iso)4E genes were cloned from plum (Prunus domestica L.). The sequence identity between plum eIF4E and eIF(iso)4E coding sequences is 60.4% at the nucleotide level and 52.1% at the amino acid level. Quantitative real-time RT-PCR analysis showed that these two genes have a similar expression pattern in different tissues. Transgenes allowing the production of hairpin RNAs of plum eIF4E or eIF(iso)4E were introduced into plum via Agrobacterium-mediated transformation. Gene expression analysis confirmed specific reduced expression of eIF4E or eIF(iso)4E in the transgenic lines and this was associated with the accumulation of siRNAs. Transgenic plants were challenged with PPV-D strain and resistance was evaluated by measuring the concentration of viral RNA. Eighty-two percent of the eIF(iso)4E silenced transgenic plants were resistant to PPV, while eIF4E silenced transgenic plants did not show PPV resistance. Physical interaction between PPV-VPg and plum eIF(iso)4E was confirmed. In contrast, no PPV-VPg/eIF4E interaction was observed. These results indicate that eIF(iso)4E is involved in PPV infection in plum, and that silencing of eIF(iso)4E expression can lead to PPV resistance in Prunus species.", "title": "" }, { "docid": "faea3dad1f13b8c4be3d4d5ffa88dcf1", "text": "Describing the latest advances in the field, Quantitative Risk Management covers the methods for market, credit and operational risk modelling. It places standard industry approaches on a more formal footing and explores key concepts such as loss distributions, risk measures and risk aggregation and allocation principles. The book’s methodology draws on diverse quantitative disciplines, from mathematical finance and statistics to econometrics and actuarial mathematics. A primary theme throughout is the need to satisfactorily address extreme outcomes and the dependence of key risk drivers. Proven in the classroom, the book also covers advanced topics like credit derivatives.", "title": "" }, { "docid": "ae28bc02e9f0891d8338980cd169ada4", "text": "We investigated the possibility of using a machine-learning scheme in conjunction with commercial wearable EEG-devices for translating listener's subjective experience of music into scores that can be used in popular on-demand music streaming services. Our study resulted into two variants, differing in terms of performance and execution time, and hence, subserving distinct applications in online streaming music platforms. The first method, NeuroPicks, is extremely accurate but slower. It is based on the well-established neuroscientific concepts of brainwave frequency bands, activation asymmetry index and cross frequency coupling (CFC). The second method, NeuroPicksVQ, offers prompt predictions of lower credibility and relies on a custom-built version of vector quantization procedure that facilitates a novel parameterization of the music-modulated brainwaves. Beyond the feature engineering step, both methods exploit the inherent efficiency of extreme learning machines (ELMs) so as to translate, in a personalized fashion, the derived patterns into a listener's score. NeuroPicks method may find applications as an integral part of contemporary music recommendation systems, while NeuroPicksVQ can control the selection of music tracks. 
Encouraging experimental results, from a pragmatic use of the systems, are presented.", "title": "" }, { "docid": "4ed47f48df37717148d985ad927b813f", "text": "Given an incorrect value produced during a failed program run (e.g., a wrong output value or a value that causes the program to crash), the backward dynamic slice of the value very frequently captures the faulty code responsible for producing the incorrect value. Although the dynamic slice often contains only a small percentage of the statements executed during the failed program run, the dynamic slice can still be large and thus considerable effort may be required by the programmer to locate the faulty code.In this paper we develop a strategy for pruning the dynamic slice to identify a subset of statements in the dynamic slice that are likely responsible for producing the incorrect value. We observe that some of the statements used in computing the incorrect value may also have been involved in computing correct values (e.g., a value produced by a statement in the dynamic slice of the incorrect value may also have been used in computing a correct output value prior to the incorrect value). For each such executed statement in the dynamic slice, using the value profiles of the executed statements, we compute a confidence value ranging from 0 to 1 - a higher confidence value corresponds to greater likelihood that the execution of the statement produced a correct value. Given a failed run involving execution of a single error, we demonstrate that the pruning of a dynamic slice by excluding only the statements with the confidence value of 1 is highly effective in reducing the size of the dynamic slice while retaining the faulty code in the slice. Our experiments show that the number of distinct statements in a pruned dynamic slice are 1.79 to 190.57 times less than the full dynamic slice. Confidence values also prioritize the statements in the dynamic slice according to the likelihood of them being faulty. We show that examining the statements in the order of increasing confidence values is an effective strategy for reducing the effort of fault location.", "title": "" }, { "docid": "e76b94af2a322cb90114ab51fde86919", "text": "In this paper, we introduce a new 2D modulation scheme referred to as OTFS (Orthogonal Time Frequency & Space) that multiplexes information QAM symbols over new class of carrier waveforms that correspond to localized pulses in a signal representation called the delay-Doppler representation. OTFS constitutes a far reaching generalization of conventional time and frequency modulations such as TDM and FDM and, from a broader perspective, it establishes a conceptual link between Radar and communication. The OTFS waveforms couple with the wireless channel in a way that directly captures the underlying physics, yielding a high-resolution delay-Doppler Radar image of the constituent reflectors. As a result, the time-frequency selective channel is converted into an invariant, separable and orthogonal interaction, where all received QAM symbols experience the same localized impairment and all the delay-Doppler diversity branches are coherently combined. The high resolution delay-Doppler separation of the reflectors enables OTFS to approach channel capacity with optimal performance-complexity tradeoff through linear scaling of spectral efficiency with the MIMO order and robustness to Doppler and multipath channel conditions. 
OTFS is an enabler for realizing the full promise of MUMIMO gains even in challenging 5G deployment settings where adaptation is unrealistic. 1. OTFS – A NEXT GENERATION MODULATION History teaches us that every transition to a new generation of wireless network involves a disruption in the underlying air interface: beginning with the transition from 2G networks based on single carrier GSM to 3G networks based on code division multiplexing (CDMA), then followed by the transition to contemporary 4G networks based on orthogonal frequency division multiplexing (OFDM). The decision to introduce a new air interface is made when the demands of a new generation of use cases cannot be met by legacy technology – in terms of performance, capabilities, or cost. As an example, the demands for higher capacity data services drove the transition from legacy interference-limited CDMA network (that have limited flexibility for adaptation and inferior achievable throughput) to a network based on an orthogonal narrowband OFDM that is optimally fit for opportunistic scheduling and achieves higher spectral efficiency. Emerging 5G networks are required to support diverse usage scenarios, as described for example in [1]. A fundamental requirement is multi-user MIMO, which holds the promise of massive increases in mobile broadband spectral efficiency using large numbers of antenna elements at the base-station in combination with advanced precoding techniques. This promise comes at the cost of very complex architectures that cannot practically achieve capacity using traditional OFDM techniques and suffers performance degradation in the presence of time and frequency selectivity ( [2] and [3]). Other important use cases include operation under non-trivial dynamic channel conditions (for example vehicle-to-vehicle and high-speed rail) where adaptation becomes unrealistic, rendering OFDM narrowband waveforms strictly suboptimal. As a result, one is once again faced with the dilemma of finding a better suited air interface where the new guiding philosophy is: When adaptation is not a possibility one should look for ways to eliminate the need to adapt. The challenge is to do that without sacrificing performance. To meet this challenge one should fuse together two contradictory principles – (1) the principle of spreading (as used in CDMA) to obtain resilience to narrowband interference and to exploit channel diversity gain for increased reliability under unpredictable channel conditions and (2) the principle of orthogonality (as used in OFDM) to simplify the channel coupling for achieving higher spectral densities with a superior performance-complexity tradeoff. OTFS is a modulation scheme that carries information QAM symbols over a new class of waveforms which are spread over both time and frequency while remaining roughly orthogonal to each other under general delay-Doppler channel impairments. The key characteristic of the OTFS waveforms is related to their optimal manner of interaction with the wireless reflectors. This interaction induces a simple and symmetric coupling", "title": "" } ]
scidocsrr
15a898a8d9df0467ca2ea8fc9063a030
How Many Workers to Ask?: Adaptive Exploration for Collecting High Quality Labels
[ { "docid": "904278b251c258d1dac9b652dcd7ee82", "text": "This paper addresses the repeated acquisition of labels for data items when the labeling is imperfect. We examine the improvement (or lack thereof) in data quality via repeated labeling, and focus especially on the improvement of training labels for supervised induction. With the outsourcing of small tasks becoming easier, for example via Rent-A-Coder or Amazon's Mechanical Turk, it often is possible to obtain less-than-expert labeling at low cost. With low-cost labeling, preparing the unlabeled part of the data can become considerably more expensive than labeling. We present repeated-labeling strategies of increasing complexity, and show several main results. (i) Repeated-labeling can improve label quality and model quality, but not always. (ii) When labels are noisy, repeated labeling can be preferable to single labeling even in the traditional setting where labels are not particularly cheap. (iii) As soon as the cost of processing the unlabeled data is not free, even the simple strategy of labeling everything multiple times can give considerable advantage. (iv) Repeatedly labeling a carefully chosen set of points is generally preferable, and we present a robust technique that combines different notions of uncertainty to select data points for which quality should be improved. The bottom line: the results show clearly that when labeling is not perfect, selective acquisition of multiple labels is a strategy that data miners should have in their repertoire; for certain label-quality/cost regimes, the benefit is substantial.", "title": "" }, { "docid": "a009fc320c5a61d8d8df33c19cd6037f", "text": "Over the past decade, crowdsourcing has emerged as a cheap and efficient method of obtaining solutions to simple tasks that are difficult for computers to solve but possible for humans. The popularity and promise of crowdsourcing markets has led to both empirical and theoretical research on the design of algorithms to optimize various aspects of these markets, such as the pricing and assignment of tasks. Much of the existing theoretical work on crowdsourcing markets has focused on problems that fall into the broad category of online decision making; task requesters or the crowdsourcing platform itself make repeated decisions about prices to set, workers to filter out, problems to assign to specific workers, or other things. Often these decisions are complex, requiring algorithms that learn about the distribution of available tasks or workers over time and take into account the strategic (or sometimes irrational) behavior of workers.\n As human computation grows into its own field, the time is ripe to address these challenges in a principled way. However, it appears very difficult to capture all pertinent aspects of crowdsourcing markets in a single coherent model. In this paper, we reflect on the modeling issues that inhibit theoretical research on online decision making for crowdsourcing, and identify some steps forward. This paper grew out of the authors' own frustration with these issues, and we hope it will encourage the community to attempt to understand, debate, and ultimately address them.", "title": "" } ]
[ { "docid": "d974b1ffafd9ad738303514f28a770b9", "text": "We introduce a new algorithm for reinforcement learning called Maximum aposteriori Policy Optimisation (MPO) based on coordinate ascent on a relativeentropy objective. We show that several existing methods can directly be related to our derivation. We develop two off-policy algorithms and demonstrate that they are competitive with the state-of-the-art in deep reinforcement learning. In particular, for continuous control, our method outperforms existing methods with respect to sample efficiency, premature convergence and robustness to hyperparameter settings.", "title": "" }, { "docid": "9afc04ce0ddde03789f4eaa4eab39e09", "text": "In this paper we propose a novel method for recognizing human actions by exploiting a multi-layer representation based on a deep learning based architecture. A first level feature vector is extracted and then a high level representation is obtained by taking advantage of a Deep Belief Network trained using a Restricted Boltzmann Machine. The classification is finally performed by a feed-forward neural network. The main advantage behind the proposed approach lies in the fact that the high level representation is automatically built by the system exploiting the regularities in the dataset; given a suitably large dataset, it can be expected that such a representation can outperform a hand-design description scheme. The proposed approach has been tested on two standard datasets and the achieved results, compared with state of the art algorithms, confirm its effectiveness.", "title": "" }, { "docid": "bdd69c3aabbe9f794d3ea732479b9c64", "text": "Modern imaging methods like computed tomography (CT) generate 3-D volumes of image data. How do radiologists search through such images? Are certain strategies more efficient? Although there is a large literature devoted to understanding search in 2-D, relatively little is known about search in volumetric space. In recent years, with the ever-increasing popularity of volumetric medical imaging, this question has taken on increased importance as we try to understand, and ultimately reduce, errors in diagnostic radiology. In the current study, we asked 24 radiologists to search chest CTs for lung nodules that could indicate lung cancer. To search, radiologists scrolled up and down through a \"stack\" of 2-D chest CT \"slices.\" At each moment, we tracked eye movements in the 2-D image plane and coregistered eye position with the current slice. We used these data to create a 3-D representation of the eye movements through the image volume. Radiologists tended to follow one of two dominant search strategies: \"drilling\" and \"scanning.\" Drillers restrict eye movements to a small region of the lung while quickly scrolling through depth. Scanners move more slowly through depth and search an entire level of the lung before moving on to the next level in depth. Driller performance was superior to the scanners on a variety of metrics, including lung nodule detection rate, percentage of the lung covered, and the percentage of search errors where a nodule was never fixated.", "title": "" }, { "docid": "2bf619a1af1bab48b4b6f57df8f29598", "text": "Alcoholism and drug addiction have marked impacts on the ability of families to function. Much of the literature has been focused on adult members of a family who present with substance dependency. 
There is limited research into the effects of adolescent substance dependence on parenting and family functioning; little attention has been paid to the parents' experience. This qualitative study looks at the parental perspective as they attempted to adapt and cope with substance dependency in their teenage children. The research looks into family life and adds to family functioning knowledge when the identified client is a youth as opposed to an adult family member. Thirty-one adult caregivers of 21 teenagers were interviewed, resulting in eight significant themes: (1) finding out about the substance dependence problem; (2) experiences as the problems escalated; (3) looking for explanations other than substance dependence; (4) connecting to the parent's own history; (5) trying to cope; (6) challenges of getting help; (7) impact on siblings; and (8) choosing long-term rehabilitation. Implications of this research for clinical practice are discussed.", "title": "" }, { "docid": "9003737b3f3e2ac6a64d3a3fe1dd358b", "text": "Cultural influence has recently received significant attention from academics due to its vital role in the success or failure of a project. In the construction industry, several empirical investigations have examined the influence of culture on project management. The aim of this study is to determine the impact of project organizational culture on the performance of construction projects. A total of 199 completed construction projects in Vietnam with specific data gathering through questionnaires were analyzed. The findings reveal that contractor commitment to contract agreements is the most significant cultural factor affecting project performance. Goal alignment and reliance, contractor commitment, and worker orientation (i.e., commitment to workers) contribute to improved overall performance and participant satisfaction. Contractor commitment and cooperative orientation enhance labor productivity, whereas goal alignment and trust and contractor commitment ensure learning performance (i.e., learning from experience). The findings of this study may assist construction professionals in implementing practices that can contribute to the sustainability and success of construction projects.", "title": "" }, { "docid": "3be99b1ef554fde94742021e4782a2aa", "text": "This is the second part of a two-part paper that has arisen from the work of the IEEE Power Engineering Society's Multi-Agent Systems (MAS) Working Group. Part I of this paper examined the potential value of MAS technology to the power industry, described fundamental concepts and approaches within the field of multi-agent systems that are appropriate to power engineering applications, and presented a comprehensive review of the power engineering applications for which MAS are being investigated. It also defined the technical issues which must be addressed in order to accelerate and facilitate the uptake of the technology within the power and energy sector. Part II of this paper explores the decisions inherent in engineering multi-agent systems for applications in the power and energy sector and offers guidance and recommendations on how MAS can be designed and implemented. Given the significant and growing interest in this field, it is imperative that the power engineering community considers the standards, tools, supporting technologies, and design methodologies available to those wishing to implement a MAS solution for a power engineering problem. 
This paper describes the various options available and makes recommendations on best practice. It also describes the problem of interoperability between different multi-agent systems and proposes how this may be tackled.", "title": "" }, { "docid": "500a9d141bc6bbd0972703413abef637", "text": "It is found that some “important” twitter users’ words can influence the stock prices of certain stocks. The stock price of Tesla – a famous electric automobile company – for example, recently seen a huge rise after Elon Musk, the CEO of Tesla, updated his twitter about the self-driving motors. Besides, the Dow Jones and S&P 500 indexes dropped by about one percent after the Twitter account of Associated Press falsely posted the message about an explosion in the White House.", "title": "" }, { "docid": "e3cb1c3dbed312688e75baa4ee047ff8", "text": "Aggregation of amyloid-β (Aβ) by self-assembly into oligomers or amyloids is a central event in Alzheimer's disease. Coordination of transition-metal ions, mainly copper and zinc, to Aβ occurs in vivo and modulates the aggregation process. A survey of the impact of Cu(II) and Zn(II) on the aggregation of Aβ reveals some general trends: (i) Zn(II) and Cu(II) at high micromolar concentrations and/or in a large superstoichiometric ratio compared to Aβ have a tendency to promote amorphous aggregations (precipitation) over the ordered formation of fibrillar amyloids by self-assembly; (ii) metal ions affect the kinetics of Aβ aggregations, with the most significant impact on the nucleation phase; (iii) the impact is metal-specific; (iv) Cu(II) and Zn(II) affect the concentrations and/or the types of aggregation intermediates formed; (v) the binding of metal ions changes both the structure and the charge of Aβ. The decrease in the overall charge at physiological pH increases the overall driving force for aggregation but may favor more precipitation over fibrillation, whereas the induced structural changes seem more relevant for the amyloid formation.", "title": "" }, { "docid": "771ee12eec90c042b5c2320680ddb290", "text": "1. SUMMARY In the past decade educators have developed a myriad of tools to help novices learn to program. Different tools emerge as new features or combinations of features are employed. In this panel we consider the features of recent tools that have garnered significant interest in the computer science education community. These including narrative tools which support programming to tell a story (e.g., Alice [6], Jeroo [8]), visual programming tools which support the construction of programs through a drag-and-drop interface (e.g., JPie [3], Alice [6], Karel Universe), flow-model tools (e.g., Raptor [1], Iconic Programmer [2], VisualLogic) which construct programs through connecting program elements to represent order of computation, specialized output realizations (e.g., Lego Mindstorms [5], JES [7]) that provide execution feedback in nontextual ways, like multimedia or kinesthetic robotics, and tiered language tools (e.g., ProfessorJ [4], RoboLab) in which novices can use more sophisticated versions of a language as their expertise develops.", "title": "" }, { "docid": "07c5758f83352c87d6a4d1ade91e0aaf", "text": "There is a significant need for a realistic dataset on which to evaluate layout analysis methods and examine their performance in detail. This paper presents a new dataset (and the methodology used to create it) based on a wide range of contemporary documents. 
Strong emphasis is placed on comprehensive and detailed representation of both complex and simple layouts, and on colour originals. In-depth information is recorded both at the page and region level. Ground truth is efficiently created using a new semi-automated tool and stored in a new comprehensive XML representation, the PAGE format. The dataset can be browsed and searched via a web-based front end to the underlying database and suitable subsets (relevant to specific evaluation goals) can be selected and downloaded.", "title": "" }, { "docid": "275ab39cc1f72691beb17936632e7307", "text": "Web searchers sometimes struggle to find relevant information. Struggling leads to frustrating and dissatisfying search experiences, even if searchers ultimately meet their search objectives. Better understanding of search tasks where people struggle is important in improving search systems. We address this important issue using a mixed methods study using large-scale logs, crowd-sourced labeling, and predictive modeling. We analyze anonymized search logs from the Microsoft Bing Web search engine to characterize aspects of struggling searches and better explain the relationship between struggling and search success. To broaden our understanding of the struggling process beyond the behavioral signals in log data, we develop and utilize a crowd-sourced labeling methodology. We collect third-party judgments about why searchers appear to struggle and, if appropriate, where in the search task it became clear to the judges that searches would succeed (i.e., the pivotal query). We use our findings to propose ways in which systems can help searchers reduce struggling. Key components of such support are algorithms that accurately predict the nature of future actions and their anticipated impact on search outcomes. Our findings have implications for the design of search systems that help searchers struggle less and succeed more.", "title": "" }, { "docid": "d1852cf0f4a03f56104861d3985071da", "text": "Running economy (RE) is typically defined as the energy demand for a given velocity of submaximal running, and is determined by measuring the steady-state consumption of oxygen (VO2) and the respiratory exchange ratio. Taking body mass (BM) into consideration, runners with good RE use less energy and therefore less oxygen than runners with poor RE at the same velocity. There is a strong association between RE and distance running performance, with RE being a better predictor of performance than maximal oxygen uptake (VO2max) in elite runners who have a similar VO2max). RE is traditionally measured by running on a treadmill in standard laboratory conditions, and, although this is not the same as overground running, it gives a good indication of how economical a runner is and how RE changes over time. In order to determine whether changes in RE are real or not, careful standardisation of footwear, time of test and nutritional status are required to limit typical error of measurement. Under controlled conditions, RE is a stable test capable of detecting relatively small changes elicited by training or other interventions. When tracking RE between or within groups it is important to account for BM. As VO2 during submaximal exercise does not, in general, increase linearly with BM, reporting RE with respect to the 0.75 power of BM has been recommended. A number of physiological and biomechanical factors appear to influence RE in highly trained or elite runners. 
These include metabolic adaptations within the muscle such as increased mitochondria and oxidative enzymes, the ability of the muscles to store and release elastic energy by increasing the stiffness of the muscles, and more efficient mechanics leading to less energy wasted on braking forces and excessive vertical oscillation. Interventions to improve RE are constantly sought after by athletes, coaches and sport scientists. Two interventions that have received recent widespread attention are strength training and altitude training. Strength training allows the muscles to utilise more elastic energy and reduce the amount of energy wasted in braking forces. Altitude exposure enhances discrete metabolic aspects of skeletal muscle, which facilitate more efficient use of oxygen. The importance of RE to successful distance running is well established, and future research should focus on identifying methods to improve RE. Interventions that are easily incorporated into an athlete's training are desirable.", "title": "" }, { "docid": "48019a3106c6d74e4cfcc5ac596d4617", "text": "Despite a variety of new communication technologies, loneliness is prevalent in Western countries. Boosting emotional communication through intimate connections has the potential to reduce loneliness. New technologies might exploit biosignals as intimate emotional cues because of their strong relationship to emotions. Through two studies, we investigate the possibilities of heartbeat communication as an intimate cue. In the first study (N = 32), we demonstrate, using self-report and behavioral tracking in an immersive virtual environment, that heartbeat perception influences social behavior in a similar manner as traditional intimate signals such as gaze and interpersonal distance. In the second study (N = 34), we demonstrate that a sound of the heartbeat is not sufficient to cause the effect; the stimulus must be attributed to the conversational partner in order to have influence. Together, these results show that heartbeat communication is a promising way to increase intimacy. Implications and possibilities for applications are discussed.", "title": "" }, { "docid": "0c06c0e4fec9a2cc34c38161e142032d", "text": "We introduce a novel high-level security metrics objective taxonomization model for software-intensive systems. The model systematizes and organizes security metrics development activities. It focuses on the security level and security performance of technical systems while taking into account the alignment of metrics objectives with different business and other management goals. The model emphasizes the roles of security-enforcing mechanisms, the overall security quality of the system under investigation, and secure system lifecycle, project and business management. Security correctness, effectiveness and efficiency are seen as the fundamental measurement objectives, determining the directions for more detailed security metrics development. Integration of the proposed model with riskdriven security metrics development approaches is also discussed.", "title": "" }, { "docid": "c72e0e79f83b59af58e5d8bc7d9244d5", "text": "A novel deep learning architecture (XmasNet) based on convolutional neural networks was developed for the classification of prostate cancer lesions, using the 3D multiparametric MRI data provided by the PROSTATEx challenge. End-to-end training was performed for XmasNet, with data augmentation done through 3D rotation and slicing, in order to incorporate the 3D information of the lesion. 
XmasNet outperformed traditional machine learning models based on engineered features, for both train and test data. For the test data, XmasNet outperformed 69 methods from 33 participating groups and achieved the second highest AUC (0.84) in the PROSTATEx challenge. This study shows the great potential of deep learning for cancer imaging.", "title": "" }, { "docid": "4b54cf876d3ab7c7277605125055c6c3", "text": "We propose a practical method for L0 norm regularization for neural networks: pruning the network during training by encouraging weights to become exactly zero. Such regularization is interesting since (1) it can greatly speed up training and inference, and (2) it can improve generalization. AIC and BIC, well-known model selection criteria, are special cases of L0 regularization. However, since the L0 norm of weights is non-differentiable, we cannot incorporate it directly as a regularization term in the objective function. We propose a solution through the inclusion of a collection of non-negative stochastic gates, which collectively determine which weights to set to zero. We show that, somewhat surprisingly, for certain distributions over the gates, the expected L0 regularized objective is differentiable with respect to the distribution parameters. We further propose the hard concrete distribution for the gates, which is obtained by “stretching” a binary concrete distribution and then transforming its samples with a hard-sigmoid. The parameters of the distribution over the gates can then be jointly optimized with the original network parameters. As a result our method allows for straightforward and efficient learning of model structures with stochastic gradient descent and allows for conditional computation in a principled way. We perform various experiments to demonstrate the effectiveness of the resulting approach and regularizer.", "title": "" }, { "docid": "e26dcac5bd568b70f41d17925593e7ef", "text": "Autoregressive generative models achieve the best results in density estimation tasks involving high dimensional data, such as images or audio. They pose density estimation as a sequence modeling task, where a recurrent neural network (RNN) models the conditional distribution over the next element conditioned on all previous elements. In this paradigm, the bottleneck is the extent to which the RNN can model long-range dependencies, and the most successful approaches rely on causal convolutions. Taking inspiration from recent work in meta reinforcement learning, where dealing with long-range dependencies is also essential, we introduce a new generative model architecture that combines causal convolutions with self attention. In this paper, we describe the resulting model and present state-of-the-art log-likelihood results on heavily benchmarked datasets: CIFAR-10 (2.85 bits per dim), 32× 32 ImageNet (3.80 bits per dim) and 64 × 64 ImageNet (3.52 bits per dim). Our implementation will be made available at anonymized.", "title": "" }, { "docid": "519e8ee14d170ce92eecc760e810ade4", "text": "Transcript-based annotation and pedigree analysis are two basic steps in the computational analysis of whole-exome sequencing experiments in genetic diagnostics and disease-gene discovery projects. Here, we present Jannovar, a stand-alone Java application as well as a Java library designed to be used in larger software frameworks for exome and genome analysis. 
Jannovar uses an interval tree to identify all transcripts affected by a given variant, and provides Human Genome Variation Society-compliant annotations both for variants affecting coding sequences and splice junctions as well as untranslated regions and noncoding RNA transcripts. Jannovar can also perform family-based pedigree analysis with Variant Call Format (VCF) files with data from members of a family segregating a Mendelian disorder. Using a desktop computer, Jannovar requires a few seconds to annotate a typical VCF file with exome data. Jannovar is freely available under the BSD2 license. Source code as well as the Java application and library file can be downloaded from http://compbio.charite.de (with tutorial) and https://github.com/charite/jannovar.", "title": "" }, { "docid": "d1cf416860dc8191bf2af370ae16a6bc", "text": "Cas1 integrase is the key enzyme of the clustered regularly interspaced short palindromic repeat (CRISPR)-Cas adaptation module that mediates acquisition of spacers derived from foreign DNA by CRISPR arrays. In diverse bacteria, the cas1 gene is fused (or adjacent) to a gene encoding a reverse transcriptase (RT) related to group II intron RTs. An RT-Cas1 fusion protein has been recently shown to enable acquisition of CRISPR spacers from RNA. Phylogenetic analysis of the CRISPR-associated RTs demonstrates monophyly of the RT-Cas1 fusion, and coevolution of the RT and Cas1 domains. Nearly all such RTs are present within type III CRISPR-Cas loci, but their phylogeny does not parallel the CRISPR-Cas type classification, indicating that RT-Cas1 is an autonomous functional module that is disseminated by horizontal gene transfer and can function with diverse type III systems. To compare the sequence pools sampled by RT-Cas1-associated and RT-lacking CRISPR-Cas systems, we obtained samples of a commercially grown cyanobacterium-Arthrospira platensis Sequencing of the CRISPR arrays uncovered a highly diverse population of spacers. Spacer diversity was particularly striking for the RT-Cas1-containing type III-B system, where no saturation was evident even with millions of sequences analyzed. In contrast, analysis of the RT-lacking type III-D system yielded a highly diverse pool but reached a point where fewer novel spacers were recovered as sequencing depth was increased. Matches could be identified for a small fraction of the non-RT-Cas1-associated spacers, and for only a single RT-Cas1-associated spacer. Thus, the principal source(s) of the spacers, particularly the hypervariable spacer repertoire of the RT-associated arrays, remains unknown.IMPORTANCE While the majority of CRISPR-Cas immune systems adapt to foreign genetic elements by capturing segments of invasive DNA, some systems carry reverse transcriptases (RTs) that enable adaptation to RNA molecules. From analysis of available bacterial sequence data, we find evidence that RT-based RNA adaptation machinery has been able to join with CRISPR-Cas immune systems in many, diverse bacterial species. To investigate whether the abilities to adapt to DNA and RNA molecules are utilized for defense against distinct classes of invaders in nature, we sequenced CRISPR arrays from samples of commercial-scale open-air cultures of Arthrospira platensis, a cyanobacterium that contains both RT-lacking and RT-containing CRISPR-Cas systems. 
We uncovered a diverse pool of naturally occurring immune memories, with the RT-lacking locus acquiring a number of segments matching known viral or bacterial genes, while the RT-containing locus has acquired spacers from a distinct sequence pool for which the source remains enigmatic.", "title": "" }, { "docid": "ec6e955f3f79ef1706fc6b9b16326370", "text": "Automating the navigation of unmanned aerial vehicles (UAVs) in diverse scenarios has gained much attention in the recent years. However, teaching UAVs to fly in challenging environments remains an unsolved problem, mainly due to the lack of data for training. In this paper, we develop a photo-realistic simulator that can afford the generation of large amounts of training data (both images rendered from the UAV camera and its controls) to teach a UAV to autonomously race through challenging tracks. We train a deep neural network to predict UAV controls from raw image data for the task of autonomous UAV racing. Training is done through imitation learning enabled by data augmentation to allow for the correction of navigation mistakes. Extensive experiments demonstrate that our trained network (when sufficient data augmentation is used) outperforms state-of-the-art methods and flies more consistently than many human pilots.", "title": "" } ]
scidocsrr
05a2b7b14c432f1a5d2c15002fedeb5b
A non-IID Framework for Collaborative Filtering with Restricted Boltzmann Machines
[ { "docid": "065c24bc712f7740b95e0d1a994bfe19", "text": "David Haussler Computer and Information Sciences University of California Santa Cruz Santa Cruz , CA 95064 We study a particular type of Boltzmann machine with a bipartite graph structure called a harmonium. Our interest is in using such a machine to model a probability distribution on binary input vectors . We analyze the class of probability distributions that can be modeled by such machines. showing that for each n ~ 1 this class includes arbitrarily good appwximations to any distribution on the set of all n-vectors of binary inputs. We then present two learning algorithms for these machines .. The first learning algorithm is the standard gradient ascent heuristic for computing maximum likelihood estimates for the parameters (i.e. weights and thresholds) of the modeL Here we give a closed form for this gradient that is significantly easier to compute than the corresponding gradient for the general Boltzmann machine . The second learning algorithm is a greedy method that creates the hidden units and computes their weights one at a time. This method is a variant of the standard method for projection pursuit density estimation . We give experimental results for these learning methods on synthetic data and natural data from the domain of handwritten digits.", "title": "" }, { "docid": "21756eeb425854184ba2ea722a935928", "text": "Collaborative filtering aims at learning predictive models of user preferences, interests or behavior from community data, that is, a database of available user preferences. In this article, we describe a new family of model-based algorithms designed for this task. These algorithms rely on a statistical modelling technique that introduces latent class variables in a mixture model setting to discover user communities and prototypical interest profiles. We investigate several variations to deal with discrete and continuous response variables as well as with different objective functions. The main advantages of this technique over standard memory-based methods are higher accuracy, constant time prediction, and an explicit and compact model representation. The latter can also be used to mine for user communitites. The experimental evaluation shows that substantial improvements in accucracy over existing methods and published results can be obtained.", "title": "" } ]
[ { "docid": "f80a07ad046587f7a303c7177e04bca5", "text": "In order to determine the impact of nitrogen deficiency in medium, growth rate and carotenoids contents were followed during 15 days in two strain Dunaliella spp. (DUN2 and DUN3), isolated respectively from Azla and Idao Iaaza saltworks in the Essaouira region (Morocco). These microalgae were incubated at 25 ± 1 °C with a salinity of 35‰ and continuous light in four growth media with different concentrations of sodium nitrate (NaNO3): 18.75 g/L, 2.5 g/L, 37.5 g/L and 75 g/L. Maximum of cell density was observed under high sodium nitrate concentration during logarithmic phase of growth. The highest specific growth rate was 0.450 × 10 ± 0.006 cells/mL and 2.680 × 10 ± 0.216 cells/mL, respectively for DUN2 and DUN3. Carotenoids production mean were not stimulated under nitrogen deficiency, and the highest content was showed in DUN2 at high nitrogen concentration (3.210 ± 0.261 μg·mL) compared with DUN3 strain.", "title": "" }, { "docid": "c10d33abc6ed1d47c11bf54ed38e5800", "text": "The past decade has seen a steady growth of interest in statistical language models for information retrieval, and much research work has been conducted on this subject. This book by ChengXiang Zhai summarizes most of this research. It opens with an introduction covering the basic concepts of information retrieval and statistical languagemodels, presenting the intuitions behind these concepts. This introduction is then followed by a chapter providing an overview of:", "title": "" }, { "docid": "1ec8f7bb8de36b625cb8fee335557acf", "text": "Airborne laser scanner technique is broadly the most appropriate way to acquire rapidly and with high density 3D data over a city. Once the 3D Lidar data are available, the next task is the automatic data processing, with major aim to construct 3D building models. Among the numerous automatic reconstruction methods, the techniques allowing the detection of 3D building roof planes are of crucial importance. Three main methods arise from the literature: region growing, Hough-transform and Random Sample Consensus (RANSAC) paradigm. Since region growing algorithms are sometimes not very transparent and not homogenously applied, this paper focuses only on the Hough-transform and the RANSAC algorithm. Their principles, their pseudocode rarely detailed in the related literature as well as their complete analyses are presented in this paper. An analytic comparison of both algorithms, in terms of processing time and sensitivity to cloud characteristics, shows that despite the limitation encountered in both methods, RANSAC algorithm is still more efficient than the first one. Under other advantages, its processing time is negligible even when the input data size is very large. On the other hand, Hough-transform is very sensitive to the segmentation parameters values. Therefore, RANSAC algorithm has been chosen and extended to exceed its limitations. Its major limitation is that it searches to detect the best mathematical plane among 3D building point cloud even if this plane does not always represent a roof plane. So the proposed extension allows harmonizing the mathematical aspect of the algorithm with the geometry of a roof. At last, it is shown that the extended approach provides very satisfying results, even in the case of very weak point density and for different levels of building complexity. 
Therefore, once the roof planes are successfully detected, the automatic building modelling can be carried out.", "title": "" }, { "docid": "842a1d2da67d614ecbc8470987ae85e9", "text": "The task of recovering three-dimensional (3-D) geometry from two-dimensional views of a scene is called 3-D reconstruction. It is an extremely active research area in computer vision. There is a large body of 3-D reconstruction algorithms available in the literature. These algorithms are often designed to provide different tradeoffs between speed, accuracy, and practicality. In addition, even the output of various algorithms can be quite different. For example, some algorithms only produce a sparse 3-D reconstruction while others are able to output a dense reconstruction. The selection of the appropriate 3-D reconstruction algorithm relies heavily on the intended application as well as the available resources. The goal of this paper is to review some of the commonly used motion-parallax-based 3-D reconstruction techniques and make clear the assumptions under which they are designed. To do so efficiently, we classify the reviewed reconstruction algorithms into two large categories depending on whether a prior calibration of the camera is required. Under each category, related algorithms are further grouped according to the common properties they share.", "title": "" }, { "docid": "a4267e0cd6300dc128bfe9de62322ac7", "text": "According to the most common definition, idioms are linguistic expressions whose overall meaning cannot be predicted from the meanings of the constituent parts Although we agree with the traditional view that there is no complete predictability, we suggest that there is a great deal of systematic conceptual motivation for the meaning of most idioms Since most idioms are based on conceptual metaphors and metonymies, systematic motivation arises from sets of 'conceptual mappings or correspondences' that obtain between a source and a target domain in the sense of Lakoff and Koiecses (1987) We distinguish among three aspects of idiomatic meaning First, the general meaning of idioms appears to be determined by the particular 'source domains' that apply to a particular target domain Second, more specific aspects ot idiomatic meaning are provided by the 'ontological mapping that applies to a given idiomatic expression Third, connotative aspects ot idiomatic meaning can be accounted for by 'epistemic correspondences' Finally, we also present an informal experimental study the results of which show that the cognitive semantic view can facilitate the learning of idioms for non-native speakers", "title": "" }, { "docid": "6b1a3fbdb384afded3f48dbe2978e171", "text": "This article provides a brief overview on the current development of software-defined mobile networks (SDMNs). Software defined networking is seen as a promising technology to manage the complexity in communication networks. The need for SDMN comes from the complexity of network management in 5G mobile networks and beyond, driven by increasing mobile traffic demand, heterogeneous wireless environments, and diverse service requirements. The need is strong to introduce new radio network architecture by taking advantage of software oriented design, the separation of the data and control planes, and network virtualization to manage complexity and offer flexibility in 5G networks. 
Clearly, software oriented design in mobile networks will be fundamentally different from SDN for the Internet, because mobile networks deal with the wireless access problem in complex radio environments, while the Internet mainly addresses the packet forwarding problem. Specific requirements in mobile networks shape the development of SDMN. In this article we present the needs and requirements of SDMN, with particular focus on the software-defined design for radio access networks. We analyze the fundamental problems in radio access networks that call for SDN design and present an SDMN concept. We give a brief overview on current solutions for SDMN and standardization activities. We argue that although SDN design is currently focusing on mobile core networks, extending SDN to radio access networks would naturally be the next step. We identify several research directions on SDN for radio access networks and expect more fundamental studies to release the full potential of software-defined 5G networks.", "title": "" }, { "docid": "ee9e24f38d7674e601ab13b73f3d37db", "text": "This paper presents the design of an application specific hardware for accelerating High Frequency Trading applications. It is optimized to achieve the lowest possible latency for interpreting market data feeds and hence enable minimal round-trip times for executing electronic stock trades. The implementation described in this work enables hardware decoding of Ethernet, IP and UDP as well as of the FAST protocol which is a common protocol to transmit market feeds. For this purpose, we developed a microcode engine with a corresponding instruction set as well as a compiler which enables the flexibility to support a wide range of applied trading protocols. The complete system has been implemented in RTL code and evaluated on an FPGA. Our approach shows a 4x latency reduction in comparison to the conventional Software based approach.", "title": "" }, { "docid": "22cd6bb300489d94a4b88f81de8b0cae", "text": "Closed circuit television systems (CCTV) are becoming more and more popular and are being deployed in many offices, housing estates and in most public spaces. Monitoring systems have been implemented in many European and American cities. This makes for an enormous load for the CCTV operators, as the number of camera views a single operator can monitor is limited by human factors. In this paper, we focus on the task of automated detection and recognition of dangerous situations for CCTV systems. We propose algorithms that are able to alert the human operator when a firearm or knife is visible in the image. We have focused on limiting the number of false alarms in order to allow for a real-life application of the system. The specificity and sensitivity of the knife detection are significantly better than others published recently. We have also managed to propose a version of a firearm detection algorithm that offers a near-zero rate of false alarms. We have shown that it is possible to create a system that is capable of an early warning in a dangerous situation, which may lead to faster and more effective response times and a reduction in the number of potential victims.", "title": "" }, { "docid": "525c6aa72a83e3261e4ffeab508c15cd", "text": "One of the major differences between markets that follow a \" sharing economy \" paradigm and traditional two-sided markets is that, in the sharing economy, the supply side includes individual nonprofessional decision makers, in contrast to firms and professional agents. 
Using a data set of prices and availability of listings on Airbnb, we find that there exist substantial differences in the operational and financial performances between professional and nonprofessional hosts. In particular, properties managed by professional hosts earn 16.9% more in daily revenue, have 15.5% higher occupancy rates, and are 13.6% less likely to exit the market compared with properties owned by nonprofessional hosts, even after controlling for property and market characteristics. We demonstrate that these performance discrepancies between professionals and nonprofessionals can be partly explained by pricing inefficiencies. Specifically, we provide empirical evidence that nonprofessional hosts are less likely to offer different rates across stay dates based on the underlying demand, such as major holidays and conventions. We develop a parsimonious model to analyze the implications of having two such different host groups for a profit-maximizing platform operator and for a social planner. While a profit-maximizing platform operator should charge lower prices to nonprofessional hosts, a social planner would charge the same prices to professionals and nonprofessionals.", "title": "" }, { "docid": "4eabc161187126a726a6b65f6fc6c685", "text": "In this paper, we propose a new method to estimate synthetic aperture radar interferometry (InSAR) interferometric phase in the presence of large coregistration errors. The method takes advantage of the coherence information of neighboring pixel pairs to automatically coregister the SAR images and employs the projection of the joint signal subspace onto the corresponding joint noise subspace to estimate the terrain interferometric phase. The method can automatically coregister the SAR images and reduce the interferometric phase noise simultaneously. Theoretical analysis and computer simulation results show that the method can provide accurate estimate of the terrain interferometric phase (interferogram) as the coregistration error reaches one pixel. The effectiveness of the method is also verified with the real data from the Spaceborne Imaging Radar-C/X Band SAR and the European Remote Sensing 1 and 2 satellites.", "title": "" }, { "docid": "69cca12d008d18e8516460c211beca50", "text": "This paper discusses the effective coding of Rijndael algorithm, Advanced Encryption Standard (AES) in Hardware Description Language, Verilog. In this work we analyze the structure and design of new AES, following three criteria: a) resistance against all known attacks; b) speed and code compactness on a wide range of platforms; and c) design simplicity; as well as its similarities and dissimilarities with other symmetric ciphers. On the other side, the principal advantages of new AES with respect to DES, as well as its limitations, are investigated. Thus, for example, the fact that the new cipher and its inverse use different components, which practically eliminates the possibility for weak and semi-weak keys, as existing for DES, and the non-linearity of the key expansion, which practically eliminates the possibility of equivalent keys, are two of the principal advantages of new cipher. Finally, the implementation aspects of Rijndael cipher and its inverse are treated. 
Thus, although Rijndael is well suited to be implemented efficiently on a wide range of processors and in dedicated hardware, we have concentrated our study on 8-bit processors, typical for current Smart Cards and on 32-bit processors, typical for PCs.", "title": "" }, { "docid": "8f360c907e197beb5e6fc82b081c908f", "text": "This paper describes a 3D object-space paint program. This program allows the user to directly manipulate the parameters used to shade the surface of the 3D shape by applying pigment to its surface. The pigment has all the properties normally associated with material shading models. This includes, but is not limited to, the diffuse color, the specular color, and the surface roughness. The pigment also can have thickness, which is modeled by simultaneously creating a bump map attached to the shape. The output of the paint program is a 3D model with associated texture maps. This information can be used with any rendering program with texture mapping capabilities. Almost all traditional techniques of 2D computer image painting have analogues in 3D object painting, but there are also many new techniques unique to 3D. One example is the use of solid textures to pattern the surface.", "title": "" }, { "docid": "e81f3e4ba0e7d1f1bd0205a4ff9c0aaf", "text": "Marlowe-Crowne Social Desirability Scale (MC) (Crowne & Marlowe, 1960) scores were collected on 1096 individuals involved in forensic evaluations. No prior publication of forensic norms was found for this instrument, which provides a measure of biased self-presentation (dissimulation). MC mean score was 19.42 for the sample. Also calculated was the score on Form C (MC-C) (Reynolds, 1982), and the mean for this 13-item scale was 7.61. The scores for the current sample generally are higher than those published for non-forensic groups, and statistical analysis indicated the difference was significant for both the MC and MC-C (d =.75 and.70, respectively, p <.001). Neither gender nor educational level proved to be significant factors in accounting for variance, and age did not appear to be correlated with scores. Group membership of subjects based on referral reason (family violence, abuse, neglect, competency, disability) was significant for both the MC and MC-C scores. Results suggest the MC or MC-C can be useful as part of a forensic-assessment battery to measure biased self-presentation.", "title": "" }, { "docid": "aded7e5301d40faf52942cd61a1b54ba", "text": "In this paper, a lower limb rehabilitation robot in sitting position is developed for patients with muscle weakness. The robot is a stationary based type which is able to perform various types of therapeutic exercises. For safe operation, the robot's joint is driven by two-stage cable transmission while the balance mechanism is used to reduce actuator size and transmission ratio. Control algorithms for passive, assistive and resistive exercises are designed to match characteristics of each therapeutic exercises and patients with different muscle strength. Preliminary experiments conducted with a healthy subject have demonstrated that the robot and the control algorithms are promising for lower limb rehabilitation task.", "title": "" }, { "docid": "9de7af8824594b5de7d510c81585c61b", "text": "The adoption of business process improvement strategies is a challenge to organizations trying to improve the quality and productivity of their services. 
The quest for the benefits of this improvement on resource optimization and the responsiveness of the organizations has raised several proposals for business process improvement approaches. However, proposals and results of scientific research on process improvement in higher education institutions, extremely complex and unique organizations, are still scarce. This paper presents a method that provides guidance about how practices and knowledge are gathered to contribute for business process improvement based on the communication between different stakeholders.", "title": "" }, { "docid": "081b15c3dda7da72487f5a6e96e98862", "text": "The CEDAR real-time address block location system, which determines candidates for the location of the destination address from a scanned mail piece image, is described. For each candidate destination address block (DAB), the address block location (ABL) system determines the line segmentation, global orientation, block skew, an indication of whether the address appears to be handwritten or machine printed, and a value indicating the degree of confidence that the block actually contains the destination address. With 20-MHz Sparc processors, the average time per mail piece for the combined hardware and software system components is 0.210 seconds. The system located 89.0% of the addresses as the top choice. Recent developments in the system include the use of a top-down segmentation tool, address syntax analysis using only connected component data, and improvements to the segmentation refinement routines. This has increased top choice performance to 91.4%.<<ETX>>", "title": "" }, { "docid": "4a08c16c5e091e1c6212fc606ccd854a", "text": "The problem of predicting the position of a freely foraging rat based on the ensemble firing patterns of place cells recorded from the CA1 region of its hippocampus is used to develop a two-stage statistical paradigm for neural spike train decoding. In the first, or encoding stage, place cell spiking activity is modeled as an inhomogeneous Poisson process whose instantaneous rate is a function of the animal's position in space and phase of its theta rhythm. The animal's path is modeled as a Gaussian random walk. In the second, or decoding stage, a Bayesian statistical paradigm is used to derive a nonlinear recursive causal filter algorithm for predicting the position of the animal from the place cell ensemble firing patterns. The algebra of the decoding algorithm defines an explicit map of the discrete spike trains into the position prediction. The confidence regions for the position predictions quantify spike train information in terms of the most probable locations of the animal given the ensemble firing pattern. Under our inhomogeneous Poisson model position was a three to five times stronger modulator of the place cell spiking activity than theta phase in an open circular environment. For animal 1 (2) the median decoding error based on 34 (33) place cells recorded during 10 min of foraging was 8.0 (7.7) cm. Our statistical paradigm provides a reliable approach for quantifying the spatial information in the ensemble place cell firing patterns and defines a generally applicable framework for studying information encoding in neural systems.", "title": "" }, { "docid": "d775cdc31c84d94d95dc132b88a37fae", "text": "Image guided filtering has been widely used in many image processing applications. However, it is a local filtering method and has limited propagation ability. 
In this paper, we propose a new image filtering method: nonlocal image guided averaging (NLGA). Derived from a nonlocal linear model, the proposed method can utilize the nonlocal similarity of the guidance image, so that it can propagate nonlocal information reliably. Consequently, NLGA can obtain a sharper filtering results in the edge regions and more smooth results in the smooth regions. It shows superiority over image guided filtering in different applications, such as image dehazing, depth map super-resolution and image denoising.", "title": "" }, { "docid": "e07198de4fe8ea55f2c04ba5b6e9423a", "text": "Query expansion (QE) is a well known technique to improve retrieval effectiveness, which expands original queries with extra terms that are predicted to be relevant. A recent trend in the literature is Supervised Query Expansion (SQE), where supervised learning is introduced to better select expansion terms. However, an important but neglected issue for SQE is its efficiency, as applying SQE in retrieval can be much more time-consuming than applying Unsupervised Query Expansion (UQE) algorithms. In this paper, we point out that the cost of SQE mainly comes from term feature extraction, and propose a Two-stage Feature Selection framework (TFS) to address this problem. The first stage is adaptive expansion decision, which determines if a query is suitable for SQE or not. For unsuitable queries, SQE is skipped and no term features are extracted at all, which reduces the most time cost. For those suitable queries, the second stage is cost constrained feature selection, which chooses a subset of effective yet inexpensive features for supervised learning. Extensive experiments on four corpora (including three academic and one industry corpus) show that our TFS framework can substantially reduce the time cost for SQE, while maintaining its effectiveness.", "title": "" }, { "docid": "c2a2e9903859a6a9f9b3db5696cb37ff", "text": "Depth estimation from a single image is a fundamental problem in computer vision. In this paper, we propose a simple yet effective convolutional spatial propagation network (CSPN) to learn the affinity matrix for depth prediction. Specifically, we adopt an efficient linear propagation model, where the propagation is performed with a manner of recurrent convolutional operation, and the affinity among neighboring pixels is learned through a deep convolutional neural network (CNN). We apply the designed CSPN to two depth estimation tasks given a single image: (1) Refine the depth output from existing state-of-the-art (SOTA) methods; (2) Convert sparse depth samples to a dense depth map by embedding the depth samples within the propagation procedure. The second task is inspired by the availability of LiDAR that provides sparse but accurate depth measurements. We experimented the proposed CSPN over the popular NYU v2 [1] and KITTI [2] datasets, where we show that our proposed approach improves not only quality (e.g., 30% more reduction in depth error), but also speed (e.g., 2 to 5× faster) of depth maps than previous SOTA methods.", "title": "" } ]
scidocsrr
2611b587d31078d109c9407e274b3b78
Multi-view Sentence Representation Learning
[ { "docid": "a4bb8b5b749fb8a95c06a9afab9a17bb", "text": "Many Natural Language Processing applications nowadays rely on pre-trained word representations estimated from large text corpora such as news collections, Wikipedia and Web Crawl. In this paper, we show how to train high-quality word vector representations by using a combination of known tricks that are however rarely used together. The main result of our work is the new set of publicly available pre-trained models that outperform the current state of the art by a large margin on a number of tasks.", "title": "" }, { "docid": "5664ca8d7f0f2f069d5483d4a334c670", "text": "In Semantic Textual Similarity, systems rate the degree of semantic equivalence between two text snippets. This year, the participants were challenged with new data sets for English, as well as the introduction of Spanish, as a new language in which to assess semantic similarity. For the English subtask, we exposed the systems to a diversity of testing scenarios, by preparing additional OntoNotesWordNet sense mappings and news headlines, as well as introducing new genres, including image descriptions, DEFT discussion forums, DEFT newswire, and tweet-newswire headline mappings. For Spanish, since, to our knowledge, this is the first time that official evaluations are conducted, we used well-formed text, by featuring sentences extracted from encyclopedic content and newswire. The annotations for both tasks leveraged crowdsourcing. The Spanish subtask engaged 9 teams participating with 22 system runs, and the English subtask attracted 15 teams with 38 system runs.", "title": "" }, { "docid": "ccbb7e753b974951bb658b63e91431bb", "text": "In Semantic Textual Similarity (STS), systems rate the degree of semantic equivalence, on a graded scale from 0 to 5, with 5 being the most similar. This year we set up two tasks: (i) a core task (CORE), and (ii) a typed-similarity task (TYPED). CORE is similar in set up to SemEval STS 2012 task with pairs of sentences from sources related to those of 2012, yet different in genre from the 2012 set, namely, this year we included newswire headlines, machine translation evaluation datasets and multiple lexical resource glossed sets. TYPED, on the other hand, is novel and tries to characterize why two items are deemed similar, using cultural heritage items which are described with metadata such as title, author or description. Several types of similarity have been defined, including similar author, similar time period or similar location. The annotation for both tasks leverages crowdsourcing, with relative high interannotator correlation, ranging from 62% to 87%. The CORE task attracted 34 participants with 89 runs, and the TYPED task attracted 6 teams with 14 runs.", "title": "" } ]
[ { "docid": "741efb8046bb888b944768784b87d70a", "text": "Entropy Search (ES) and Predictive Entropy Search (PES) are popular and empirically successful Bayesian Optimization techniques. Both rely on a compelling information-theoretic motivation, and maximize the information gained about the arg max of the unknown function; yet, both are plagued by the expensive computation for estimating entropies. We propose a new criterion, Max-value Entropy Search (MES), that instead uses the information about the maximum function value. We show relations of MES to other Bayesian optimization methods, and establish a regret bound. We observe that MES maintains or improves the good empirical performance of ES/PES, while tremendously lightening the computational burden. In particular, MES is much more robust to the number of samples used for computing the entropy, and hence more efficient for higher dimensional problems.", "title": "" }, { "docid": "c10bf551bdb3cb6ae25f0f8803ba6fe7", "text": "The purpose of this study is to propose a theoretical model to examine the antecedents of repurchase intention in online group-buying by integrating the perspective of DeLone & McLean IS success model and the literature of trust. The model was tested using the data collected from 253 customers of a group-buying website in Taiwan. The results show that satisfaction with website, satisfaction with sellers, and perceived quality of website have positive influences on repurchase intention, while perceived quality of website and perceived quality of sellers have significant impacts on satisfaction with website and satisfaction with sellers, respectively. The results also show that trust in website has positive influences on perceived quality of website and satisfaction with website, whereas trust in sellers influence perceived quality of sellers and satisfaction with sellers significantly. Finally, the results show that perceived size of website has positive influence on trust in website, while reputation of website and reputation of sellers significantly affect trust in website and trust in sellers, respectively. The implications for theory and practice and suggestions for future research are also discussed. 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "6ddfb4631928eec4247adf2ac033129e", "text": "Facial micro-expression recognition is an upcoming area in computer vision research. Up until the recent emergence of the extensive CASMEII spontaneous micro-expression database, there were numerous obstacles faced in the elicitation and labeling of data involving facial micro-expressions. In this paper, we propose the Local Binary Patterns with Six Intersection Points (LBP-SIP) volumetric descriptor based on the three intersecting lines crossing over the center point. The proposed LBP-SIP reduces the redundancy in LBP-TOP patterns, providing a more compact and lightweight representation; leading to more efficient computational complexity. Furthermore, we also incorporated a Gaussian multi-resolution pyramid to our proposed approach by concatenating the patterns across all pyramid levels. Using an SVM classifier with leave-one-sample-out cross validation, we achieve the best recognition accuracy of 67.21%, surpassing the baseline performance with further computational efficiency.", "title": "" }, { "docid": "305f877227516eded75819bdf48ab26d", "text": "Deep generative models have been successfully applied to many applications. 
However, existing works experience limitations when generating large images (the literature usually generates small images, e.g. 32× 32 or 128× 128). In this paper, we propose a novel scheme, called deep tensor adversarial generative nets (TGAN), that generates large high-quality images by exploring tensor structures. Essentially, the adversarial process of TGAN takes place in a tensor space. First, we impose tensor structures for concise image representation, which is superior in capturing the pixel proximity information and the spatial patterns of elementary objects in images, over the vectorization preprocess in existing works. Secondly, we propose TGAN that integrates deep convolutional generative adversarial networks and tensor super-resolution in a cascading manner, to generate high-quality images from random distributions. More specifically, we design a tensor super-resolution process that consists of tensor dictionary learning and tensor coefficients learning. Finally, on three datasets, the proposed TGAN generates images with more realistic textures, compared with state-of-the-art adversarial autoencoders. The size of the generated images is increased by over 8.5 times, namely 374× 374 in PASCAL2.", "title": "" }, { "docid": "1dbaa72cd95c32d1894750357e300529", "text": "In recognizing the importance of educating aspiring scientists in the responsible conduct of research (RCR), the Office of Research Integrity (ORI) began sponsoring the creation of instructional resources to address this pressing need in 2002. The present guide on avoiding plagiarism and other inappropriate writing practices was created to help students, as well as professionals, identify and prevent such malpractices and to develop an awareness of ethical writing and authorship. This guide is one of the many products stemming from ORI’s effort to promote the RCR.", "title": "" }, { "docid": "8bd44a21a890e7c44fec4e56ddd39af2", "text": "This paper focuses on the problem of discovering users' topics of interest on Twitter. While previous efforts in modeling users' topics of interest on Twitter have focused on building a \"bag-of-words\" profile for each user based on his tweets, they overlooked the fact that Twitter users usually publish noisy posts about their lives or create conversation with their friends, which do not relate to their topics of interest. In this paper, we propose a novel framework to address this problem by introducing a modified author-topic model named twitter-user model. For each single tweet, our model uses a latent variable to indicate whether it is related to its author's interest. Experiments on a large dataset we crawled using Twitter API demonstrate that our model outperforms traditional methods in discovering user interest on Twitter.", "title": "" }, { "docid": "99f616b614d11993c387bb1b0ed1b7c6", "text": "Accurate assessment of nutrition information is an important part in the prevention and treatment of a multitude of diseases, but remains a challenging task. We present a novel mobile augmented reality application, which assists users in the nutrition assessment of their meals. Using the realtime camera image as a guide, the user overlays a 3D form of the food. Additionally the user selects the food type. The corresponding nutrition information is automatically computed. Thus accurate volume estimation is required for accurate nutrition information assessment. 
This work presents an evaluation of our mobile augmented reality approaches for portion estimation and offers a comparison to conventional portion estimation approaches. The comparison is performed on the basis of a user study (n=28). The quality of nutrition assessment is measured based on the error in energy units. In the results of the evaluation one of our mobile augmented reality approaches significantly outperforms all other methods. Additionally we present results on the efficiency and effectiveness of the approaches.", "title": "" }, { "docid": "be9fc2798c145abe70e652b7967c3760", "text": "Given semantic descriptions of object classes, zero-shot learning aims to accurately recognize objects of the unseen classes, from which no examples are available at the training stage, by associating them to the seen classes, from which labeled examples are provided. We propose to tackle this problem from the perspective of manifold learning. Our main idea is to align the semantic space that is derived from external information to the model space that concerns itself with recognizing visual features. To this end, we introduce a set of \"phantom\" object classes whose coordinates live in both the semantic space and the model space. Serving as bases in a dictionary, they can be optimized from labeled data such that the synthesized real object classifiers achieve optimal discriminative performance. We demonstrate superior accuracy of our approach over the state of the art on four benchmark datasets for zero-shot learning, including the full ImageNet Fall 2011 dataset with more than 20,000 unseen classes.", "title": "" }, { "docid": "1cdcb24b61926f37037fbb43e6d379b7", "text": "The Internet has undergone dramatic changes in the past 2 decades and now forms a global communication platform that billions of users rely on for their daily activities. While this transformation has brought tremendous benefits to society, it has also created new threats to online privacy, such as omnipotent governmental surveillance. As a result, public interest in systems for anonymous communication has drastically increased. In this work, we survey previous research on designing, developing, and deploying systems for anonymous communication. Our taxonomy and comparative assessment provide important insights about the differences between the existing classes of anonymous communication protocols.", "title": "" }, { "docid": "732eb96d39d250e6b1355f7f4d53feed", "text": "Determine blood type is essential before administering a blood transfusion, including in emergency situation. Currently, these tests are performed manually by technicians, which can lead to human errors. Various systems have been developed to automate these tests, but none is able to perform the analysis in time for emergency situations. This work aims to develop an automatic system to perform these tests in a short period of time, adapting to emergency situations. To do so, it uses the slide test and image processing techniques using the IMAQ Vision from National Instruments. The image captured after the slide test is processed and detects the occurrence of agglutination. Next the classification algorithm determines the blood type in analysis. Finally, all the information is stored in a database. 
Thus, the system allows determining the blood type in an emergency, eliminating transfusions based on the principle of universal donor and reducing transfusion reactions risks.", "title": "" }, { "docid": "cd31be485b4b914508a5a9e7c5445459", "text": "Deep learning has become increasingly popular in both academic and industrial areas in the past years. Various domains including pattern recognition, computer vision, and natural language processing have witnessed the great power of deep networks. However, current studies on deep learning mainly focus on data sets with balanced class labels, while its performance on imbalanced data is not well examined. Imbalanced data sets exist widely in real world and they have been providing great challenges for classification tasks. In this paper, we focus on the problem of classification using deep network on imbalanced data sets. Specifically, a novel loss function called mean false error together with its improved version mean squared false error are proposed for the training of deep networks on imbalanced data sets. The proposed method can effectively capture classification errors from both majority class and minority class equally. Experiments and comparisons demonstrate the superiority of the proposed approach compared with conventional methods in classifying imbalanced data sets on deep neural networks.", "title": "" }, { "docid": "d75b9005a0a861e29977fda36780b947", "text": "Classifying traffic signs is an indispensable part of Advanced Driver Assistant Systems. This strictly requires that the traffic sign classification model accurately classifies the images and consumes as few CPU cycles as possible to immediately release the CPU for other tasks. In this paper, we first propose a new ConvNet architecture. Then, we propose a new method for creating an optimal ensemble of ConvNets with highest possible accuracy and lowest number of ConvNets. Our experiments show that the ensemble of our proposed ConvNets (the ensemble is also constructed using our method) reduces the number of arithmetic operations 88 and 73% compared with two state-of-art ensemble of ConvNets. In addition, our ensemble is 0.1% more accurate than one of the state-of-art ensembles and it is only 0.04% less accurate than the other state-of-art ensemble when tested on the same dataset. Moreover, ensemble of our compact ConvNets reduces the number of the multiplications 95 and 88%, yet, the classification accuracy drops only 0.2 and 0.4% compared with these two ensembles. Besides, we also evaluate the cross-dataset performance of our ConvNet and analyze its transferability power in different layers. We show that our network is easily scalable to new datasets with much more number of traffic sign classes and it only needs to fine-tune the weights starting from the last convolution layer. We also assess our ConvNet through different visualization techniques. Besides, we propose a new method for finding the minimum additive noise which causes the network to incorrectly classify the image by minimum difference compared with the highest score in the loss vector.", "title": "" }, { "docid": "e30ae0b5cd90d091223ab38596de3109", "text": "We describe a consistent hashing algorithm which performs multiple lookups per key in a hash table of nodes. 
It requires no additional storage beyond the hash table, and achieves a peak-to-average load ratio of 1 + ε with just 1 + 1/ε lookups per key.", "title": "" }, { "docid": "845398d098de3ae423f02ad43f255cbb", "text": "This document describes COBRA-ONT, an ontology for supporting pervasive context-aware systems. COBRA-ONT, expressed in the Web Ontology Language OWL, is a collection of ontologies for describing places, agents and events and their associated properties in an intelligent meeting-room domain. This ontology is developed as a part of the Context Broker Architecture (CoBrA), a broker-centric agent architecture that provides knowledge sharing, context reasoning and privacy protection supports for pervasive context-aware systems. We also describe an inference engine for reasoning with information expressed using the COBRA-ONT ontology and the ongoing research in using the DAML-Time ontology for context reasoning.", "title": "" }, { "docid": "f9eff7a4652f6242911f41ba180f75ed", "text": "The last ten years have seen a significant increase in computationally relevant research seeking to build models of narrative and its use. These efforts have focused in and/or drawn from a range of disciplines, including narrative theory. Many of these research efforts have been informed by a focus on the development of an explicit model of narrative and its function. Computational approaches from artificial intelligence (AI) are particularly well-suited to such modeling tasks, as they typically involve precise definitions of aspects of some domain of discourse and well-defined algorithms for reasoning over those definitions. In the case of narrative modeling, there is a natural fit with AI techniques. AI approaches often concern themselves with representing and reasoning about some real world domain of discourse – a microworld where inferences must be made in order to draw conclusions about some higher order property of the world or to explain, predict, control or communicate about the microworld's dynamic state. In this regard, the fictional worlds created by storytellers and the ways that we communicate about them suggest promising and immediate analogs for application of existing AI methods. One of the most immediate analogs between AI research and narrative models lies in the area of reasoning about actions and plans. The goals and plans that characters form and act upon within a story are the primary elements of the story's plot. At first glance, story plans have many of the same features as knowledge representations developed by AI researchers to characterize the plans formed by industrial robots operating to assemble automobile parts on a factory floor or by autonomous vehicles traversing unknown physical landscapes. As we will discuss below, planning representations have offered significant promise in modeling plot structure. Equally as significantly, however, is their ability to be used by intelligent algorithms in the automatic creation of plot lines. 
Just as AI planning systems can produce new plans to achieve an agent's goals in the face of a unanticipated execution context, so too may planning systems work to produce the plans of a collection of characters as they scheme to obtain, thwart, overcome or succeed.", "title": "" }, { "docid": "cf413b8e64aabbf7f3c1714759eb2ec7", "text": "Many significant real-world classification tasks involve a large number of categories which are arranged in a hierarchical structure; for example, classifying documents into subject categories under the library of congress scheme, or classifying world-wide-web documents into topic hierarchies. We investigate the potential benefits of using a given hierarchy over base classes to learn accurate multi-category classifiers for these domains. First, we consider the possibility of exploiting a class hierarchy as prior knowledge that can help one learn a more accurate classifier. We explore the benefits of learning categorydiscriminants in a “hard” top-down fashion and compare this to a “soft” approach which shares training data among sibling categories. In doing so, we verify that hierarchies have the potential to improve prediction accuracy. But we argue that the reasons for this can be subtle. Sometimes, the improvement is only because using a hierarchy happens to constrain the expressiveness of a hypothesis class in an appropriate manner. However, various controlled experiments show that in other cases the performance advantage associated with using a hierarchy really does seem to be due to the “prior knowledge” it encodes.", "title": "" }, { "docid": "2759e52ca38436b7f07bd64e6092884f", "text": "This paper proposes a method of eye-model-based gaze estimation by RGB-D camera, Kinect sensor. Different from other methods, our method sets up a model to calibrate the eyeball center by gazing at a target in 3D space, not predefined. And then by detecting the pupil center, we can estimate the gaze direction. To achieve this algorithm, we first build a head model relying on Kinect sensor, then obtaining the 3D information of pupil center. As we need to know the eyeball center position in head model, we do a calibration by designing a target to gaze. Because the ray from eyeball center to target and the ray from eyeball center to pupil center should meet a relationship, we can have an equation to solve the real eyeball center position. After calibration, we can have a gaze estimation automatically at any time. Our method allows free head motion and it only needs a simple device, finally it also can run automatically in real-time. Experiments show that our method performs well and still has a room for improvement.", "title": "" }, { "docid": "4fb76fb4daa5490dca902c9177c9b465", "text": "An improved faster region-based convolutional neural network (R-CNN) [same object retrieval (SOR) faster R-CNN] is proposed to retrieve the same object in different scenes with few training samples. By concatenating the feature maps of shallow and deep convolutional layers, the ability of Regions of Interest (RoI) pooling to extract more detailed features is improved. In the training process, a pretrained CNN model is fine-tuned using a query image data set, so that the confidence score can identify an object proposal to the object level rather than the classification level. In the query process, we first select the ten images for which the object proposals have the closest confidence scores to the query object proposal. 
Then, the image for which the detected object proposal has the minimum cosine distance to the query object proposal is considered as the query result. The proposed SOR faster R-CNN is applied to our Coke cans data set and three public image data sets, i.e., Oxford Buildings 5k, Paris Buildings 6k, and INS 13. The experimental results confirm that SOR faster R-CNN has better identification performance than fine-tuned faster R-CNN. Moreover, SOR faster R-CNN achieves much higher accuracy for detecting low-resolution images than the fine-tuned faster R-CNN on the Coke cans (0.094 mAP higher), Oxford Buildings (0.043 mAP higher), Paris Buildings (0.078 mAP higher), and INS 13 (0.013 mAP higher) data sets.", "title": "" }, { "docid": "1ff5526e4a18c1e59b63a3de17101b11", "text": "Plug-in electric vehicles (PEVs) are equipped with onboard level-1 or level-2 chargers for home overnight or office daytime charging. In addition, off-board chargers can provide fast charging for traveling long distances. However, off-board high-power chargers are bulky, expensive, and require comprehensive evolution of charging infrastructures. An integrated onboard charger capable of fast charging of PEVs will combine the benefits of both the conventional onboard and off-board chargers, without additional weight, volume, and cost. In this paper, an innovative single-phase integrated charger, using the PEV propulsion machine and its traction converter, is introduced. The charger topology is capable of power factor correction and battery voltage/current regulation without any bulky add-on components. Ac machine windings are utilized as mutually coupled inductors, to construct a two-channel interleaved boost converter. The circuit analyses of the proposed technology, based on a permanent magnet synchronous machine (PMSM), are discussed in details. Experimental results of a 3-kW proof-of-concept prototype are carried out using a ${\\textrm{220-V}}_{{\\rm{rms}}}$, 3-phase, 8-pole PMSM. A nearly unity power factor and 3.96% total harmonic distortion of input ac current are acquired with a maximum efficiency of 93.1%.", "title": "" }, { "docid": "6821d4c1114e007453578dd90600db15", "text": "Our goal is to assess the strategic and operational benefits of electronic integration for industrial procurement. We conduct a field study with an industrial supplier and examine the drivers of performance of the procurement process. Our research quantifies both the operational and strategic impacts of electronic integration in a B2B procurement environment for a supplier. Additionally, we show that the customer also obtains substantial benefits from efficient procurement transaction processing. We isolate the performance impact of technology choice and ordering processes on both the trading partners. A significant finding is that the supplier derives large strategic benefits when the customer initiates the system and the supplier enhances the system’s capabilities. With respect to operational benefits, we find that when suppliers have advanced electronic linkages, the order-processing system significantly increases benefits to both parties. (Business Value of IT; Empirical Assessment; Electronic Integration; Electronic Procurement; B2B; Strategic IT Impact; Operational IT Impact)", "title": "" } ]
scidocsrr
5482469ec3f304c0e5052cf269e6e52e
Velocity and Acceleration Cones for Kinematic and Dynamic Constraints on Omni-Directional Mobile Robots
[ { "docid": "b09dd4fee4d7cdce61c153a822eadb65", "text": "A dynamic model is presented for omnidirectional wheeled mobile robots, including wheel/motion surface slip. We derive the dynamics model, experimentally measure friction coefficients, and measure the force to cause slip (to validate our friction model). Dynamic simulation examples are presented to demonstrate omnidirectional motion with slip. After developing an improved friction model, compared to our initial model, the simulation results agree well with experimentally-measured trajectory data with slip. Initially, we thought that only high robot velocity and acceleration governed the resulting slipping motion. However, we learned that the rigid material existing in the discontinuities between omnidirectional wheel rollers plays an equally important role in determining omnidirectional mobile robot dynamic slip motion, even at low rates and accelerations.", "title": "" } ]
[ { "docid": "62fa4f8712a4fcc1a3a2b6148bd3589b", "text": "In this paper we discuss the development and application of a large formal ontology to the semantic web. The Suggested Upper Merged Ontology (SUMO) (Niles & Pease, 2001) (SUMO, 2002) is a “starter document” in the IEEE Standard Upper Ontology effort. This upper ontology is extremely broad in scope and can serve as a semantic foundation for search, interoperation, and communication on the semantic web.", "title": "" }, { "docid": "c8a2ba8f47266d0a63281a5abb5fa47f", "text": "Hair plays an important role in human appearance. However, hair segmentation is still a challenging problem partially due to the lack of an effective model to handle its arbitrary shape variations. In this paper, we present a part-based model robust to hair shape and environment variations. The key idea of our method is to identify local parts by promoting the effectiveness of the part-based model. To this end, we propose a measurable statistic, called Subspace Clustering Dependency (SC-Dependency), to estimate the co-occurrence probabilities between local shapes. SC-Dependency guarantees output reasonability and allows us to evaluate the effectiveness of part-wise constraints in an information-theoretic way. Then we formulate the part identification problem as an MRF that aims to optimize the effectiveness of the potential functions. Experiments are performed on a set of consumer images and show our algorithm's capability and robustness to handle hair shape variations and extreme environment conditions.", "title": "" }, { "docid": "bfd834ddda77706264fa458302549325", "text": "Deep learning has emerged as a new methodology with continuous interests in artificial intelligence, and it can be applied in various business fields for better performance. In fashion business, deep learning, especially Convolutional Neural Network (CNN), is used in classification of apparel image. However, apparel classification can be difficult due to various apparel categories and lack of labeled image data for each category. Therefore, we propose to pre-train the GoogLeNet architecture on ImageNet dataset and fine-tune on our fine-grained fashion dataset based on design attributes. This will complement the small size of dataset and reduce the training time. After 10-fold experiments, the average final test accuracy results 62%.", "title": "" }, { "docid": "317b7998eb27384c1655dd9f4dca1787", "text": "Composite rhytidectomy added the repositioning of the orbicularis oculi muscle to the deep plane face lift to achieve a more harmonious appearance of the face by adding periorbital rejuvenation. By not separating the orbicularis oculi from the zygomaticus minor and by extending the dissection under medial portions of the zygomaticus major and minor muscles, a more significant improvement in composite rhytidectomy can now be achieved. A thin nonrestrictive mesentery between the deep plane face lift dissection and the zygorbicular dissection still allows vertical movement of the composite face lift flap without interrupting the intimate relationship between the platysma, cheek fat, and orbicularis oculi muscle. This modification eliminates the occasional prolonged edema and occasional temporary dystonia previously observed. It allows the continuation of the use of the arcus marginalis release, which has also been modified by resetting the septum orbitale over the orbital rim. These two modifications allow a more predictable and impressive result. 
They reinforce the concept of periorbital rejuvenation as an integral part of facial rejuvenation, which not only produces a more harmonious immediate result but prevents the possible unfavorable sequelae of conventional rhytidectomy and lower blepharoplasty.", "title": "" }, { "docid": "9b37cc1d96d9a24e500c572fa2cb339a", "text": "Site-based or topic-specific search engines work with mixed success because of the general difficulty of the information retrieval task, and the lack of good link information to allow authorities to be identified. We are advocating an open source approach to the problem due to its scope and need for software components. We have adopted a topic-based search engine because it represents the next generation of capability. This paper outlines our scalable system for site-based or topic-specific search, and demonstrates the developing system on a small 250,000 document collection of EU and UN web pages.", "title": "" }, { "docid": "02d8c55750904b7f4794139bcfa51693", "text": "BACKGROUND\nMore than one-third of deaths during the first five years of life are attributed to undernutrition, which are mostly preventable through economic development and public health measures. To alleviate this problem, it is necessary to determine the nature, magnitude and determinants of undernutrition. However, there is lack of evidence in agro-pastoralist communities like Bule Hora district. Therefore, this study assessed magnitude and factors associated with undernutrition in children who are 6-59 months of age in agro-pastoral community of Bule Hora District, South Ethiopia.\n\n\nMETHODS\nA community based cross-sectional study design was used to assess the magnitude and factors associated with undernutrition in children between 6-59 months. A structured questionnaire was used to collect data from 796 children paired with their mothers. Anthropometric measurements and determinant factors were collected. SPSS version 16.0 statistical software was used for analysis. Bivariate and multivariate logistic regression analyses were conducted to identify factors associated to nutritional status of the children Statistical association was declared significant if p-value was less than 0.05.\n\n\nRESULTS\nAmong study participants, 47.6%, 29.2% and 13.4% of them were stunted, underweight, and wasted respectively. Presence of diarrhea in the past two weeks, male sex, uneducated fathers and > 4 children ever born to a mother were significantly associated with being underweight. Presence of diarrhea in the past two weeks, male sex and pre-lacteal feeding were significantly associated with stunting. Similarly, presence of diarrhea in the past two weeks, age at complementary feed was started and not using family planning methods were associated to wasting.\n\n\nCONCLUSION\nUndernutrition is very common in under-five children of Bule Hora district. Factors associated to nutritional status of children in agro-pastoralist are similar to the agrarian community. Diarrheal morbidity was associated with all forms of Protein energy malnutrition. Family planning utilization decreases the risk of stunting and underweight. Feeding practices (pre-lacteal feeding and complementary feeding practice) were also related to undernutrition. Thus, nutritional intervention program in Bule Hora district in Ethiopia should focus on these factors.", "title": "" }, { "docid": "dd92ee7d7f38cda187bfb26e9d4d258b", "text": "Crowdsourcing” is a relatively recent concept that encompasses many practices. 
This diversity leads to the blurring of the limits of crowdsourcing that may be identified virtually with any type of Internet-based collaborative activity, such as co-creation or user innovation. Varying definitions of crowdsourcing exist and therefore, some authors present certain specific examples of crowdsourcing as paradigmatic, while others present the same examples as the opposite. In this paper, existing definitions of crowdsourcing are analyzed to extract common elements and to establish the basic characteristics of any crowdsourcing initiative. Based on these existing definitions, an exhaustive and consistent definition for crowdsourcing is presented and contrasted in eleven cases.", "title": "" }, { "docid": "03a55678d5f25f710274323abf71f48c", "text": "Ontologies are an explicit specification of a conceptualization, that is understood to be an abstract and simplified version of the world to be represented. In recent years, ontologies have been used in Ubiquitous Computing, especially for the development of context-aware applications. In this paper, we offer a taxonomy for classifying ontologies used in Ubiquitous Computing, in which two main categories are distinguished: Domain ontologies, created to represent and communicate agreed knowledge within some sub-domain of Ubiquitous Computing; and Ontologies as software artifacts, when ontologies play the role of an additional type of artifact in ubiquitous computing applications. The latter category is subdivided according with the moment in that ontologies are used: at development time or at run time. Also, we analyze and classify (based on this taxonomy) some recently published works.", "title": "" }, { "docid": "72f3800a072c2844f6ec145788c0749e", "text": "In Augmented Reality (AR), interfaces consist of a blend of both real and virtual content. In this paper we examine existing gaming styles played in the real world or on computers. We discuss the strengths and weaknesses of these mediums within an informal model of gaming experience split into four aspects; physical, mental, social and emotional. We find that their strengths are mostly complementary, and argue that games built in AR can blend them to enhance existing game styles and open up new ones. To illustrate these ideas, we present our work on AR Worms, a re-implementation of the classic computer game Worms using Augmented Reality. We discuss how AR has enabled us to start exploring interfaces for gaming, and present informal observations of players at several demonstrations. Finally, we present some ideas for AR games in the area of strategy and role playing games.", "title": "" }, { "docid": "98b603ed5be37165cc22da7650023d7d", "text": "One reason that word learning presents a challenge for children is because pairings between word forms and meanings are arbitrary conventions that children must learn via observation - e.g., the fact that \"shovel\" labels shovels. The present studies explore cases in which children might bypass observational learning and spontaneously infer new word meanings: By exploiting the fact that many words are flexible and systematically encode multiple, related meanings. For example, words like shovel and hammer are nouns for instruments, and verbs for activities involving those instruments. 
The present studies explored whether 3- to 5-year-old children possess semantic generalizations about lexical flexibility, and can use these generalizations to infer new word meanings: Upon learning that dax labels an activity involving an instrument, do children spontaneously infer that dax can also label the instrument itself? Across four studies, we show that at least by age four, children spontaneously generalize instrument-activity flexibility to new words. Together, our findings point to a powerful way in which children may build their vocabulary, by leveraging the fact that words are linked to multiple meanings in systematic ways.", "title": "" }, { "docid": "d71040311b8753299377b02023ba5b4c", "text": "Learning based methods have shown very promising results for the task of depth estimation in single images. However, most existing approaches treat depth prediction as a supervised regression problem and as a result, require vast quantities of corresponding ground truth depth data for training. Just recording quality depth data in a range of environments is a challenging problem. In this paper, we innovate beyond existing approaches, replacing the use of explicit depth data during training with easier-to-obtain binocular stereo footage. We propose a novel training objective that enables our convolutional neural network to learn to perform single image depth estimation, despite the absence of ground truth depth data. Exploiting epipolar geometry constraints, we generate disparity images by training our network with an image reconstruction loss. We show that solving for image reconstruction alone results in poor quality depth images. To overcome this problem, we propose a novel training loss that enforces consistency between the disparities produced relative to both the left and right images, leading to improved performance and robustness compared to existing approaches. Our method produces state of the art results for monocular depth estimation on the KITTI driving dataset, even outperforming supervised methods that have been trained with ground truth depth.", "title": "" }, { "docid": "cc8b634daad1088aa9f4c43222fab279", "text": "In this paper, a comparison between the conventional LSTM network and the one-dimensional grid LSTM network applied on single word speech recognition is conducted. The performance of the networks is measured in terms of accuracy and training time. The conventional LSTM model is the current state of the art method to model speech recognition. However, the grid LSTM architecture has proven to be successful in solving other empirical tasks such as translation and handwriting recognition. When implementing the two networks in the same training framework with the same training data of single word audio files, the conventional LSTM network yielded an accuracy rate of 64.8 % while the grid LSTM network yielded an accuracy rate of 65.2 %. Statistically, there was no difference in the accuracy rate between the models. In addition, the conventional LSTM network took 2 % longer to train. However, this difference in training time is considered to be of little significance when translating it to absolute time. Thus, it can be concluded that the one-dimensional grid LSTM model performs just as well as the conventional one.", "title": "" }, { "docid": "157c084aa6622c74449f248f98314051", "text": "A magnetically-tuned multi-mode VCO featuring an ultra-wide frequency tuning range is presented. 
By changing the magnetic coupling coefficient between the primary and secondary coils in the transformer tank, the frequency tuning range of a dual-band VCO is greatly increased to continuously cover the whole E-band. Fabricated in a 65-nm CMOS process, the presented VCO measures a tuning range of 44.2% from 57.5 to 90.1 GHz while consuming 7mA to 9mA at 1.2V supply. The measured phase noises at 10MHz offset from carrier frequencies of 72.2, 80.5 and 90.1 GHz are -111.8, -108.9 and -105 dBc/Hz, respectively, which corresponds to a FOMT between -192.2 and -184.2dBc/Hz.", "title": "" }, { "docid": "14d9343bbe4ad2dd4c2c27cb5d6795cd", "text": "In the paper a method of translation applied in a new system TGT is discussed. TGT translates texts written in Polish into corresponding utterances in the Polish sign language. Discussion is focused on text-into-text translation phase. Proper translation is done on the level of a predicative representation of the sentence. The representation is built on the basis of syntactic graph that depicts the composition and mutual connections of syntactic groups, which exist in the sentence and are identified at the syntactic analysis stage. An essential element of translation process is complementing the initial predicative graph with nodes, which correspond to lacking sentence members. The method acts for primitive sentences as well as for compound ones, with some limitations, however. A translation example is given which illustrates main transformations done on the linguistic level. It is complemented by samples of images generated by the animating part of the system.", "title": "" }, { "docid": "2438a082eac9852d3dbcea22aa0402b2", "text": "Importance\nDietary modification remains key to successful weight loss. Yet, no one dietary strategy is consistently superior to others for the general population. Previous research suggests genotype or insulin-glucose dynamics may modify the effects of diets.\n\n\nObjective\nTo determine the effect of a healthy low-fat (HLF) diet vs a healthy low-carbohydrate (HLC) diet on weight change and if genotype pattern or insulin secretion are related to the dietary effects on weight loss.\n\n\nDesign, Setting, and Participants\nThe Diet Intervention Examining The Factors Interacting with Treatment Success (DIETFITS) randomized clinical trial included 609 adults aged 18 to 50 years without diabetes with a body mass index between 28 and 40. The trial enrollment was from January 29, 2013, through April 14, 2015; the date of final follow-up was May 16, 2016. Participants were randomized to the 12-month HLF or HLC diet. The study also tested whether 3 single-nucleotide polymorphism multilocus genotype responsiveness patterns or insulin secretion (INS-30; blood concentration of insulin 30 minutes after a glucose challenge) were associated with weight loss.\n\n\nInterventions\nHealth educators delivered the behavior modification intervention to HLF (n = 305) and HLC (n = 304) participants via 22 diet-specific small group sessions administered over 12 months. 
The sessions focused on ways to achieve the lowest fat or carbohydrate intake that could be maintained long-term and emphasized diet quality.\n\n\nMain Outcomes and Measures\nPrimary outcome was 12-month weight change and determination of whether there were significant interactions among diet type and genotype pattern, diet and insulin secretion, and diet and weight loss.\n\n\nResults\nAmong 609 participants randomized (mean age, 40 [SD, 7] years; 57% women; mean body mass index, 33 [SD, 3]; 244 [40%] had a low-fat genotype; 180 [30%] had a low-carbohydrate genotype; mean baseline INS-30, 93 μIU/mL), 481 (79%) completed the trial. In the HLF vs HLC diets, respectively, the mean 12-month macronutrient distributions were 48% vs 30% for carbohydrates, 29% vs 45% for fat, and 21% vs 23% for protein. Weight change at 12 months was -5.3 kg for the HLF diet vs -6.0 kg for the HLC diet (mean between-group difference, 0.7 kg [95% CI, -0.2 to 1.6 kg]). There was no significant diet-genotype pattern interaction (P = .20) or diet-insulin secretion (INS-30) interaction (P = .47) with 12-month weight loss. There were 18 adverse events or serious adverse events that were evenly distributed across the 2 diet groups.\n\n\nConclusions and Relevance\nIn this 12-month weight loss diet study, there was no significant difference in weight change between a healthy low-fat diet vs a healthy low-carbohydrate diet, and neither genotype pattern nor baseline insulin secretion was associated with the dietary effects on weight loss. In the context of these 2 common weight loss diet approaches, neither of the 2 hypothesized predisposing factors was helpful in identifying which diet was better for whom.\n\n\nTrial Registration\nclinicaltrials.gov Identifier: NCT01826591.", "title": "" }, { "docid": "bb43c98d05f3844354862d39f6fa1d2d", "text": "There are always frustrations for drivers in finding parking spaces and being protected from auto theft. In this paper, to minimize the drivers' hassle and inconvenience, we propose a new intelligent secure privacy-preserving parking scheme through vehicular communications. The proposed scheme is characterized by employing parking lot RSUs to surveil and manage the whole parking lot and is enabled by communication between vehicles and the RSUs. Once vehicles that are equipped with wireless communication devices, which are also known as onboard units, enter the parking lot, the RSUs communicate with them and provide the drivers with real-time parking navigation service, secure intelligent antitheft protection, and friendly parking information dissemination. In addition, the drivers' privacy is not violated. Performance analysis through extensive simulations demonstrates the efficiency and practicality of the proposed scheme.", "title": "" }, { "docid": "bee4d4ba947d87b86abc02852c39d2b3", "text": "Aim\nThe study assessed the documentation of nursing care before, during and after the Standardized Nursing Language Continuing Education Programme (SNLCEP). It evaluates the differences in documentation of nursing care in different nursing specialty areas and assessed the influence of work experience on the quality of documentation of nursing care with a view to provide information on documentation of nursing care. The instrument used was an adapted scoring guide for nursing diagnosis, nursing intervention and nursing outcome (Q-DIO).\n\n\nDesign\nRetrospective record reviews design was used.\n\n\nMethods\nA total of 270 nursing process booklets formed the sample size. 
From each ward, 90 booklets were selected in this order: 30 booklets before the SNLCEP, 30 booklets during SNLCEP and 30 booklets after SNLCEP.\n\n\nResults\nOverall, the study concluded that the SNLCEP had a significant effect on the quality of documentation of nursing care using Standardized Nursing Languages.", "title": "" }, { "docid": "938e44b4c03823584d9f9fb9209a9b1e", "text": "The competitive MNIST handwritten digit recognition benchmark has a long history of broken records since 1998. The most recent substantial improvement by others dates back 7 years (error rate 0.4%) . Recently we were able to significantly improve this result, using graphics cards to greatly speed up training of simple but deep MLPs, which achieved 0.35%, outperforming all the previous more complex methods. Here we report another substantial improvement: 0.31% obtained using a committee of MLPs.", "title": "" }, { "docid": "fe687739626916780ff22d95cf89f758", "text": "In this paper, we address the problem of jointly summarizing large sets of Flickr images and YouTube videos. Starting from the intuition that the characteristics of the two media types are different yet complementary, we develop a fast and easily-parallelizable approach for creating not only high-quality video summaries but also novel structural summaries of online images as storyline graphs. The storyline graphs can illustrate various events or activities associated with the topic in a form of a branching network. The video summarization is achieved by diversity ranking on the similarity graphs between images and video frames. The reconstruction of storyline graphs is formulated as the inference of sparse time-varying directed graphs from a set of photo streams with assistance of videos. For evaluation, we collect the datasets of 20 outdoor activities, consisting of 2.7M Flickr images and 16K YouTube videos. Due to the large-scale nature of our problem, we evaluate our algorithm via crowdsourcing using Amazon Mechanical Turk. In our experiments, we demonstrate that the proposed joint summarization approach outperforms other baselines and our own methods using videos or images only.", "title": "" }, { "docid": "0b61d0ffe709d29e133ead6d6211a003", "text": "The hypothesis that Enterococcus faecalis resists common intracanal medications by forming biofilms was tested. E. faecalis colonization of 46 extracted, medicated roots was observed with scanning electron microscopy (SEM) and scanning confocal laser microscopy. SEM detected colonization of root canals medicated with calcium hydroxide points and the positive control within 2 days. SEM detected biofilms in canals medicated with calcium hydroxide paste in an average of 77 days. Scanning confocal laser microscopy analysis of two calcium hydroxide paste medicated roots showed viable colonies forming in a root canal infected for 86 days, whereas in a canal infected for 160 days, a mushroom-shape typical of a biofilm was observed. Analysis by sodium dodecyl sulfate polyacrylamide gel electrophoresis showed no differences between the protein profiles of bacteria in free-floating (planktonic) and inoculum cultures. Analysis of biofilm bacteria was inconclusive. These observations support potential E. faecalis biofilm formation in vivo in medicated root canals.", "title": "" } ]
scidocsrr
1e0b95ca31bb557a980e9560c4e479c5
Trilinear Tensor: The Fundamental Construct of Multiple-view Geometry and Its Applications
[ { "docid": "5aa5ebf7727ea1b5dcf4d8f74b13cb29", "text": "Visual object recognition requires the matching of an image with a set of models stored in memory. In this paper, we propose an approach to recognition in which a 3-D object is represented by the linear combination of 2-D images of the object. IfJLk{M1,.” .Mk} is the set of pictures representing a given object and P is the 2-D image of an object to be recognized, then P is considered to be an instance of M if P= C~=,aiMi for some constants (pi. We show that this approach handles correctly rigid 3-D transformations of objects with sharp as well as smooth boundaries and can also handle nonrigid transformations. The paper is divided into two parts. In the first part, we show that the variety of views depicting the same object under different transformations can often be expressed as the linear combinations of a small number of views. In the second part, we suggest how this linear combination property may be used in the recognition process.", "title": "" } ]
[ { "docid": "2476c8b7f6fe148ab20c29e7f59f5b23", "text": "A high temperature, wire-bondless power electronics module with a double-sided cooling capability is proposed and successfully fabricated. In this module, a low-temperature co-fired ceramic (LTCC) substrate was used as the dielectric and chip carrier. Conducting vias were created on the LTCC carrier to realize the interconnection. The absent of a base plate reduced the overall thermal resistance and also improved the fatigue life by eliminating a large-area solder layer. Nano silver paste was used to attach power devices to the DBC substrate as well as to pattern the gate connection. Finite element simulations were used to compare the thermal performance to several reported double-sided power modules. Electrical measurements of a SiC MOSFET and SiC diode switching position demonstrated the functionality of the module.", "title": "" }, { "docid": "65ed76ddd6f7fd0aea717d2e2643dd16", "text": "In semi-supervised learning, a number of labeled examples are usually required for training an initial weakly useful predictor which is in turn used for exploiting the unlabeled examples. However, in many real-world applications there may exist very few labeled training examples, which makes the weakly useful predictor difficult to generate, and therefore these semisupervised learning methods cannot be applied. This paper proposes a method working under a two-view setting. By taking advantages of the correlations between the views using canonical component analysis, the proposed method can perform semi-supervised learning with only one labeled training example. Experiments and an application to content-based image retrieval validate the effectiveness of the proposed method.", "title": "" }, { "docid": "8e0b61e82179cc39b4df3d06448a3d14", "text": "The antibacterial activity and antioxidant effect of the compounds α-terpineol, linalool, eucalyptol and α-pinene obtained from essential oils (EOs), against pathogenic and spoilage forming bacteria were determined. The antibacterial activities of these compounds were observed in vitro on four Gram-negative and three Gram-positive strains. S. putrefaciens was the most resistant bacteria to all tested components, with MIC values of 2% or higher, whereas E. coli O157:H7 was the most sensitive strain among the tested bacteria. Eucalyptol extended the lag phase of S. Typhimurium, E. coli O157:H7 and S. aureus at the concentrations of 0.7%, 0.6% and 1%, respectively. In vitro cell growth experiments showed the tested compounds had toxic effects on all bacterial species with different level of potency. Synergistic and additive effects were observed at least one dose pair of combination against S. Typhimurium, E. coli O157:H7 and S. aureus, however antagonistic effects were not found in these combinations. The results of this first study are encouraging for further investigations on mechanisms of antimicrobial activity of these EO components.", "title": "" }, { "docid": "204ad3064d559c345caa2c6d1a140582", "text": "In this paper, a face recognition method based on Convolution Neural Network (CNN) is presented. This network consists of three convolution layers, two pooling layers, two full-connected layers and one Softmax regression layer. Stochastic gradient descent algorithm is used to train the feature extractor and the classifier, which can extract the facial features and classify them automatically. The Dropout method is used to solve the over-fitting problem. 
The Convolution Architecture For Feature Extraction framework (Caffe) is used during the training and testing process. The face recognition rate of the ORL face database and AR face database based on this network is 99.82% and 99.78%.", "title": "" }, { "docid": "c8948a93e138ca0ac8cae3247dc9c81a", "text": "Sharpness is an important determinant in visual assessment of image quality. The human visual system is able to effortlessly detect blur and evaluate sharpness of visual images, but the underlying mechanism is not fully understood. Existing blur/sharpness evaluation algorithms are mostly based on edge width, local gradient, or energy reduction of global/local high frequency content. Here we understand the subject from a different perspective, where sharpness is identified as strong local phase coherence (LPC) near distinctive image features evaluated in the complex wavelet transform domain. Previous LPC computation is restricted to be applied to complex coefficients spread in three consecutive dyadic scales in the scale-space. Here we propose a flexible framework that allows for LPC computation in arbitrary fractional scales. We then develop a new sharpness assessment algorithm without referencing the original image. We use four subject-rated publicly available image databases to test the proposed algorithm, which demonstrates competitive performance when compared with state-of-the-art algorithms.", "title": "" }, { "docid": "34bd41f7384d6ee4d882a39aec167b3e", "text": "This paper presents a robust feedback controller for ball and beam system (BBS). The BBS is a nonlinear system in which a ball has to be balanced on a particular beam position. The proposed nonlinear controller designed for the BBS is based upon Backstepping control technique which guarantees the boundedness of tracking error. To tackle the unknown disturbances, an external disturbance estimator (EDE) has been employed. The stability analysis of the overall closed loop robust control system has been worked out in the sense of Lyapunov theory. Finally, the simulation studies have been done to demonstrate the suitability of proposed scheme.", "title": "" }, { "docid": "bc8950644ded24618a65c4fcef302044", "text": "Child maltreatment is a pervasive problem in our society that has long-term detrimental consequences to the development of the affected child such as future brain growth and functioning. In this paper, we surveyed empirical evidence on the neuropsychological effects of child maltreatment, with a special emphasis on emotional, behavioral, and cognitive process–response difficulties experienced by maltreated children. The alteration of the biochemical stress response system in the brain that changes an individual’s ability to respond efficiently and efficaciously to future stressors is conceptualized as the traumatic stress response. Vulnerable brain regions include the hypothalamic–pituitary–adrenal axis, the amygdala, the hippocampus, and prefrontal cortex and are linked to children’s compromised ability to process both emotionally-laden and neutral stimuli in the future. It is suggested that information must be garnered from varied literatures to conceptualize a research framework for the traumatic stress response in maltreated children. 
This research framework suggests an altered developmental trajectory of information processing and emotional dysregulation, though much debate still exists surrounding the correlational nature of empirical studies, the potential of resiliency following childhood trauma, and the extent to which early interventions may facilitate recovery.", "title": "" }, { "docid": "f4baeef21537029511a59edbbe7f2741", "text": "Software testing requires the use of a model to guide such efforts as test selection and test verification. Often, such models are implicit, existing only in the head of a human tester, applying test inputs in an ad hoc fashion. The mental model testers build encapsulates application behavior, allowing testers to understand the application’s capabilities and more effectively test its range of possible behaviors. When these mental models are written down, they become sharable, reusable testing artifacts. In this case, testers are performing what has become to be known as model-based testing. Model-based testing has recently gained attention with the popularization of models (including UML) in software design and development. There are a number of models of software in use today, a few of which make good models for testing. This paper introduces model-based testing and discusses its tasks in general terms with finite state models (arguably the most popular software models) as examples. In addition, advantages, difficulties, and shortcoming of various model-based approaches are concisely presented. Finally, we close with a discussion of where model-based testing fits in the present and future of software engineering.", "title": "" }, { "docid": "dc2e98a7fbaf8b3dedd6eaf34730a9d3", "text": "Cultural issues impact on health care, including individuals’ health care behaviours and beliefs. Hasidic Jews, with their strict religious observance, emphasis on kabbalah, cultural insularity and spiritual leader, their Rebbe, comprise a distinct cultural group. The reviewed studies reveal that Hasidic Jews may seek spiritual healing and incorporate religion in their explanatory models of illness; illness attracts stigma; psychiatric patients’ symptomatology may have religious content; social and cultural factors may challenge health care delivery. The extant research has implications for clinical practice. However, many studies exhibited methodological shortcomings with authors providing incomplete analyses of the extent to which findings are authentically Hasidic. High-quality research is required to better inform the provision of culturally competent care to Hasidic patients.", "title": "" }, { "docid": "17b66811d671fbe77a935a9028c954ce", "text": "Research in management information systems often examines computer literacy as an independent variable. Study subjects may be asked to self-report their computer literacy and that literacy is then utilized as a research variable. However, it is not known whether self-reported computer literacy is a valid measure of a subject’s actual computer literacy. The research presented in this paper examined the question of whether self-reported computer literacy can be a reliable indication of actual computer literacy and therefore valid for use in empirical research. Study participants were surveyed and asked to self-report their level of computer literacy. Following, subjects were tested to determine an objective measure of computer literacy. The data analysis determined that self-reported computer literacy is not reliable. 
Results of this research are important for academic programs, for businesses, and for future empirical studies in management information systems.", "title": "" }, { "docid": "ac29c2091012ccfac993cc706eadbf3c", "text": "In this study 40 genotypes in a randomized complete block design with three replications for two years were planted in the region of Ardabil. The yield related data and its components over the years of the analysis of variance were combined.Results showed that there was a significant difference between genotypes and genotype interaction in the environment. MLR and ANN methods were used to predict yield in barley. The fitted model in a yield predicting linear regression method was as follows: Reg = 1.75 + 0.883 X1 + 0.05017X2 +1.984X3 Also, yield prediction based on multi-layer neural network (ANN) using the Matlab Perceptron type software with one hidden layer including 15 neurons and using algorithm after error propagation learning method and hyperbolic tangent function was implemented, in both methods absolute values of relative error as a deviation index in order to estimate and using duad t test of mean deviation index of the two estimates was examined. Results showed that in the ANN technique the mean deviation index of estimation significantly was one-third (1 / 3) of its rate in the MLR, because there was a significant interaction between genotype and environment and its impact on estimation by MLR method.Therefore, when the genotype environment interaction is significant, in the yield prediction in instead of the regression is recommended of a neural network approach due to high yield and more velocity in the estimation to be used.", "title": "" }, { "docid": "3a6a97b2705d90b031ab1e065281465b", "text": "Common (Cinnamomum verum, C. zeylanicum) and cassia (C. aromaticum) cinnamon have a long history of use as spices and flavouring agents. A number of pharmacological and clinical effects have been observed with their use. The objective of this study was to systematically review the scientific literature for preclinical and clinical evidence of safety, efficacy, and pharmacological activity of common and cassia cinnamon. Using the principles of evidence-based practice, we searched 9 electronic databases and compiled data according to the grade of evidence found. One pharmacological study on antioxidant activity and 7 clinical studies on various medical conditions were reported in the scientific literature including type 2 diabetes (3), Helicobacter pylori infection (1), activation of olfactory cortex of the brain (1), oral candidiasis in HIV (1), and chronic salmonellosis (1). Two of 3 randomized clinical trials on type 2 diabetes provided strong scientific evidence that cassia cinnamon demonstrates a therapeutic effect in reducing fasting blood glucose by 10.3%–29%; the third clinical trial did not observe this effect. Cassia cinnamon, however, did not have an effect at lowering glycosylated hemoglobin (HbA1c). One randomized clinical trial reported that cassia cinnamon lowered total cholesterol, low-density lipoprotein cholesterol, and triglycerides; the other 2 trials, however, did not observe this effect. There was good scientific evidence that a species of cinnamon was not effective at eradicating H. pylori infection. 
Common cinnamon showed weak to very weak evidence of efficacy in treating oral candidiasis in HIV patients and chronic", "title": "" }, { "docid": "e971fd6eac427df9a68f10cad490b2db", "text": "We present a corpus of 5,000 richly annotated abstracts of medical articles describing clinical randomized controlled trials. Annotations include demarcations of text spans that describe the Patient population enrolled, the Interventions studied and to what they were Compared, and the Outcomes measured (the 'PICO' elements). These spans are further annotated at a more granular level, e.g., individual interventions within them are marked and mapped onto a structured medical vocabulary. We acquired annotations from a diverse set of workers with varying levels of expertise and cost. We describe our data collection process and the corpus itself in detail. We then outline a set of challenging NLP tasks that would aid searching of the medical literature and the practice of evidence-based medicine.", "title": "" }, { "docid": "a55224bcd659f67314e7ef31e0fd0756", "text": "Dopamine neurons located in the midbrain play a role in motivation that regulates approach behavior (approach motivation). In addition, activation and inactivation of dopamine neurons regulate mood and induce reward and aversion, respectively. Accumulating evidence suggests that such motivational role of dopamine neurons is not limited to those located in the ventral tegmental area, but also in the substantia nigra. The present paper reviews previous rodent work concerning dopamine's role in approach motivation and the connectivity of dopamine neurons, and proposes two working models: One concerns the relationship between extracellular dopamine concentration and approach motivation. High, moderate and low concentrations of extracellular dopamine induce euphoric, seeking and aversive states, respectively. The other concerns circuit loops involving the cerebral cortex, basal ganglia, thalamus, epithalamus, and midbrain through which dopaminergic activity alters approach motivation. These models should help to generate hypothesis-driven research and provide insights for understanding altered states associated with drugs of abuse and affective disorders.", "title": "" }, { "docid": "af836023436eaa65ef55f9928312e73f", "text": "We present a probabilistic approach to learning a Gaussian Process classifier in the presence of unlabeled data. Our approach involves a “null category noise model” (NCNM) inspired by ordered categorical noise models. The noise model reflects an assumption that the data density is lower between the class-conditional densities. We illustrate our approach on a toy problem and present comparative results for the semi-supervised classification of handwritten digits.", "title": "" }, { "docid": "43f2dcf2f2260ff140e20380d265105b", "text": "As ontologies are the backbone of the Semantic Web, they attract much attention from researchers and engineers in many domains. This results in an increasing number of ontologies and semantic web applications. The number and complexity of such ontologies makes it hard for developers of ontologies and tools to decide which ontologies to use and reuse. To simplify the problem, a modularization algorithm can be used to partition ontologies into sets of modules. In order to evaluate the quality of modularization, we propose a new evaluation metric that quantifies the goodness of ontology modularization. 
In particular, we investigate the ontology module homogeneity, which assesses module cohesion, and the ontology module heterogeneity, which appraises module coupling. The experimental results demonstrate that the proposed metric is effective.", "title": "" }, { "docid": "d74131a431ca54f45a494091e576740c", "text": "In today’s highly competitive business environments with shortened product and technology life cycle, it is critical for software industry to continuously innovate. This goal can be achieved by developing a better understanding and control of the activities and determinants of innovation. Innovation measurement initiatives assess innovation capability, output and performance to help develop such an understanding. This study explores various aspects relevant to innovation measurement ranging from definitions, measurement frameworks and metrics that have been proposed in literature and used in practice. A systematic literature review followed by an online questionnaire and interviews with practitioners and academics were employed to identify a comprehensive definition of innovation that can be used in software industry. The metrics for the evaluation of determinants, inputs, outputs and performance were also aggregated and categorised. Based on these findings, a conceptual model of the key measurable elements of innovation was constructed from the findings of the systematic review. The model was further refined after feedback from academia and industry through interviews.", "title": "" }, { "docid": "8a32bdadcaa2c94f83e95c19e400835b", "text": "Create a short summary of your paper (200 words), double-spaced. Your summary will say something like: In this action research study of my classroom of 7 grade mathematics, I investigated ______. I discovered that ____________. As a result of this research, I plan to ___________. You now begin your paper. Pages should be numbered, with the first page of text following the abstract as page one. (In Microsoft Word: after your abstract, rather than inserting a “page break” insert a “section break” to start on the next page; this will allow you to start the 3 page being numbered as page 1). You should divide this report of your research into sections. We should be able to identity the following sections and you may use these headings (headings should be bold, centered, and capitalized). Consider the page length to be a minimum.", "title": "" }, { "docid": "c0a51f27931d8314b73a7de969bdfb08", "text": "Organizations need practical security benchmarking tools in order to plan effective security strategies. This paper explores a number of techniques that can be used to measure security within an organization. It proposes a benchmarking methodology that produces results that are of strategic importance to both decision makers and technology implementers.", "title": "" }, { "docid": "27c2c015c6daaac99b34d00845ec646c", "text": "Virtual worlds, such as Second Life and Everquest, have grown into virtual game communities that have economic potential. In such communities, virtual items are bought and sold between individuals for real money. The study detailed in this paper aims to identify, model and test the individual determinants for the decision to purchase virtual items within virtual game communities. 
A comprehensive understanding of these key determinants will enable researchers to further the understanding of player behavior towards virtual item transactions, which are an important aspect of the economic system within virtual games and often raise one of the biggest challenges for game community operators. A model will be developed via a mixture of new constructs and established theories, including the theory of planned behavior (TPB), the technology acceptance model (TAM), trust theory and unified theory of acceptance and use of technology (UTAUT). For this purpose the research uses a sequential, multi-method approach in two phases: combining the use of inductive, qualitative data from focus groups and expert interviews in phase one; and deductive, quantitative survey data in phase two. The final model will hopefully provide an impetus to further research in the area of virtual game community transaction behavior. The paper rounds off with a discussion of further research challenges in this area over the next seven years.", "title": "" } ]
scidocsrr
b73d93873caf89e0be871c66a216b066
38 GHz and 60 GHz angle-dependent propagation for cellular & peer-to-peer wireless communications
[ { "docid": "c67010d61ec7f9ea839bbf7d2dce72a1", "text": "Almost all cellular mobile communications including first generation analog systems, second generation digital systems, third generation WCDMA, and fourth generation OFDMA systems use Ultra High Frequency (UHF) band of radio spectrum with frequencies in the range of 300MHz-3GHz. This band of spectrum is becoming increasingly crowded due to spectacular growth in mobile data and other related services. More recently, there have been proposals to explore mmWave spectrum (3-300GHz) for commercial mobile applications due to its unique advantages such as spectrum availability and small component sizes. In this paper, we discuss system design aspects such as antenna array design, base station and mobile station requirements. We also provide system performance and SINR geometry results to demonstrate the feasibility of an outdoor mmWave mobile broadband communication system. We note that with adaptive antenna array beamforming, multi-Gbps data rates can be supported for mobile cellular deployments.", "title": "" } ]
[ { "docid": "36460eda2098bdcf3810828f54ee7d2b", "text": "[This corrects the article on p. 662 in vol. 60, PMID: 27729694.].", "title": "" }, { "docid": "7ad0a3e21de90ae5578626c12b42e666", "text": "Social media are a primary means for travelers to connect with each other and plan trips. They can also help tourism suppliers (e.g., by providing relevant information), thus overcoming the shortcomings of traditional information sources. User-generated content from social media has already been used in many studies as a primary information source. However, the quality of information derived thus far remains largely unclear. This study assesses the quality of macro-level information on the spatio-temporal distribution of tourism derived from online travel reviews in social media in terms of completeness, timeliness, and accuracy. We found that information quality increased from 2000 to 2009 as online travel reviews increasingly covered more countries, became available earlier than statistics reported by the United Nations World Tourism Organization (UNWTO), were highly correlated with the UNWTO statistics. We conclude that social media are a good information source for macro-level spatio-temporal tourism information and could be used, for example, to estimate tourism figures.", "title": "" }, { "docid": "3e3fd0a457f9469e490de9ea40c04c61", "text": "Thousands of historically revealing cuneiform clay tablets, which were inscribed in Mesopotamia millenia ago, still exist today. Visualizing cuneiform writing is important when deciphering what is written on the tablets. It is also important when reproducing the tablets in papers and books. Unfortunately, scholars have found photographs to be an inadequate visualization tool, for two reasons. First, the text wraps around the sides of some tablets, so a single viewpoint is insufficient. Second, a raking light will illuminate some textual features, but will leave others shadowed or invisible because they are either obscured by features on the tablet or are nearly aligned with the lighting direction. We present solutions to these problems by first creating a high-resolution 3D computer model from laser range data, then unwrapping and flattening the inscriptions on the model to a plane, allowing us to represent them as a scalar displacement map, and finally, rendering this map non-photorealistically using accessibility and curvature coloring. The output of this semiautomatic process enables all of a tablet’s text to be perceived in a single concise image. Our technique can also be applied to other types of inscribed surfaces, including bas-reliefs.", "title": "" }, { "docid": "c4be39977487cdebc8127650c8eda433", "text": "Unfavorable wake and separated flow from the hull might cause a dramatic decay of the propeller performance in single-screw propelled vessels such as tankers, bulk carriers and containers. For these types of vessels, special attention has to be paid to the design of the stern region, the occurrence of a good flow towards the propeller and rudder being necessary to avoid separation and unsteady loads on the propeller blades and, thus, to minimize fuel consumption and the risk for cavitation erosion and vibrations. The present work deals with the analysis of the propeller inflow in a single-screw chemical tanker vessel affected by massive flow separation in the stern region. Detailed flow measurements by Laser Doppler Velocimetry (LDV) were performed in the propeller region at model scale, in the Large Circulating Water Channel of CNR-INSEAN. 
Tests were undertaken with and without propeller in order to investigate its effect on the inflow characteristics and the separation mechanisms. In this regard, the study concerned also a phase locked analysis of the propeller perturbation at different distances upstream of the propulsor. The study shows the effectiveness of the 3rd-order statistical moment (i.e. skewness) for describing the topology of the wake and accurately identifying the portion affected by the detached flow.", "title": "" }, { "docid": "9d273b1118940525d564edec073a9dfa", "text": "A set of 1.4 million biomedical papers was analyzed with regards to how often articles are mentioned on Twitter or saved by users on Mendeley. While Twitter is a microblogging platform used by a general audience to distribute information, Mendeley is a reference manager targeted at an academic user group to organize scholarly literature. Both platforms are used as sources for so-called “altmetrics” to measure a new kind of research impact. This analysis shows in how far they differ and compare to traditional citation impact metrics based on a large set of PubMed papers.", "title": "" }, { "docid": "bb2ad600e0e90a1a349e39ce0f097277", "text": "Tongue drive system (TDS) is a tongue-operated, minimally invasive, unobtrusive, and wireless assistive technology (AT) that infers users' intentions by detecting their voluntary tongue motion and translating them into user-defined commands. Here we present the new intraoral version of the TDS (iTDS), which has been implemented in the form of a dental retainer. The iTDS system-on-a-chip (SoC) features a configurable analog front-end (AFE) that reads the magnetic field variations inside the mouth from four 3-axial magnetoresistive sensors located at four corners of the iTDS printed circuit board (PCB). A dual-band transmitter (Tx) on the same chip operates at 27 and 432 MHz in the Industrial/Scientific/Medical (ISM) band to allow users to switch in the presence of external interference. The Tx streams the digitized samples to a custom-designed TDS universal interface, built from commercial off-the-shelf (COTS) components, which delivers the iTDS data to other devices such as smartphones, personal computers (PC), and powered wheelchairs (PWC). Another key block on the iTDS SoC is the power management integrated circuit (PMIC), which provides individually regulated and duty-cycled 1.8 V supplies for sensors, AFE, Tx, and digital control blocks. The PMIC also charges a 50 mAh Li-ion battery with constant current up to 4.2 V, and recovers data and clock to update its configuration register through a 13.56 MHz inductive link. The iTDS SoC has been implemented in a 0.5-μm standard CMOS process and consumes 3.7 mW on average.", "title": "" }, { "docid": "672c11254309961fe02bc48827f8949e", "text": "HIV-1 integration into the host genome favors actively transcribed genes. Prior work indicated that the nuclear periphery provides the architectural basis for integration site selection, with viral capsid-binding host cofactor CPSF6 and viral integrase-binding cofactor LEDGF/p75 contributing to selection of individual sites. Here, by investigating the early phase of infection, we determine that HIV-1 traffics throughout the nucleus for integration. CPSF6-capsid interactions allow the virus to bypass peripheral heterochromatin and penetrate the nuclear structure for integration. 
Loss of interaction with CPSF6 dramatically alters virus localization toward the nuclear periphery and integration into transcriptionally repressed lamina-associated heterochromatin, while loss of LEDGF/p75 does not significantly affect intranuclear HIV-1 localization. Thus, CPSF6 serves as a master regulator of HIV-1 intranuclear localization by trafficking viral preintegration complexes away from heterochromatin at the periphery toward gene-dense chromosomal regions within the nuclear interior.", "title": "" }, { "docid": "31338a16eca7c0f60b789c38f2774816", "text": "As a promising area in artificial intelligence, a new learning paradigm, called Small Sample Learning (SSL), has been attracting prominent research attention in the recent years. In this paper, we aim to present a survey to comprehensively introduce the current techniques proposed on this topic. Specifically, current SSL techniques can be mainly divided into two categories. The first category of SSL approaches can be called “concept learning”, which emphasizes learning new concepts from only few related observations. The purpose is mainly to simulate human learning behaviors like recognition, generation, imagination, synthesis and analysis. The second category is called “experience learning”, which usually co-exists with the large sample learning manner of conventional machine learning. This category mainly focuses on learning with insufficient samples, and can also be called small data learning in some literatures. More extensive surveys on both categories of SSL techniques are introduced and some neuroscience evidences are provided to clarify the rationality of the entire SSL regime, and the relationship with human learning process. Some discussions on the main challenges and possible future research directions along this line are also presented.", "title": "" }, { "docid": "22b86cdb894eb6a4118d574822b8f952", "text": "This paper addresses view-invariant object detection and pose estimation from a single image. While recent work focuses on object-centered representations of point-based object features, we revisit the viewer-centered framework, and use image contours as basic features. Given training examples of arbitrary views of an object, we learn a sparse object model in terms of a few view-dependent shape templates. The shape templates are jointly used for detecting object occurrences and estimating their 3D poses in a new image. Instrumental to this is our new mid-level feature, called bag of boundaries (BOB), aimed at lifting from individual edges toward their more informative summaries for identifying object boundaries amidst the background clutter. In inference, BOBs are placed on deformable grids both in the image and the shape templates, and then matched. This is formulated as a convex optimization problem that accommodates invariance to non-rigid, locally affine shape deformations. Evaluation on benchmark datasets demonstrates our competitive results relative to the state of the art.", "title": "" }, { "docid": "2e9b98fbb1fa15020b374dbd48fb5adc", "text": "Recently, bipolar fuzzy sets have been studied and applied a bit enthusiastically and a bit increasingly. In this paper we prove that bipolar fuzzy sets and [0,1](2)-sets (which have been deeply studied) are actually cryptomorphic mathematical notions. 
Since researches or modelings on real world problems often involve multi-agent, multi-attribute, multi-object, multi-index, multi-polar information, uncertainty, or/and limit process, we put forward (or highlight) the notion of m-polar fuzzy set (actually, [0,1] (m)-set which can be seen as a generalization of bipolar fuzzy set, where m is an arbitrary ordinal number) and illustrate how many concepts have been defined based on bipolar fuzzy sets and many results which are related to these concepts can be generalized to the case of m-polar fuzzy sets. We also give examples to show how to apply m-polar fuzzy sets in real world problems.", "title": "" }, { "docid": "ad6dc9f74e0fa3c544c4123f50812e14", "text": "An ultra-wideband transition from microstrip to stripline in PCB technology is presented applying only through via holes for simple fabrication. The design is optimized using full-wave EM simulations. A prototype is manufactured and measured achieving a return loss better than 8.7dB and an insertion loss better than 1.2 dB in the FCC frequency range. A meander-shaped delay line in stripline technique is presented as an example of application.", "title": "" }, { "docid": "c5113ff741d9e656689786db10484a07", "text": "Pulmonary administration of drugs presents several advantages in the treatment of many diseases. Considering local and systemic delivery, drug inhalation enables a rapid and predictable onset of action and induces fewer side effects than other routes of administration. Three main inhalation systems have been developed for the aerosolization of drugs; namely, nebulizers, pressurized metered-dose inhalers (MDIs) and dry powder inhalers (DPIs). The latter are currently the most convenient alternative as they are breath-actuated and do not require the use of any propellants. The deposition site in the respiratory tract and the efficiency of inhaled aerosols are critically influenced by the aerodynamic diameter, size distribution, shape and density of particles. In the case of DPIs, since micronized particles are generally very cohesive and exhibit poor flow properties, drug particles are usually blended with coarse and fine carrier particles. This increases particle aerodynamic behavior and flow properties of the drugs and ensures accurate dosage of active ingredients. At present, particles with controlled properties are obtained by milling, spray drying or supercritical fluid techniques. Several excipients such as sugars, lipids, amino acids, surfactants, polymers and absorption enhancers have been tested for their efficacy in improving drug pulmonary administration. The purpose of this article is to describe various observations that have been made in the field of inhalation product development, especially for the dry powder inhalation formulation, and to review the use of various additives, their effectiveness and their potential toxicity for pulmonary administration.", "title": "" }, { "docid": "fc32d0734ea83a4252339c6a2f98b0ee", "text": "The security of Android depends on the timely delivery of updates to fix critical vulnerabilities. In this paper we map the complex network of players in the Android ecosystem who must collaborate to provide updates, and determine that inaction by some manufacturers and network operators means many handsets are vulnerable to critical vulnerabilities. We define the FUM security metric to rank the performance of device manufacturers and network operators, based on their provision of updates and exposure to critical vulnerabilities. 
Using a corpus of 20 400 devices we show that there is significant variability in the timely delivery of security updates across different device manufacturers and network operators. This provides a comparison point for purchasers and regulators to determine which device manufacturers and network operators provide security updates and which do not. We find that on average 87.7% of Android devices are exposed to at least one of 11 known critical vulnerabilities and, across the ecosystem as a whole, assign a FUM security score of 2.87 out of 10. In our data, Nexus devices do considerably better than average with a score of 5.17; and LG is the best manufacturer with a score of 3.97.", "title": "" }, { "docid": "c5dd31facf6d1f7709d58e7b0ddc0bab", "text": "Website fingerprinting attacks allow a local, passive eavesdropper to identify a web browsing client’s destination web page by extracting noticeable and unique features from her traffic. Such attacks magnify the gap between privacy and security — a client who encrypts her communication traffic may still have her browsing behaviour exposed to lowcost eavesdropping. Previous authors have shown that privacysensitive clients who use anonymity technologies such as Tor are susceptible to website fingerprinting attacks, and some attacks have been shown to outperform others in specific experimental conditions. However, as these attacks differ in data collection, feature extraction and experimental setup, they cannot be compared directly. On the other side of the coin, proposed website fingerprinting defenses (countermeasures) are generally designed and tested only against specific attacks. Some defenses have been shown to fail against more advanced attacks, and it is unclear which defenses would be effective against all attacks. In this paper, we propose a feature-based comparative methodology that allows us to systematize attacks and defenses in order to compare them. We analyze attacks for their sensitivity to different packet sequence features, and analyze the effect of proposed defenses on these features by measuring whether or not the features are hidden. If a defense fails to hide a feature that an attack is sensitive to, then the defense will not work against this attack. Using this methodology, we propose a new network layer defense that can more effectively hide all of the features we consider.", "title": "" }, { "docid": "b959bce5ea9db71d677586eb1b6f023e", "text": "We consider autonomous racing of two cars and present an approach to formulate the decision making as a non-cooperative non-zero-sum game. The game is formulated by restricting both players to fulfill static track constraints as well as collision constraints which depend on the combined actions of the two players. At the same time the players try to maximize their own progress. In the case where the action space of the players is finite, the racing game can be reformulated as a bimatrix game. For this bimatrix game, we show that the actions obtained by a sequential maximization approach where only the follower considers the action of the leader are identical to a Stackelberg and a Nash equilibrium in pure strategies. Furthermore, we propose a game promoting blocking, by additionally rewarding the leading car for staying ahead at the end of the horizon. We show that this changes the Stackelberg equilibrium, but has a minor influence on the Nash equilibria. 
For an online implementation, we propose to play the games in a moving horizon fashion, and we present two methods for guaranteeing feasibility of the resulting coupled repeated games. Finally, we study the performance of the proposed approaches in simulation for a set-up that replicates the miniature race car tested at the Automatic Control Laboratory of ETH Zürich. The simulation study shows that the presented games can successfully model different racing behaviors and generate interesting racing situations.", "title": "" }, { "docid": "fd8a677dffe737d61ebd0e30b91595e9", "text": "Despite outstanding success in vision amongst other domains, many of the recent deep learning approaches have evident drawbacks for robots. This manuscript surveys recent work in the literature that pertain to applying deep learning systems to the robotics domain, either as means of estimation or as a tool to resolve motor commands directly from raw percepts. These recent advances are only a piece to the puzzle. We suggest that deep learning as a tool alone is insufficient in building a unified framework to acquire general intelligence. For this reason, we complement our survey with insights from cognitive development and refer to ideas from classical control theory, producing an integrated direction for a lifelong learning architecture.", "title": "" }, { "docid": "73b239e6449d82c0d9b1aaef0e9e1d23", "text": "While navigating in an environment, a vision system has to be able to recognize where it is and what the main objects in the scene are. In this paper we present a contextbased vision system for place and object recognition. The goal is to identify familiar locations (e.g., office 610, conference room 941, Main Street), to categorize new environments (office, corridor, street) and to use that information to provide contextual priors for object recognition (e.g., tables are more likely in an office than a street). We present a low-dimensional global image representation that provides relevant information for place recognition and categorization, and show how such contextual information introduces strong priors that simplify object recognition. We have trained the system to recognize over 60 locations (indoors and outdoors) and to suggest the presence and locations of more than 20 different object types. The algorithm has been integrated into a mobile system that provides realtime feedback to the user.", "title": "" }, { "docid": "2878ed8d0da40bd3363f7b8eabb79faf", "text": "In this chapter, we present the current knowledge on de novo assembly, growth, and dynamics of striated myofibrils, the functional architectural elements developed in skeletal and cardiac muscle. The data were obtained in studies of myofibrils formed in cultures of mouse skeletal and quail myotubes, in the somites of living zebrafish embryos, and in mouse neonatal and quail embryonic cardiac cells. The comparative view obtained revealed that the assembly of striated myofibrils is a three-step process progressing from premyofibrils to nascent myofibrils to mature myofibrils. This process is specified by the addition of new structural proteins, the arrangement of myofibrillar components like actin and myosin filaments with their companions into so-called sarcomeres, and in their precise alignment. Accompanying the formation of mature myofibrils is a decrease in the dynamic behavior of the assembling proteins. 
Proteins are most dynamic in the premyofibrils during the early phase and least dynamic in mature myofibrils in the final stage of myofibrillogenesis. This is probably due to increased interactions between proteins during the maturation process. The dynamic properties of myofibrillar proteins provide a mechanism for the exchange of older proteins or a change in isoforms to take place without disassembling the structural integrity needed for myofibril function. An important aspect of myofibril assembly is the role of actin-nucleating proteins in the formation, maintenance, and sarcomeric arrangement of the myofibrillar actin filaments. This is a very active field of research. We also report on several actin mutations that result in human muscle diseases.", "title": "" }, { "docid": "f09f1d074b1d9c72628b8eb90bce4904", "text": "Compressive Sensing, as an emerging technique in signal processing is reviewed in this paper together with its’ common applications. As an alternative to the traditional signal sampling, Compressive Sensing allows a new acquisition strategy with significantly reduced number of samples needed for accurate signal reconstruction. The basic ideas and motivation behind this approach are provided in the theoretical part of the paper. The commonly used algorithms for missing data reconstruction are presented. The Compressive Sensing applications have gained significant attention leading to an intensive growth of signal processing possibilities. Hence, some of the existing practical applications assuming different types of signals in real-world scenarios are described and analyzed as well.", "title": "" }, { "docid": "b902e6a423f6703be8ef06f77a246990", "text": "The predictive value of a comprehensive model with personality characteristics, stressor related cognitions, coping and social support was tested in a sample of 187 nonpregnant women. The emotional response to the unsuccessful treatment was predicted out of vulnerability factors assessed before the start of the treatment. The results indicated the importance of neuroticism as a vulnerability factor in emotional response to a severe stressor. They also underlined the importance of helplessness and marital dissatisfaction as additional risk factors, and acceptance and perceived social support as additional protective factors, in the development of anxiety and depression after a failed fertility treatment. From clinical point of view, these results suggest fertility-related cognitions and social support should receive attention when counselling women undergoing IVF or ICSI treatment.", "title": "" } ]
scidocsrr
018fa56d63f6b3cc429b38b9385a4aa9
A Survey on Facial Expression Recognition Techniques
[ { \"docid\": \"ee58216dd7e3a0d8df8066703b763187\", \"text\": \"Extraction of discriminative features from salient facial patches plays a vital role in effective facial expression recognition. The accurate detection of facial landmarks improves the localization of the salient patches on face images. This paper proposes a novel framework for expression recognition by using appearance features of selected facial patches. A few prominent facial patches, depending on the position of facial landmarks, are extracted which are active during emotion elicitation. These active patches are further processed to obtain the salient patches which contain discriminative features for classification of each pair of expressions, thereby selecting different facial patches as salient for different pairs of expression classes. A one-against-one classification method is adopted using these features. In addition, an automated learning-free facial landmark detection technique has been proposed, which achieves similar performance to that of other state-of-the-art landmark detection methods, yet requires significantly less execution time. The proposed method is found to perform well consistently in different resolutions, hence providing a solution for expression recognition in low resolution images. Experiments on CK+ and JAFFE facial expression databases show the effectiveness of the proposed system.\", \"title\": \"\" } ]
[ { "docid": "48a8790474498af81f662f8195925570", "text": "Synthetic biology is a rapidly expanding discipline at the interface between engineering and biology. Much research in this area has focused on gene regulatory networks that function as biological switches and oscillators. Here we review the state of the art in the design and construction of oscillators, comparing the features of each of the main networks published to date, the models used for in silico design and validation and, where available, relevant experimental data. Trends are apparent in the ways that network topology constrains oscillator characteristics and dynamics. Also, noise and time delay within the network can both have constructive and destructive roles in generating oscillations, and stochastic coherence is commonplace. This review can be used to inform future work to design and implement new types of synthetic oscillators or to incorporate existing oscillators into new designs.", "title": "" }, { "docid": "93a8b45a6bd52f1838b1052d1fca22fc", "text": "LSHTC is a series of challenges which aims to assess the performance of classification systems in large-scale classification in a a large number of classes (up to hundreds of thousands). This paper describes the dataset that have been released along the LSHTC series. The paper details the construction of the datsets and the design of the tracks as well as the evaluation measures that we implemented and a quick overview of the results. All of these datasets are available online and runs may still be submitted on the online server of the challenges.", "title": "" }, { "docid": "498c217fb910a5b4ca6bcdc83f98c11b", "text": "Theodor Wilhelm Engelmann (1843–1909), who had a creative life in music, muscle physiology, and microbiology, developed a sensitive method for tracing the photosynthetic oxygen production of unicellular plants by means of bacterial aerotaxis (chemotaxis). He discovered the absorption spectrum of bacteriopurpurin (bacteriochlorophyll a) and the scotophobic response, photokinesis, and photosynthesis of purple bacteria.", "title": "" }, { "docid": "1991322dce13ee81885f12322c0e0f79", "text": "The quality of the interpretation of the sentiment in the online buzz in the social media and the online news can determine the predictability of financial markets and cause huge gains or losses. That is why a number of researchers have turned their full attention to the different aspects of this problem lately. However, there is no well-rounded theoretical and technical framework for approaching the problem to the best of our knowledge. We believe the existing lack of such clarity on the topic is due to its interdisciplinary nature that involves at its core both behavioral-economic topics as well as artificial intelligence. We dive deeper into the interdisciplinary nature and contribute to the formation of a clear frame of discussion. We review the related works that are about market prediction based on onlinetext-mining and produce a picture of the generic components that they all have. We, furthermore, compare each system with the rest and identify their main differentiating factors. Our comparative analysis of the systems expands onto the theoretical and technical foundations behind each. This work should help the research community to structure this emerging field and identify the exact aspects which require further research and are of special significance. 2014 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "089343ba0d94a96d6a583f1becfd7b46", "text": "In this paper we study fundamental properties of minimum inter-event times in event-triggered control systems, both in the absence and presence of external disturbances. This analysis reveals, amongst others, that for several popular event-triggering mechanisms no positive minimum inter-event time can be guaranteed in the presence of arbitrary small external disturbances. This clearly shows that it is essential to include the effects of external disturbances in the analysis of the computation/communication properties of event-triggered control systems. In fact, this paper also identifies event-triggering mechanisms that do exhibit these important event-separation properties.", "title": "" }, { "docid": "27101c9dcb89149b68d3ad47b516db69", "text": "A brain-computer interface (BCI) is a hardware and software communications system that permits cerebral activity alone to control computers or external devices. The immediate goal of BCI research is to provide communications capabilities to severely disabled people who are totally paralyzed or 'locked in' by neurological neuromuscular disorders, such as amyotrophic lateral sclerosis, brain stem stroke, or spinal cord injury. Here, we review the state-of-the-art of BCIs, looking at the different steps that form a standard BCI: signal acquisition, preprocessing or signal enhancement, feature extraction, classification and the control interface. We discuss their advantages, drawbacks, and latest advances, and we survey the numerous technologies reported in the scientific literature to design each step of a BCI. First, the review examines the neuroimaging modalities used in the signal acquisition step, each of which monitors a different functional brain activity such as electrical, magnetic or metabolic activity. Second, the review discusses different electrophysiological control signals that determine user intentions, which can be detected in brain activity. Third, the review includes some techniques used in the signal enhancement step to deal with the artifacts in the control signals and improve the performance. Fourth, the review studies some mathematic algorithms used in the feature extraction and classification steps which translate the information in the control signals into commands that operate a computer or other device. Finally, the review provides an overview of various BCI applications that control a range of devices.", "title": "" }, { "docid": "63f2acd6dd82e0aa5b414c2658da44d5", "text": "La importancia creciente de la Administración de la Producción/Operaciones está relacionada con la superación del enfoque racionalizador y centralizador de la misión de esta área de las organizaciones. El análisis, el diagnóstico y la visión estratégica de la Dirección de Operaciones permiten a la empresa acomodarse a los cambios que exige la economía moderna. Una efectiva gestión, con un flujo constante de la información, una organización del trabajo adecuada y una estructura que fomente la participación, son instrumentos imprescindibles para que las Operaciones haga su trabajo.", "title": "" }, { "docid": "f9119710fb15af38bc823e25eec5653b", "text": "The emergence of knowledge-based economies has placed an importance on effective management of knowledge. The effective management of knowledge has been described as a critical ingredient for organisation seeking to ensure sustainable strategic competitive advantage. 
This paper reviews literature in the area of knowledge management to bring out the importance of knowledge management in organisation. The paper is able to demonstrate that knowledge management is a key driver of organisational performance and a critical tool for organisational survival, competitiveness and profitability. Therefore creating, managing, sharing and utilizing knowledge effectively is vital for organisations to take full advantage of the value of knowledge. The paper also contributes that, in order for organisations to manage knowledge effectively, attention must be paid on three key components people, processes and technology. In essence, to ensure organisation’s success, the focus should be to connect people, processes, and technology for the purpose of leveraging knowledge.\", \"title\": \"\" }, { \"docid\": \"e9cc899155bd5f88ae1a3d5b88de52af\", \"text\": \"This article reviews research evidence showing to what extent the chronic care model can improve the management of chronic conditions (using diabetes as an example) and reduce health care costs. Thirty-two of 39 studies found that interventions based on chronic care model components improved at least 1 process or outcome measure for diabetic patients. Regarding whether chronic care model interventions can reduce costs, 18 of 27 studies concerned with 3 examples of chronic conditions (congestive heart failure, asthma, and diabetes) demonstrated reduced health care costs or lower use of health care services. Even though the chronic care model has the potential to improve care and reduce costs, several obstacles hinder its widespread adoption.\", \"title\": \"\" }, { \"docid\": \"2089349f4f1dae4d07dfec8481ba748e\", \"text\": \"A significant limitation of neural networks is that the representations they learn are usually incomprehensible to humans. We present a novel algorithm, Trepan, for extracting comprehensible, symbolic representations from trained neural networks. Our algorithm uses queries to induce a decision tree that approximates the concept represented by a given network. Our experiments demonstrate that Trepan is able to produce decision trees that maintain a high level of fidelity to their respective networks while being comprehensible and accurate. Unlike previous work in this area, our algorithm is general in its applicability and scales well to large networks and problems with high-dimensional input spaces.\", \"title\": \"\" }, { \"docid\": \"39bf7e3a8e75353a3025e2c0f18768f9\", \"text\": \"Ligament reconstruction is the current standard of care for active patients with an anterior cruciate ligament (ACL) rupture. Although the majority of ACL reconstruction (ACLR) surgeries successfully restore the mechanical stability of the injured knee, postsurgical outcomes remain widely varied. Less than half of athletes who undergo ACLR return to sport within the first year after surgery, and it is estimated that approximately 1 in 4 to 1 in 5 young, active athletes who undergo ACLR will go on to a second knee injury. The outcomes after a second knee injury and surgery are significantly less favorable than outcomes after primary injuries. As advances in graft reconstruction and fixation techniques have improved to consistently restore passive joint stability to the preinjury level, successful return to sport after ACLR appears to be predicated on numerous postsurgical factors. Importantly, a secondary ACL injury is most strongly related to modifiable postsurgical risk factors. 
Biomechanical abnormalities and movement asymmetries, which are more prevalent in this cohort than previously hypothesized, can persist despite high levels of functional performance, and also represent biomechanical and neuromuscular control deficits and imbalances that are strongly associated with secondary injury incidence. Decreased neuromuscular control and high-risk movement biomechanics, which appear to be heavily influenced by abnormal trunk and lower extremity movement patterns, not only predict first knee injury risk but also reinjury risk. These seminal findings indicate that abnormal movement biomechanics and neuromuscular control profiles are likely both residual to, and exacerbated by, the initial injury. Evidence-based medicine (EBM) strategies should be used to develop effective, efficacious interventions targeted to these impairments to optimize the safe return to high-risk activity. In this Current Concepts article, the authors present the latest evidence related to risk factors associated with ligament failure or a secondary (contralateral) injury in athletes who return to sport after ACLR. From these data, they propose an EBM paradigm shift in postoperative rehabilitation and return-to-sport training after ACLR that is focused on the resolution of neuromuscular deficits that commonly persist after surgical reconstruction and standard rehabilitation of athletes.", "title": "" }, { "docid": "2ff08c8505e7d68304b63c6942feb837", "text": "This paper presents a Retrospective Event Detection algorithm, called Eventy-Topic Detection (ETD), which automatically generates topics that describe events in a large, temporal text corpus. Our approach leverages the structure of the topic modeling framework, specifically the Latent Dirichlet Allocation (LDA), to generate topics which are then later labeled as Eventy-Topics or non-Eventy-Topics. The system first runs daily LDA topic models, then calculates the cosine similarity between the topics of the daily topic models, and then runs our novel Bump-Detection algorithm. Similar topics labeled as an Eventy-Topic are then grouped together. The algorithm is demonstrated on two Terabyte sized corpuses a Reuters News corpus and a Twitter corpus. Our method is evaluated on a human annotated test set. Our algorithm demonstrates its ability to accurately describe and label events in a temporal text corpus.", "title": "" }, { "docid": "928e7a7abf63b8e1da14976d030f38b8", "text": "A novel Vivaldi antenna structure is proposed to broaden the bandwidth of the conventional ones. The theory of the equivalent circuit is adopted, and it is deduced that the bandwidth of the antenna can be enhanced by the high chip resistor and short pin in the new structure. An antenna of 62 mm (length) times 70 mm (width)times 0.5 mm (thickness) is designed and fabricated. The measure results show that the bandwidth is 1~ 20 GHz ( VSWR les2), while the gain varies between 0.9 and 7.8 dB. It is indicated that the antenna can be reduced to about a half of the conventional ones.", "title": "" }, { "docid": "2c73318b59e5d7101884f2563dd700b5", "text": "BACKGROUND\nEffective control of (upright) body posture requires a proper representation of body orientation. Stroke patients with pusher syndrome were shown to suffer from severely disturbed perception of own body orientation. They experience their body as oriented 'upright' when actually tilted by nearly 20 degrees to the ipsilesional side. 
Thus, it can be expected that postural control mechanisms are impaired accordingly in these patients. Our aim was to investigate pusher patients' spontaneous postural responses of the non-paretic leg and of the head during passive body tilt.\n\n\nMETHODS\nA sideways tilting motion was applied to the trunk of the subject in the roll plane. Stroke patients with pusher syndrome were compared to stroke patients not showing pushing behaviour, patients with acute unilateral vestibular loss, and non brain damaged subjects.\n\n\nRESULTS\nCompared to all groups without pushing behaviour, the non-paretic leg of the pusher patients showed a constant ipsiversive tilt across the whole tilt range for an amount which was observed in the non-pusher subjects when they were tilted for about 15 degrees into the ipsiversive direction.\n\n\nCONCLUSION\nThe observation that patients with acute unilateral vestibular loss showed no alterations of leg posture indicates that disturbed vestibular afferences alone are not responsible for the disordered leg responses seen in pusher patients. Our results may suggest that in pusher patients a representation of body orientation is disturbed that drives both conscious perception of body orientation and spontaneous postural adjustment of the non-paretic leg in the roll plane. The investigation of the pusher patients' leg-to-trunk orientation thus could serve as an additional bedside tool to detect pusher syndrome in acute stroke patients.", "title": "" }, { "docid": "ae534b0d19b95dcee87f06ed279fc716", "text": "In this paper, comparative study of p type and n type solar cells are described using two popular solar cell analyzing software AFORS HET and PC1D. We use SiNx layer as Antireflection Coating and a passivated layer Al2O3 .The variation of reflection, absorption, I-V characteristics, and internal and external quantum efficiency have been done by changing the thickness of passivated layer and ARC layer, and front and back surface recombination velocities. The same analysis is taken by imposing surface charge at front of n-type solar Cell and we get 20.13%-20.15% conversion efficiency.", "title": "" }, { "docid": "5eb9e759ec8fc9ad63024130f753d136", "text": "A 3-10 GHz broadband CMOS T/R switch for ultra-wideband (UWB) transceiver is presented. The broadband CMOS T/R switch is fabricated based on the 0.18 mu 1P6M standard CMOS process. On-chip measurement of the CMOS T/R switch is performed. The insertion loss of the proposed CMOS T/R Switch is about 3.1plusmn1.3dB. The return losses at both input and output terminals are higher than 14 dB. It is also characterized with 25-34dB isolation and 18-20 dBm input P1dB. The broadband CMOS T/R switch shows highly linear phase and group delay of 20plusmn10 ps from 10MHz to 15GHz. It can be easily integrated with other CMOS RFICs to form on-chip transceivers for various UWB applications", "title": "" }, { "docid": "a1d6ec19be444705fd6c339d501bce10", "text": "The transmission properties of a guide consisting of a dielectric rod of rectangular cross-section surrounded by dielectrics of smaller refractive indices are determined. This guide is the basic component in a new technology called integrated optical circuitry. The directional coupler, a particularly useful device, made of two of those guides closely spaced is also analyzed. 
[The SCI indicates that this paper has been cited over 145 times since 1969.]", "title": "" }, { "docid": "dc473939f83bb4752f11b9ebe37ee474", "text": "With the pervasive use of mobile devices with location sensing and positioning functions, such as Wi-Fi and GPS, people now are able to acquire present locations and collect their movement. As the availability of trajectory data prospers, mining activities hidden in raw trajectories becomes a hot research problem. Given a set of trajectories, prior works either explore density-based approaches to extract regions with high density of GPS data points or utilize time thresholds to identify users’ stay points. However, users may have different activities along with trajectories. Prior works only can extract one kind of activity by specifying thresholds, such as spatial density or temporal time threshold. In this paper, we explore both spatial and temporal relationships among data points of trajectories to extract semantic regions that refer to regions in where users are likely to have some kinds of activities. In order to extract semantic regions, we propose a sequential clustering approach to discover clusters as the semantic regions from individual trajectory according to the spatial-temporal density. Based on semantic region discovery, we develop a shared nearest neighbor (SNN) based clustering algorithm to discover the frequent semantic region where the moving object often stay, which consists of a group of similar semantic regions from multiple trajectories. Experimental results demonstrate that our techniques are more accurate than existing clustering schemes.", "title": "" }, { "docid": "f38854d7c788815d8bc6d20db284e238", "text": "This paper presents the development of a Sinhala Speech Recognition System to be deployed in an Interactive Voice Response (IVR) system of a telecommunication service provider. The main objectives are to recognize Sinhala digits and names of Sinhala songs to be set up as ringback tones. Sinhala being a phonetic language, its features are studied to develop a list of 47 phonemes. A continuous speech recognition system is developed based on Hidden Markov Model (HMM). The acoustic model is trained using the voice through mobile phone. The outcome is a speaker independent speech recognition system which is capable of recognizing 10 digits and 50 Sinhala songs. A word error rate (WER) of 11.2% using a speech corpus of 0.862 hours and a sentence error rate (SER) of 5.7% using a speech corpus of 1.388 hours are achieved for digits and songs respectively.", "title": "" } ]
scidocsrr
b69bc3b38e8c8f61db42d9f80d23e885
Study and analysis of various task scheduling algorithms in the cloud computing environment
[ { \"docid\": \"5d56b018a1f980607d74fd5865784e1b\", \"text\": \"In this paper, we present an optimization model for task scheduling for minimizing energy consumption in cloud-computing data centers. The proposed approach was formulated as an integer programming problem to minimize the cloud-computing data center energy consumption by scheduling tasks to a minimum number of servers while satisfying the task response time constraints. We prove that the average task response time and the number of active servers needed to meet such time constraints are bounded through the use of a greedy task-scheduling scheme. In addition, we propose the most-efficient server-first task-scheduling scheme to minimize energy expenditure as a practical scheduling scheme. We model and simulate the proposed scheduling scheme for a data center with heterogeneous tasks. The simulation results show that the proposed task-scheduling scheme reduces server energy consumption on average over 70 times when compared to the energy consumed under a (not-optimized) random-based task-scheduling scheme. We show that energy savings are achieved by minimizing the allocated number of servers.\", \"title\": \"\" }, { \"docid\": \"c039d0b6b049e3beb1fcea7595d86625\", \"text\": \"Cloud computing is known as a provider of dynamic services using very large scalable and virtualized resources over the Internet. Due to the novelty of the cloud computing field, there are not many standard task scheduling algorithms used in the cloud environment. In particular, the cloud has a high communication cost that prevents well-known task schedulers from being applied in large-scale distributed environments. Today, researchers attempt to build job scheduling algorithms that are compatible and applicable in the cloud computing environment. Job scheduling is one of the most important tasks in the cloud computing environment because users have to pay for the resources they use based upon time. Hence, efficient utilization of resources is essential, and scheduling plays a vital role in obtaining maximum benefit from the resources. In this paper we study various scheduling algorithms and the issues related to them in cloud computing.\", \"title\": \"\" } ]
[ { "docid": "af0bfcd39271d2c6b5734c9665f758e6", "text": "The architecture of the subterranean nests of the ant Odontomachus brunneus (Patton) (Hymenoptera: Formicidae) was studied by means of casts with dental plaster or molten metal. The entombed ants were later recovered by dissolution of plaster casts in hot running water. O. brunneus excavates simple nests, each consisting of a single, vertical shaft connecting more or less horizontal, simple chambers. Nests contained between 11 and 177 workers, from 2 to 17 chambers, and 28 to 340 cm(2) of chamber floor space and reached a maximum depth of 18 to 184 cm. All components of nest size increased simultaneously during nest enlargement, number of chambers, mean chamber size, and nest depth, making the nest shape (proportions) relatively size-independent. Regardless of nest size, all nests had approximately 2 cm(2) of chamber floor space per worker. Chambers were closer together near the top and the bottom of the nest than in the middle, and total chamber area was greater near the bottom. Colonies occasionally incorporated cavities made by other animals into their nests.", "title": "" }, { "docid": "f162f44a0a8d6e5251c731cd5259afcf", "text": "This paper proposes a method for controlling a Robotic arm using an application build in the android platform. The android phone and raspberry piboard is connected through Wi-Fi. As the name suggests the robotic arm is designed as it performs the same activity as a human hand works. A signal is generated from the android app which will be received by the raspberry pi board and the robotic arm works according to the predefined program. The android application is the command centre of the robotic arm. The program is written in the python language in the raspberry board. the different data will control the arm rotation.", "title": "" }, { "docid": "8d9f65aadba86c29cb19cd9e6eecec5a", "text": "To achieve privacy requirements, IoT application providers may need to spend a lot of money to replace existing IoT devices. To address this problem, this study proposes the Blockchain Connected Gateways (BC Gateways) to protect users from providing personal data to IoT devices without user consent. In addition, the gateways store user privacy preferences on IoT devices in the blockchain network. Therefore, this study can utilize the blockchain technology to resolve the disputes of privacy issues. In conclusion, this paper can contribute to improving user privacy and trust in IoT applications with legacy IoT devices.", "title": "" }, { "docid": "b039a40e0822408cf86b4ae3a356519a", "text": "Sortagging is a versatile method for site-specific modification of proteins as applied to a variety of in vitro reactions. Here, we explore possibilities of adapting the sortase method for use in living cells. For intracellular sortagging, we employ the Ca²⁺-independent sortase A transpeptidase (SrtA) from Streptococcus pyogenes. Substrate proteins were equipped with the C-terminal sortase-recognition motif (LPXTG); we used proteins with an N-terminal (oligo)glycine as nucleophiles. We show that sortase-dependent protein ligation can be achieved in Saccharomyces cerevisiae and in mammalian HEK293T cells, both in the cytosol and in the lumen of the endoplasmic reticulum (ER). ER luminal sortagging enables secretion of the reaction products, among which circular polypeptides. Protein ligation of substrate and nucleophile occurs within 30 min of translation. 
The versatility of the method is shown by protein ligation of multiple substrates with green fluorescent protein-based nucleophiles in different intracellular compartments.", "title": "" }, { "docid": "e62daef8b5273096e0f174c73e3674a8", "text": "A wide range of human-robot collaborative applications in diverse domains such as manufacturing, search-andrescue, health care, the entertainment industry, and social interactions, require an autonomous robot to follow its human companion. Different working environments and applications pose diverse challenges by adding constraints on the choice of sensors, the degree of autonomy, and dynamics of the person-following robot. Researchers have addressed these challenges in many ways and contributed to the development of a large body of literature. This paper provides a comprehensive overview of the literature by categorizing different aspects of person-following by autonomous robots. Also, the corresponding operational challenges are identified based on various design choices for ground, underwater, and aerial scenarios. In addition, state-of-the-art methods for perception, planning, control, and interaction are elaborately discussed and their applicability in varied operational scenarios are presented. Then, qualitative evaluations of some of the prominent methods are performed, corresponding practicalities are illustrated, and their feasibility is analyzed in terms of standard metrics. Furthermore, several prospective application areas are identified, and open problems are highlighted for future research.", "title": "" }, { "docid": "e43cb8fefc7735aeab0fa40ad44a2e15", "text": "Support vector machine (SVM) is an optimal margin based classification technique in machine learning. SVM is a binary linear classifier which has been extended to non-linear data using Kernels and multi-class data using various techniques like one-versus-one, one-versus-rest, Crammer Singer SVM, Weston Watkins SVM and directed acyclic graph SVM (DAGSVM) etc. SVM with a linear Kernel is called linear SVM and one with a non-linear Kernel is called non-linear SVM. Linear SVM is an efficient technique for high dimensional data applications like document classification, word-sense disambiguation, drug design etc. because under such data applications, test accuracy of linear SVM is closer to non-linear SVM while its training is much faster than non-linear SVM. SVM is continuously evolving since its inception and researchers have proposed many problem formulations, solvers and strategies for solving SVM. Moreover, due to advancements in the technology, data has taken the form of ‘Big Data’ which have posed a challenge for Machine Learning to train a classifier on this large-scale data. In this paper, we have presented a review on evolution of linear support vector machine classification, its solvers, strategies to improve solvers, experimental results, current challenges and research directions.", "title": "" }, { "docid": "ff952443eef41fb430ff2831b5ee33d5", "text": "The increasing activity in the Intelligent Transportation Systems (ITS) area faces a strong limitation: the slow pace at which the automotive industry is making cars \"smarter\". On the contrary, the smartphone industry is advancing quickly. Existing smartphones are endowed with multiple wireless interfaces and high computational power, being able to perform a wide variety of tasks. 
By combining smartphones with existing vehicles through an appropriate interface we are able to move closer to the smart vehicle paradigm, offering the user new functionalities and services when driving. In this paper we propose an Android-based application that monitors the vehicle through an On Board Diagnostics (OBD-II) interface, being able to detect accidents. Our proposed application estimates the G force experienced by the passengers in case of a frontal collision, which is used together with airbag triggers to detect accidents. The application reacts to positive detection by sending details about the accident through either e-mail or SMS to pre-defined destinations, immediately followed by an automatic phone call to the emergency services. Experimental results using a real vehicle show that the application is able to react to accident events in less than 3 seconds, a very low time, validating the feasibility of smartphone based solutions for improving safety on the road.", "title": "" }, { "docid": "491d98644c62c6b601657e235cb48307", "text": "The purpose of this study was to investigate the use of three-dimensional display formats for judgments of spatial information using an exocentric frame of reference. Eight subjects judged the azimuth and elevation that separated two computer-generated objects using either a perspective or stereoscopic display. Errors, which consisted of the difference in absolute value between the estimated and actual azimuth or elevation, were analyzed as the response variable. The data indicated that the stereoscopic display resulted in more accurate estimates of elevation, especially for images aligned approximately orthogonally to the viewing vector. However, estimates of relative azimuth direction were not improved by use of the stereoscopic display. Furthermore, it was shown that the effect of compression resulting from a 45-deg computer graphics eye point elevation produced a response bias that was symmetrical around the horizontal plane of the reference cube, and that the depth cue of binocular disparity provided by the stereoscopic display reduced the magnitude of the compression errors. Implications of the results for the design of spatial displays are discussed.", "title": "" }, { "docid": "2c87f9ef35795c89de6b60e1ceff18c8", "text": "The paper presents a fusion-tracker and pedestrian classifier for color and thermal cameras. The tracker builds a background model as a multi-modal distribution of colors and temperatures. It is constructed as a particle filter that makes a number of informed reversible transformations to sample the model probability space in order to maximize posterior probability of the scene model. Observation likelihoods of moving objects account their 3D locations with respect to the camera and occlusions by other tracked objects as well as static obstacles. After capturing the coordinates and dimensions of moving objects we apply a pedestrian classifier based on periodic gait analysis. To separate humans from other moving objects, such as cars, we detect, in human gait, a symmetrical double helical pattern, that can then be analyzed using the Frieze Group theory. 
The results of tracking on color and thermal sequences demonstrate that our algorithm is robust to illumination noise and performs well in the outdoor environments.", "title": "" }, { "docid": "641bc7bfd28f3df41dd0eaef0543832a", "text": "Monitoring parameters characterizing water quality, such as temperature, pH, and concentrations of heavy metals in natural waters, is often followed by transmitting the data to remote receivers using telemetry systems. Such systems are commonly powered by batteries, which can be inconvenient at times because batteries have a limited lifetime and must be recharged or replaced periodically to ensure that sufficient energy is available to power the electronics. To avoid these inconveniences, a microbial fuel cell was designed to power electrochemical sensors and small telemetry systems to transmit the data acquired by the sensors to remote receivers. The microbial fuel cell was combined with low-power, high-efficiency electronic circuitry providing a stable power source for wireless data transmission. To generate enough power for the telemetry system, energy produced by the microbial fuel cell was stored in a capacitor and used in short bursts when needed. Since commercial electronic circuits require a minimum 3.3 V input and our cell was able to deliver a maximum of 2.1 V, a DC-DC converter was used to boost the potential. The DC-DC converter powered a transmitter, which gathered the data from the sensor and transmitted it wirelessly to a remote receiver. To demonstrate the utility of the system, temporal variations in temperature were measured, and the data were wirelessly transmitted to a remote receiver.", "title": "" }, { "docid": "92628edcee9908713607a0dd36591194", "text": "OBJECTIVE\nTo describe the methodology utilized to calculate reliability and the generation of norms for 10 neuropsychological tests for children in Spanish-speaking countries.\n\n\nMETHOD\nThe study sample consisted of over 4,373 healthy children from nine countries in Latin America (Chile, Cuba, Ecuador, Guatemala, Honduras, Mexico, Paraguay, Peru, and Puerto Rico) and Spain. Inclusion criteria for all countries were to have between 6 to 17 years of age, an Intelligence Quotient of≥80 on the Test of Non-Verbal Intelligence (TONI-2), and score of <19 on the Children's Depression Inventory. Participants completed 10 neuropsychological tests. Reliability and norms were calculated for all tests.\n\n\nRESULTS\nTest-retest analysis showed excellent or good- reliability on all tests (r's>0.55; p's<0.001) except M-WCST perseverative errors whose coefficient magnitude was fair. All scores were normed using multiple linear regressions and standard deviations of residual values. Age, age2, sex, and mean level of parental education (MLPE) were included as predictors in the models by country. The non-significant variables (p > 0.05) were removed and the analysis were run again.\n\n\nCONCLUSIONS\nThis is the largest Spanish-speaking children and adolescents normative study in the world. For the generation of normative data, the method based on linear regression models and the standard deviation of residual values was used. 
This method allows determination of the specific variables that predict test scores, helps identify and control for collinearity of predictive variables, and generates continuous and more reliable norms than those of traditional methods.", "title": "" }, { "docid": "473eebca6dccf4e242c87bbabfd4b8a5", "text": "Text analytics systems often rely heavily on detecting and linking entity mentions in documents to knowledge bases for downstream applications such as sentiment analysis, question answering and recommender systems. A major challenge for this task is to be able to accurately detect entities in new languages with limited labeled resources. In this paper we present an accurate and lightweight, multilingual named entity recognition (NER) and linking (NEL) system. The contributions of this paper are three-fold: 1) Lightweight named entity recognition with competitive accuracy; 2) Candidate entity retrieval that uses search click-log data and entity embeddings to achieve high precision with a low memory footprint; and 3) efficient entity disambiguation. Our system achieves state-of-the-art performance on TAC KBP 2013 multilingual data and on English AIDA CONLL data.", "title": "" }, { "docid": "a50151963608bccdcb53b3f390db6918", "text": "In order to obtain more value added products, a product quality control is essentially required Many studies show that quality of agriculture products may be reduced from many causes. One of the most important factors of such quality plant diseases. Consequently, minimizing plant diseases allows substantially improving quality of the product Suitable diagnosis of crop disease in the field is very critical for the increased production. Foliar is the major important fungal disease of cotton and occurs in all growing Indian cotton regions. In this paper I express Technological Strategies uses mobile captured symptoms of Cotton Leaf Spot images and categorize the diseases using support vector machine. The classifier is being trained to achieve intelligent farming, including early detection of disease in the groves, selective fungicide application, etc. This proposed work is based on Segmentation techniques in which, the captured images are processed for enrichment first. Then texture and color Feature extraction techniques are used to extract features such as boundary, shape, color and texture for the disease spots to recognize diseases.", "title": "" }, { "docid": "29eebb40973bdfac9d1f1941d4c7c889", "text": "This paper explains a procedure for getting models of robot kinematics and dynamics that are appropriate for robot control design. The procedure consists of the following steps: 1) derivation of robot kinematic and dynamic models and establishing correctness of their structures; 2) experimental estimation of the model parameters; 3) model validation; and 4) identification of the remaining robot dynamics, not covered with the derived model. We give particular attention to the design of identification experiments and to online reconstruction of state coordinates, as these strongly influence the quality of the estimation process. The importance of correct friction modeling and the estimation of friction parameters are illuminated. The models of robot kinematics and dynamics can be used in model-based nonlinear control. The remaining dynamics cannot be ignored if high-performance robot operation with adequate robustness is required. 
The complete procedure is demonstrated for a direct-drive robotic arm with three rotational joints.", "title": "" }, { "docid": "6f5afc38b09fa4fd1e47d323cfe850c9", "text": "In the past several years there has been extensive research into honeypot technologies, primarily for detection and information gathering against external threats. However, little research has been done for one of the most dangerous threats, the advance insider, the trusted individual who knows your internal organization. These individuals are not after your systems, they are after your information. This presentation discusses how honeypot technologies can be used to detect, identify, and gather information on these specific threats.", "title": "" }, { "docid": "b317f33d159bddce908df4aa9ba82cf9", "text": "Point cloud source data for surface reconstruction is usually contaminated with noise and outliers. To overcome this deficiency, a density-based point cloud denoising method is presented to remove outliers and noisy points. First, particle-swam optimization technique is employed for automatically approximating optimal bandwidth of multivariate kernel density estimation to ensure the robust performance of density estimation. Then, mean-shift based clustering technique is used to remove outliers through a thresholding scheme. After removing outliers from the point cloud, bilateral mesh filtering is applied to smooth the remaining points. The experimental results show that this approach, comparably, is robust and efficient.", "title": "" }, { "docid": "72e9ed1d81f8dfce9492f5bb30fc91a1", "text": "A key component to the success of deep learning is the availability of massive amounts of training data. Building and annotating large datasets for solving medical image classification problems is today a bottleneck for many applications. Recently, capsule networks were proposed to deal with shortcomings of Convolutional Neural Networks (ConvNets). In this work, we compare the behavior of capsule networks against ConvNets under typical datasets constraints of medical image analysis, namely, small amounts of annotated data and class-imbalance. We evaluate our experiments on MNIST, Fashion-MNIST and medical (histological and retina images) publicly available datasets. Our results suggest that capsule networks can be trained with less amount of data for the same or better performance and are more robust to an imbalanced class distribution, which makes our approach very promising for the medical imaging community.", "title": "" }, { "docid": "291b8dc672341fbc286e89eefc46a1b1", "text": "We present an introduction to and a tutorial on the properties of the recently discovered ideal circuit element, a memristor. By definition, a memristor M relates the charge q and the magnetic flux φ in a circuit and complements a resistor R, a capacitor C and an inductor L as an ingredient of ideal electrical circuits. The properties of these three elements and their circuits are a part of the standard curricula. The existence of the memristor as the fourth ideal circuit element was predicted in 1971 based on symmetry arguments, but was clearly experimentally demonstrated just last year. We present the properties of a single memristor, memristors in series and parallel, as well as ideal memristor–capacitor (MC), memristor–inductor (ML) and memristor– capacitor–inductor (MCL) circuits. We find that the memristor has hysteretic current–voltage characteristics. 
We show that the ideal MC (ML) circuit undergoes non-exponential charge (current) decay with two time scales and that by switching the polarity of the capacitor, an ideal MCL circuit can be tuned from overdamped to underdamped. We present simple models which show that these unusual properties are closely related to the memristor’s internal dynamics. This tutorial complements the pedagogy of ideal circuit elements (R,C and L) and the properties of their circuits, and is aimed at undergraduate physics and electrical engineering students. (Some figures in this article are in colour only in the electronic version)", "title": "" }, { "docid": "f7c427f1bf94aa37c726a40254e9638c", "text": "Document classification for text, images and other applicable entities has long been a focus of research in academia and also finds application in many industrial settings. Amidst a plethora of approaches to solve such problems, machine-learning techniques have found success in a variety of scenarios. In this paper we discuss the design of a machine learning-based semi-supervised job title classification system for the online job recruitment domain currently in production at CareerBuilder.com and propose enhancements to it. The system leverages a varied collection of classification as well clustering algorithms. These algorithms are encompassed in an architecture that facilitates leveraging existing off-the-shelf machine learning tools and techniques while keeping into consideration the challenges of constructing a scalable classification system for a large taxonomy of categories. As a continuously evolving system that is still under development we first discuss the existing semi-supervised classification system which is composed of both clustering and classification components in a proximity-based classifier setup and results of which are already used across numerous products at CareerBuilder. We then elucidate our long-term goals for job title classification and propose enhancements to the existing system in the form of a two-stage coarse and fine level classifier augmentation to construct a cascade of hierarchical vertical classifiers. Preliminary results are presented using experimental evaluation on real world industrial data.", "title": "" }, { "docid": "8cda36e81db2bce7f9b648a20c0a55a5", "text": "Scalable and effective analysis of large text corpora remains a challenging problem as our ability to collect textual data continues to increase at an exponential rate. To help users make sense of large text corpora, we present a novel visual analytics system, Parallel-Topics, which integrates a state-of-the-art probabilistic topic model Latent Dirichlet Allocation (LDA) with interactive visualization. To describe a corpus of documents, ParallelTopics first extracts a set of semantically meaningful topics using LDA. Unlike most traditional clustering techniques in which a document is assigned to a specific cluster, the LDA model accounts for different topical aspects of each individual document. This permits effective full text analysis of larger documents that may contain multiple topics. To highlight this property of the model, ParallelTopics utilizes the parallel coordinate metaphor to present the probabilistic distribution of a document across topics. Such representation allows the users to discover single-topic vs. multi-topic documents and the relative importance of each topic to a document of interest. 
In addition, since most text corpora are inherently temporal, ParallelTopics also depicts the topic evolution over time. We have applied ParallelTopics to exploring and analyzing several text corpora, including the scientific proposals awarded by the National Science Foundation and the publications in the VAST community over the years. To demonstrate the efficacy of ParallelTopics, we conducted several expert evaluations, the results of which are reported in this paper.", "title": "" } ]
scidocsrr
00511163313974cf801a2e7e11333717
Channel coordination in green supply chain management
[ { "docid": "3bbbce07c492a3e870df4f71a7f42b5c", "text": "The supply chain has been traditionally defined as a one-way, integrated manufacturing process wherein raw materials are converted into final products, then delivered to customers. Under this definition, the supply chain includes only those activities associated with manufacturing, from raw material acquisition to final product delivery. However, due to recent changing environmental requirements affecting manufacturing operations, increasing attention is given to developing environmental management (EM) strategies for the supply chain. This research: (1) investigates the environmental factors leading to the development of an extended environmental supply chain, (2) describes the elemental differences between the extended supply chain and the traditional supply chain, (3) describes the additional challenges presented by the extension, (4) presents performance measures appropriate for the extended supply chain, and (5) develops a general procedure towards achieving and maintaining the green supply chain.", "title": "" } ]
[ { "docid": "ac529a455bcefa58abafa6c679bec2b4", "text": "This article presents near-optimal guarantees for stable and robust image recovery from undersampled noisy measurements using total variation minimization. In particular, we show that from O(s log(N)) nonadaptive linear measurements, an image can be reconstructed to within the best s-term approximation of its gradient up to a logarithmic factor, and this factor can be removed by taking slightly more measurements. Along the way, we prove a strengthened Sobolev inequality for functions lying in the null space of suitably incoherent matrices.", "title": "" }, { "docid": "0209132c7623c540c125a222552f33ac", "text": "This paper reviews the criticism on the 4Ps Marketing Mix framework, the most popular tool of traditional marketing management, and categorizes the main objections of using the model as the foundation of physical marketing. It argues that applying the traditional approach, based on the 4Ps paradigm, is also a poor choice in the case of virtual marketing and identifies two main limitations of the framework in online environments: the drastically diminished role of the Ps and the lack of any strategic elements in the model. Next to identifying the critical factors of the Web marketing, the paper argues that the basis for successful E-Commerce is the full integration of the virtual activities into the company’s physical strategy, marketing plan and organisational processes. The four S elements of the Web-Marketing Mix framework present a sound and functional conceptual basis for designing, developing and commercialising Business-to-Consumer online projects. The model was originally developed for educational purposes and has been tested and refined by means of field projects; two of them are presented as case studies in the paper.  2002 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "72173ef38d5fd62f73de467e722f970e", "text": "This study uses data collected from adult U.S. residents in 2004 and 2005 to examine whether loneliness and life satisfaction are associated with time spent at home on various Internet activities. Cross-sectional models reveal that time spent browsing the web is positively related to loneliness and negatively related to life satisfaction. Some of the relationships revealed by cross-sectional models persist even when considering the same individuals over time in fixed-effects models that account for time-invariant, individual-level characteristics. Our results vary according to how the time use data were collected, indicating that survey design can have important consequences for research in this area. 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "dcc0237d174b6d41d4a4bcd4e00d172e", "text": "Meander line antenna (MLA) is an electrically small antenna which poses several performance related issues such as narrow bandwidth, high VSWR, low gain and high cross polarization levels. This paper describe the design ,simulation and development of meander line microstrip antenna at wireless band, the antenna was modeled using microstrip lines and S parameter for the antenna was obtained. The properties of the antenna such as bandwidth, beamwidth, gain, directivity, return loss and polarization were obtained.", "title": "" }, { "docid": "66432ab91b459c3de8e867c8214029d8", "text": "Distributional hypothesis lies in the root of most existing word representation models by inferring word meaning from its external contexts. 
However, distributional models cannot handle rare and morphologically complex words very well and fail to identify some finegrained linguistic regularity as they are ignoring the word forms. On the contrary, morphology points out that words are built from some basic units, i.e., morphemes. Therefore, the meaning and function of such rare words can be inferred from the words sharing the same morphemes, and many syntactic relations can be directly identified based on the word forms. However, the limitation of morphology is that it cannot infer the relationship between two words that do not share any morphemes. Considering the advantages and limitations of both approaches, we propose two novel models to build better word representations by modeling both external contexts and internal morphemes in a jointly predictive way, called BEING and SEING. These two models can also be extended to learn phrase representations according to the distributed morphology theory. We evaluate the proposed models on similarity tasks and analogy tasks. The results demonstrate that the proposed models can outperform state-of-the-art models significantly on both word and phrase representation learning.", "title": "" }, { "docid": "d8cc9c70034b484a066d1dc74724eaab", "text": "An enhanced but simple triple band circular ring patch antenna with a new slotting technique is presented, which is most suitable for X-band, Ku-band and K-band applications. This compact micro strip antenna is obtained by inserting small rectangular strip in a circular ring patch antenna. The antenna has been designed and simulated on an FR4 substrate with dielectric constant of 4.4 and thickness of 2mm. The design is analysed by Finite Element Method based HFSS Simulator Software (version 14.0), The simulated return losses obtained are -35.80dB, -42.39dB, and -44.98dB at 8.96 GHz, 14.44 GHz, 18.97 GHz respectively. Therefore, this antenna can be applicable for X-band, Ku-band and K-band applications respectively.", "title": "" }, { "docid": "cfec098f84e157a2e12f0ff40551c977", "text": "In this paper, an online news recommender system for the popular social network, Facebook, is described. This system provides daily newsletters for communities on Facebook. The system fetches the news articles and filters them based on the community description to prepare the daily news digest. Explicit survey feedback from the users show that most users found the application useful and easy to use. They also indicated that they could get some community specific articles that they would not have got otherwise.", "title": "" }, { "docid": "3f1161fa81b19a15b0d4ff882b99b60a", "text": "INTRODUCTION\nDupilumab is a fully human IgG4 monoclonal antibody directed against the α subunit of the interleukin (IL)-4 receptor (IL-4Rα). Since the activation of IL-4Rα is utilized by both IL-4 and IL-13 to mediate their pathophysiological effects, dupilumab behaves as a dual antagonist of these two sister cytokines, which blocks IL-4/IL-13-dependent signal transduction. Areas covered: Herein, the authors review the cellular and molecular pathways activated by IL-4 and IL-13, which are relevant to asthma pathobiology. They also review: the mechanism of action of dupilumab, the phase I, II and III studies evaluating the pharmacokinetics as well as the safety, tolerability and clinical efficacy of dupilumab in asthma therapy. 
Expert opinion: Supported by a strategic mechanism of action, as well as by convincing preliminary clinical results, dupilumab currently appears to be a very promising biological drug for the treatment of severe uncontrolled asthma. It also may have benefits to comorbidities of asthma including atopic dermatitis, chronic sinusitis and nasal polyposis.", "title": "" }, { "docid": "bad0f688ae12916688e8a3a8d96a5565", "text": "This paper presents a method for creating coherently animated line drawings that include strong abstraction and stylization effects. These effects are achieved with active strokes: 2D contours that approximate and track the lines of an animated 3D scene. Active strokes perform two functions: they connect and smooth unorganized line samples, and they carry coherent parameterization to support stylized rendering. Line samples are approximated and tracked using active contours (\"snakes\") that automatically update their arrangment and topology to match the animation. Parameterization is maintained by brush paths that follow the snakes but are independent, permitting substantial shape abstraction without compromising fidelity in tracking. This approach renders complex models in a wide range of styles at interactive rates, making it suitable for applications like games and interactive illustrations.", "title": "" }, { "docid": "67476959e7b75e52b4e33776b8a10bb9", "text": "The volume of energy loss that Brazilian electrical utilities have to deal with has been ever increasing. Electricity distribution companies have suffered significant and increasing losses in the last years, due to theft, measurement errors and other irregularities. Therefore there is a great concern to identify the profile of irregular customers, in order to reduce the volume of such losses. This paper presents a combined approach of a neural networks committee and a neuro-fuzzy hierarchical system intended to increase the level of accuracy in the identification of irregularities among low voltage consumers. The data used to test the proposed system are from Light S.A., the distribution company of Rio de Janeiro. The results obtained presented a significant increase in the identification of irregular customers when compared to the current methodology employed by the company. Keywords— neural nets, hierarchical neuro-fuzzy systems, binary space partition, electricity distribution, fraud detection.", "title": "" }, { "docid": "101af3fab1f8abb4e2b75a067031048a", "text": "Although research on trust in an organizational context has advanced considerably in recent years, the literature has yet to produce a set of generalizable propositions that inform our understanding of the organization and coordination of work. We propose that conceptualizing trust as an organizing principle is a powerful way of integrating the diverse trust literature and distilling generalizable implications for how trust affects organizing. We develop the notion of trust as an organizing principle by specifying structuring and mobilizing as two sets of causal pathways through which trust influences several important properties of organizations. We further describe specific mechanisms within structuring and mobilizing that influence interaction patterns and organizational processes. The principal aim of the framework is to advance the literature by connecting the psychological and sociological micro-foundations of trust with the macro-bases of organizing. 
The paper concludes by demonstrating how the framework can be applied to yield novel insights into traditional views of organizations and to stimulate original and innovative avenues of organizational research that consider both the benefits and downsides of trust. (Trust; Organizing Principle; Structuring; Mobilizing) Introduction In the introduction to this special issue we observed that empirical research on trust was not keeping pace with theoretical developments in the field. We viewed this as a significant limitation and surmised that a special issue devoted to empirical research on trust would serve as a valuable vehicle for advancing the literature. In addition to the lack of empirical research, we would also make the observation that theories and evidence accumulating on trust in organizations is not well integrated and that the literature as a whole lacks coherence. At a general level, extant research provides “accumulating evidence that trust has a number of important benefits for organizations and their members” (Kramer 1999, p. 569). More specifically, Dirks and Ferrin’s (2001) review of the literature points to two distinct means through which trust generates these benefits. The dominant approach emphasizes the direct effects that trust has on important organizational phenomena such as: communication, conflict management, negotiation processes, satisfaction, and performance (both individual and unit). A second, less well studied, perspective points to the enabling effects of trust, whereby trust creates or enhances the conditions, such as positive interpretations of another’s behavior, that are conducive to obtaining organizational outcomes like cooperation and higher performance. The identification of these two perspectives provides a useful way of organizing the literature and generating insight into the mechanisms through which trust influences organizational outcomes. However, we are still left with a set of findings that have yet to be integrated on a theoretical level in a way that yields a set of generalizable propositions about the effects of trust on organizing. We believe this is due to the fact that research has, for the most part, embedded trust into existing theories. As a result, trust has been studied in a variety of different ways to address a wide range of organizational questions. This has yielded a diverse and eclectic body of knowledge about the relationship between trust and various organizational outcomes. At the same time, this approach has resulted in a somewhat fragmented view of the role of trust in an organizational context as a whole. In the remainder of this paper we begin to address the challenge of integrating the fragmented trust literature. While it is not feasible to develop a comprehensive framework that synthesizes the vast and diverse trust literature in a single paper, we draw together several key strands that relate to the organizational context. In particular, our paper aims to advance the literature by connecting the psychological and sociological microfoundations of trust with the macro-bases of organizing. BILL MCEVILY, VINCENZO PERRONE, AND AKBAR ZAHEER Trust as an Organizing Principle 92 ORGANIZATION SCIENCE/Vol. 14, No. 1, January–February 2003 Specifically, we propose that reconceptualizing trust as an organizing principle is a fruitful way of viewing the role of trust and comprehending how research on trust advances our understanding of the organization and coordination of economic activity. 
While it is our goal to generate a framework that coalesces our thinking about the processes through which trust, as an organizing principle, affects organizational life, we are not Pollyannish: trust indubitably has a down side, which has been little researched. We begin by elaborating on the notion of an organizing principle and then move on to conceptualize trust from this perspective. Next, we describe a set of generalizable causal pathways through which trust affects organizing. We then use that framework to identify some exemplars of possible research questions and to point to possible downsides of trust. Organizing Principles As Ouchi (1980) discusses, a fundamental purpose of organizations is to attain goals that require coordinated efforts. Interdependence and uncertainty make goal attainment more difficult and create the need for organizational solutions. The subdivision of work implies that actors must exchange information and rely on others to accomplish organizational goals without having complete control over, or being able to fully monitor, others’ behaviors. Coordinating actions is further complicated by the fact that actors cannot assume that their interests and goals are perfectly aligned. Consequently, relying on others is difficult when there is uncertainty about their intentions, motives, and competencies. Managing interdependence among individuals, units, and activities in the face of behavioral uncertainty constitutes a key organizational challenge. Organizing principles represent a way of solving the problem of interdependence and uncertainty. An organizing principle is the logic by which work is coordinated and information is gathered, disseminated, and processed within and between organizations (Zander and Kogut 1995). An organizing principle represents a heuristic for how actors interpret and represent information and how they select appropriate behaviors and routines for coordinating actions. Examples of organizing principles include: market, hierarchy, and clan (Ouchi 1980). Other have referred to these organizing principles as authority, price, and norms (Adler 2001, Bradach and Eccles 1989, Powell 1990). Each of these principles operates on the basis of distinct mechanisms that orient, enable, and constrain economic behavior. For instance, authority as an organizing principle solves the problem of coordinating action in the face of interdependence and uncertainty by reallocating decision-making rights (Simon 1957, Coleman 1990). Price-based organizing principles revolve around the idea of making coordination advantageous for each party involved by aligning incentives (Hayek 1948, Alchian and Demsetz 1972). Compliance to internalized norms and the resulting self-control of the clan form is another organizing principle that has been identified as a means of achieving coordinated action (Ouchi 1980). We propose that trust is also an organizing principle and that conceptualizing trust in this way provides a powerful means of integrating the disparate research on trust and distilling generalizable implications for how trust affects organizing. We view trust as most closely related to the clan organizing principle. By definition clans rely on trust (Ouchi 1980). However, trust can and does occur in organizational contexts outside of clans. 
For instance, there are a variety of organizational arrangements where cooperation in mixed-motive situations depends on trust, such as in repeated strategic alliances (Gulati 1995), buyer-supplier relationships (Dyer and Chu this issue), and temporary groups in organizations (Meyerson et al. 1996). More generally, we believe that trust frequently operates in conjunction with other organizing principles. For instance, Dirks (2000) found that while authority is important for behaviors that can be observed or controlled, trust is important when there exists performance ambiguity or behaviors that cannot be observed or controlled. Because most organizations have a combination of behaviors that can and cannot be observed or controlled, authority and trust co-occur. More generally, we believe that mixed or plural forms are the norm, consistent with Bradach and Eccles (1989). In some situations, however, trust may be the primary organizing principle, such as when monitoring and formal controls are difficult and costly to use. In these cases, trust represents an efficient choice. In other situations, trust may be relied upon due to social, rather than efficiency, considerations. For instance, achieving a sense of personal belonging within a collectivity (Podolny and Barron 1997) and the desire to develop and maintain rewarding social attachments (Granovetter 1985) may serve as the impetus for relying on trust as an organizing principle. Trust as an Organizing Principle At a general level trust is the willingness to accept vulnerability based on positive expectations about another’s intentions or behaviors (Mayer et al. 1995, Rousseau et al. 1998). Because trust represents a positive assumption BILL MCEVILY, VINCENZO PERRONE, AND AKBAR ZAHEER Trust as an Organizing Principle ORGANIZATION SCIENCE/Vol. 14, No. 1, January–February 2003 93 about the motives and intentions of another party, it allows people to economize on information processing and safeguarding behaviors. By representing an expectation that others will act in a way that serves, or at least is not inimical to, one’s interests (Gambetta 1988), trust as a heuristic is a frame of reference that al", "title": "" }, { "docid": "ff2b53e0cecb849d1cbb503300f1ab9a", "text": "Receiving rapid, accurate and comprehensive knowledge about the conditions of damaged buildings after earthquake strike and other natural hazards is the basis of many related activities such as rescue, relief and reconstruction. Recently, commercial high-resolution satellite imagery such as IKONOS and QuickBird is becoming more powerful data resource for disaster management. In this paper, a method for automatic detection and classification of damaged buildings using integration of high-resolution satellite imageries and vector map is proposed. In this method, after extracting buildings position from vector map, they are located in the pre-event and post-event satellite images. By measuring and comparing different textural features for extracted buildings in both images, buildings conditions are evaluated through a Fuzzy Inference System. Overall classification accuracy of 74% and kappa coefficient of 0.63 were acquired. 
Results of the proposed method, indicates the capability of this method for automatic determination of damaged buildings from high-resolution satellite imageries.", "title": "" }, { "docid": "72c164c281e98386a054a25677c21065", "text": "The rapid digitalisation of the hospitality industry over recent years has brought forth many new points of attack for consideration. The hasty implementation of these systems has created a reality in which businesses are using the technical solutions, but employees have very little awareness when it comes to the threats and implications that they might present. This gap in awareness is further compounded by the existence of preestablished, often rigid, cultures that drive how hospitality businesses operate. Potential attackers are recognising this and the last two years have seen a huge increase in cyber-attacks within the sector.Attempts at addressing the increasing threats have taken the form of technical solutions such as encryption, access control, CCTV, etc. However, a high majority of security breaches can be directly attributed to human error. It is therefore necessary that measures for addressing the rising trend of cyber-attacks go beyond just providing technical solutions and make provision for educating employees about how to address the human elements of security. Inculcating security awareness amongst hospitality employees will provide a foundation upon which a culture of security can be created to promote the seamless and secured interaction of hotel users and technology.One way that the hospitality industry has tried to solve the awareness issue is through their current paper-based training. This is unengaging, expensive and presents limited ways to deploy, monitor and evaluate the impact and effectiveness of the content. This leads to cycles of constant training, making it very hard to initiate awareness, particularly within those on minimum waged, short-term job roles.This paper presents a structured approach for eliciting industry requirement for developing and implementing an immersive Cyber Security Awareness learning platform. It used a series of over 40 interviews and threat analysis of the hospitality industry to identify the requirements for designing and implementing cyber security program which encourage engagement through a cycle of reward and recognition. In particular, the need for the use of gamification elements to provide an engaging but gentle way of educating those with little or no desire to learn was identified and implemented. Also presented is a method for guiding and monitoring the impact of their employee’s progress through the learning management system whilst monitoring the levels of engagement and positive impact the training is having on the business.", "title": "" }, { "docid": "d781c28e343d63babafb0fd1353ae62c", "text": "The present study evaluated the personality characteristics and psychopathology of internet sex offenders (ISOs) using the Minnesota Multiphasic Personality Inventory, Second Edition (MMPI-2) to determine whether ISO personality profiles are different to those of general sex offenders (GSOs; e.g. child molesters and rapists). The ISOs consisted of 48 convicted males referred to a private sex offender treatment facility for a psychosexual risk assessment. The GSOs consisted of 104 incarcerated non-internet or general sex offenders. Findings indicated that ISOs scored significantly lower on the following scales: L, F, Pd and Sc. 
A comparison of the MMPI-2 scores of the ISO and GSO groups indicated that ISOs are a heterogeneous group with considerable withingroup differences. Current findings are consistent with the existing literature on the limited utility of the MMPI-2 in differentiating between subtypes of sex offenders.", "title": "" }, { "docid": "04bc7757006176cd1307874d19b11dc6", "text": "AIMS\nCompare vaginal resting pressure (VRP), pelvic floor muscle (PFM) strength, and endurance in women with and without diastasis recti abdominis at gestational week 21 and at 6 weeks, 6 months, and 12 months postpartum. Furthermore, to compare prevalence of urinary incontinence (UI) and pelvic organ prolapse (POP) in the two groups at the same assessment points.\n\n\nMETHODS\nThis is a prospective cohort study following 300 nulliparous pregnant women giving birth at a public university hospital. VRP, PFM strength, and endurance were measured with vaginal manometry. ICIQ-UI-SF questionnaire and POP-Q were used to assess UI and POP. Diastasis recti abdominis was diagnosed with palpation of  ≥2 fingerbreadths 4.5 cm above, at, or 4.5 cm below the umbilicus.\n\n\nRESULTS\nAt gestational week 21 women with diastasis recti abdominis had statistically significant greater VRP (mean difference 3.06 cm H2 O [95%CI: 0.70; 5.42]), PFM strength (mean difference 5.09 cm H2 O [95%CI: 0.76; 9.42]) and PFM muscle endurance (mean difference 47.08 cm H2 O sec [95%CI: 15.18; 78.99]) than women with no diastasis. There were no statistically significant differences between women with and without diastasis in any PFM variables at 6 weeks, 6 months, and 12 months postpartum. No significant difference was found in prevalence of UI in women with and without diastasis at any assessment points. Six weeks postpartum 15.9% of women without diastasis had POP versus 4.1% in the group with diastasis (P = 0.001).\n\n\nCONCLUSIONS\nWomen with diastasis were not more likely to have weaker PFM or more UI or POP. Neurourol. Urodynam. 36:716-721, 2017. © 2016 Wiley Periodicals, Inc.", "title": "" }, { "docid": "9ce5377315e50c70337aa4b7d6512de0", "text": "This paper discusses two main software engineering methodologies to system development, the waterfall model and the objectoriented approach. A review of literature reveals that waterfall model uses linear approach and is only suitable for sequential or procedural design. In waterfall, errors can only be detected at the end of the whole process and it may be difficult going back to repeat the entire process because the processes are sequential. Also, software based on waterfall approach is difficult to maintain and upgrade due to lack of integration between software components. On the other hand, the Object Oriented approach enables software systems to be developed as integration of software objects that work together to make a holistic and functional system. The software objects are independent of each other, allowing easy upgrading and maintenance of software codes. The paper also highlighted the merits and demerits of each of the approaches. This work concludes with the appropriateness of each approach in relation to the complexity of the problem domain.", "title": "" }, { "docid": "d6a6ee23cd1d863164c79088f75ece30", "text": "In our work, 3D objects classification has been dealt with convolutional neural networks which is a common paradigm recently in image recognition. In the first phase of experiments, 3D models in ModelNet10 and ModelNet40 data sets were voxelized and scaled with certain parameters. 
Classical CNN and 3D Dense CNN architectures were designed for training the pre-processed data. In addition, the two trained CNNs were ensembled and the results of them were observed. A success rate of 95.37% achieved on ModelNet10 by using 3D dense CNN, a success rate of 91.24% achieved with ensemble of two CNNs on ModelNet40.", "title": "" }, { "docid": "f66ebffa2efda9a4728a85c0b3a94fc7", "text": "The vulnerability of face recognition systems is a growing concern that has drawn the interest from both academic and research communities. Despite the availability of a broad range of face presentation attack detection (PAD) (or countermeasure or antispoofing) schemes, there exists no superior PAD technique due to evolution of sophisticated presentation attacks (or spoof attacks). In this paper, we present a new perspective for face presentation attack detection by introducing light field camera (LFC). Since the use of a LFC can record the direction of each incoming ray in addition to the intensity, it exhibits an unique characteristic of rendering multiple depth (or focus) images in a single capture. Thus, we present a novel approach that involves exploring the variation of the focus between multiple depth (or focus) images rendered by the LFC that in turn can be used to reveal the presentation attacks. To this extent, we first collect a new face artefact database using LFC that comprises of 80 subjects. Face artefacts are generated by simulating two widely used attacks, such as photo print and electronic screen attack. Extensive experiments carried out on the light field face artefact database have revealed the outstanding performance of the proposed PAD scheme when benchmarked with various well established state-of-the-art schemes.", "title": "" }, { "docid": "1b151d173825de2a2b43df8057d1a09d", "text": "An organisation can significantly improve its performance by observing how their business operations are currently being carried out. A great way to derive evidence-based process improvement insights is to compare the behaviour and performance of processes for different process cohorts by utilising the information recorded in event logs. A process cohort is a coherent group of process instances that has one or more shared characteristics. Such process performance comparisons can highlight positive or negative variations that can be evident in a particular cohort, thus enabling a tailored approach to process improvement. Although existing process mining techniques can be used to calculate various statistics from event logs for performance analysis, most techniques calculate and display the statistics for each cohort separately. Furthermore, the numerical statistics and simple visualisations may not be intuitive enough to allow users to compare the performance of various cohorts efficiently and effectively. We developed a novel visualisation framework for log-based process performance comparison to address these issues. It enables analysts to quickly identify the performance differences between cohorts. The framework supports the selection of cohorts and a three-dimensional visualisation to compare the cohorts using a variety of performance metrics. The approach has been implemented as a set of plug-ins within the open source process mining framework ProM and has been evaluated using two real-life datasets from the insurance domain to assess the usefulness of such a tool. 
This paper also derives a set of design principles from our approach which provide guidance for the development of new approaches to process cohort performance comparison. © 2017 Elsevier B.V. All rights reserved.", "title": "" } ]
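The preceding passage compares process performance across cohorts derived from event logs. As a hedged illustration only, not the ProM-based tool described above, the sketch below computes one simple comparison statistic, median case duration per cohort, from a toy event log; the column names, cohort attribute, and data are assumptions.

```python
# Minimal sketch: median case duration per cohort from an event log.
# Illustrative only; column names, cohort attribute, and data are assumed.
import pandas as pd

log = pd.DataFrame({
    "case_id":   ["c1", "c1", "c2", "c2", "c3", "c3"],
    "activity":  ["submit", "approve", "submit", "approve", "submit", "approve"],
    "timestamp": pd.to_datetime([
        "2017-01-01 09:00", "2017-01-02 17:00",
        "2017-01-01 10:00", "2017-01-05 12:00",
        "2017-01-02 08:00", "2017-01-03 08:30",
    ]),
    "cohort":    ["gold", "gold", "silver", "silver", "silver", "silver"],
})

# Case duration = last event time minus first event time, per case.
durations = (
    log.groupby(["cohort", "case_id"])["timestamp"]
       .agg(lambda ts: ts.max() - ts.min())
       .rename("duration")
       .reset_index()
)

# One summary number per cohort, the kind of statistic a cohort comparison visualises.
print(durations.groupby("cohort")["duration"].median())
```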
scidocsrr
230c500bcde6657aeab12d5f85d8fc03
An Introduction to Physiological Player Metrics for Evaluating Games
[ { "docid": "f21b0f519f4bf46cb61b2dc2861014df", "text": "Player experience is difficult to evaluate and report, especially using quantitative methodologies in addition to observations and interviews. One step towards tying quantitative physiological measures of player arousal to player experience reports are Biometric Storyboards (BioSt). They can visualise meaningful relationships between a player's physiological changes and game events. This paper evaluates the usefulness of BioSt to the game industry. We presented the Biometric Storyboards technique to six game developers and interviewed them about the advantages and disadvantages of this technique.", "title": "" } ]
[ { "docid": "51d579a4d0d1fa3ea0be1ccfd3bb92a9", "text": "ÐThis paper describes a method for partitioning 3D surface meshes into useful segments. The proposed method generalizes morphological watersheds, an image segmentation technique, to 3D surfaces. This surface segmentation uses the total curvature of the surface as an indication of region boundaries. The surface is segmented into patches, where each patch has a relatively consistent curvature throughout, and is bounded by areas of higher, or drastically different, curvature. This algorithm has applications for a variety of important problems in visualization and geometrical modeling including 3D feature extraction, mesh reduction, texture mapping 3D surfaces, and computer aided design. Index TermsÐSurfaces, surface segmentation, watershed algorithm, curvature-based methods.", "title": "" }, { "docid": "4653bc89f67e1015919684d5ca732d8e", "text": "Visitors in Ragunan Zoo, often difficulties when trying to look for animals that want to visit. This difficulty will not happen if there is android -based mobile application that can guide visitors. Global Positioning System application such as Google Maps or GPS is used as an application that can inform our position on earth. Applications that are created not just to know \"where we are\" but has moved toward a more advanced system that can exploit the information for the convenience of users. GPS applications has been transformed into a pleasant traveling companion. Moreover, when visiting a city or a place that has never yet been visited, GPS can easily map a place that can drive well, so do not worry about getting lost in the city. Based on the idea of using GPS applications, create GPS application that can show the way and mapping of animal cages as well as information about the knowledge of each animal. This application is made to overcome the problems to occur and to further increase the number of visitors Ragunan Zoo.", "title": "" }, { "docid": "c1a76ba2114ec856320651489ee9b28b", "text": "The boost of available digital media has led to a significant increase in derivative work. With tools for manipulating objects becoming more and more mature, it can be very difficult to determine whether one piece of media was derived from another one or tampered with. As derivations can be done with malicious intent, there is an urgent need for reliable and easily usable tampering detection methods. However, even media considered semantically untampered by humans might have already undergone compression steps or light post-processing, making automated detection of tampering susceptible to false positives. In this paper, we present the PSBattles dataset which is gathered from a large community of image manipulation enthusiasts and provides a basis for media derivation and manipulation detection in the visual domain. The dataset consists of 102’028 images grouped into 11’142 subsets, each containing the original image as well as a varying number of manipulated derivatives.", "title": "" }, { "docid": "051603c7ee83c49b31428ce611de06c2", "text": "The Internet of Things (IoT) will feature pervasive sensing and control capabilities via a massive deployment of machine-type communication (MTC) devices. The limited hardware, low-complexity, and severe energy constraints of MTC devices present unique communication and security challenges. As a result, robust physical-layer security methods that can supplement or even replace lightweight cryptographic protocols are appealing solutions. 
In this paper, we present an overview of low-complexity physical-layer security schemes that are suitable for the IoT. A local IoT deployment is modeled as a composition of multiple sensor and data subnetworks, with uplink communications from sensors to controllers, and downlink communications from controllers to actuators. The state of the art in physical-layer security for sensor networks is reviewed, followed by an overview of communication network security techniques. We then pinpoint the most energy-efficient and low-complexity security techniques that are best suited for IoT sensing applications. This is followed by a discussion of candidate low-complexity schemes for communication security, such as on-off switching and space-time block codes. The paper concludes by discussing open research issues and avenues for further work, especially the need for a theoretically well-founded and holistic approach for incorporating complexity constraints in physical-layer security designs.", "title": "" }, { "docid": "ff09a72b95fbf3522d4df0f275fb5c3a", "text": "This paper provides a general overview of solid waste data and management practices employed in Turkey during the last decade. Municipal solid waste statistics and management practices including waste recovery and recycling initiatives have been evaluated. Detailed data on solid waste management practices including collection, recovery and disposal, together with the results of cost analyses, have been presented. Based on these evaluations basic cost estimations on collection and sorting of recyclable solid waste in Turkey have been provided. The results indicate that the household solid waste generation in Turkey, per capita, is around 0.6 kg/year, whereas municipal solid waste generation is close to 1 kg/year. The major constituents of municipal solid waste are organic in nature and approximately 1/4 of municipal solid waste is recyclable. Separate collection programmes for recyclable household waste by more than 60 municipalities, continuing in excess of 3 years, demonstrate solid evidence for public acceptance and continuing support from the citizens. Opinion polls indicate that more than 80% of the population in the project regions is ready and willing to participate in separate collection programmes. The analysis of output data of the Material Recovery Facilities shows that, although paper, including cardboard, is the main constituent, the composition of recyclable waste varies strongly by the source or the type of collection point.", "title": "" }, { "docid": "807a94db483f0ca72d3096e4897d2c76", "text": "A typical scene contains many different objects that, because of the limited processing capacity of the visual system, compete for neural representation. The competition among multiple objects in visual cortex can be biased by both bottom-up sensory-driven mechanisms and top-down influences, such as selective attention. Functional brain imaging studies reveal that, both in the absence and in the presence of visual stimulation, biasing signals due to selective attention can modulate neural activity in visual cortex in several ways. 
Although the competition among stimuli for representation is ultimately resolved within visual cortex, the source of top-down biasing signals derives from a network of areas in frontal and parietal cortex.", "title": "" }, { "docid": "5e2eee141595ae58ca69ee694dc51c8a", "text": "Evidence-based dietary information represented as unstructured text is a crucial information that needs to be accessed in order to help dietitians follow the new knowledge arrives daily with newly published scientific reports. Different named-entity recognition (NER) methods have been introduced previously to extract useful information from the biomedical literature. They are focused on, for example extracting gene mentions, proteins mentions, relationships between genes and proteins, chemical concepts and relationships between drugs and diseases. In this paper, we present a novel NER method, called drNER, for knowledge extraction of evidence-based dietary information. To the best of our knowledge this is the first attempt at extracting dietary concepts. DrNER is a rule-based NER that consists of two phases. The first one involves the detection and determination of the entities mention, and the second one involves the selection and extraction of the entities. We evaluate the method by using text corpora from heterogeneous sources, including text from several scientifically validated web sites and text from scientific publications. Evaluation of the method showed that drNER gives good results and can be used for knowledge extraction of evidence-based dietary recommendations.", "title": "" }, { "docid": "3a1bbaea6dae7f72a5276a32326884fe", "text": "Statistics suggests that there are around 40 cases per million of quadriplegia every year. Great people like Stephen Hawking have been suffering from this phenomenon. Our project attempts to make lives of the people suffering from this phenomenon simple by helping them move around on their own and not being a burden on others. The idea is to create an Eye Controlled System which enables the movement of the patient’s wheelchair depending on the movements of eyeball. A person suffering from quadriplegia can move his eyes and partially tilt his head, thus giving is an opportunity for detecting these movements. There are various kinds of interfaces developed for powered wheelchair and also there are various new techniques invented but these are costly and not affordable to the poor and needy people. In this paper, we have proposed the simpler and cost effective method of developing wheelchair. We have created a system wherein a person sitting on this automated Wheel Chair with a camera mounted on it, is able to move in a direction just by looking in that direction by making eye movements. The captured camera signals are then send to PC and controlled MATLAB, which will then be send to the Arduino circuit over the Serial Interface which in turn will control motors and allow the wheelchair to move in a particular direction. The system is affordable and hence can be used by patients spread over a large economy range. KeywordsAutomatic wheelchair, Iris Movement Detection, Servo Motor, Daugman’s algorithm, Arduino.", "title": "" }, { "docid": "7b25d1c4d20379a8a0fabc7398ea2c28", "text": "In this paper we introduce an efficient and stable implicit SPH method for the physically-based simulation of incompressible fluids. In the area of computer graphics the most efficient SPH approaches focus solely on the correction of the density error to prevent volume compression. 
However, the continuity equation for incompressible flow also demands a divergence-free velocity field which is neglected by most methods. Although a few methods consider velocity divergence, they are either slow or have a perceivable density fluctuation.\n Our novel method uses an efficient combination of two pressure solvers which enforce low volume compression (below 0.01%) and a divergence-free velocity field. This can be seen as enforcing incompressibility both on position level and velocity level. The first part is essential for realistic physical behavior while the divergence-free state increases the stability significantly and reduces the number of solver iterations. Moreover, it allows larger time steps which yields a considerable performance gain since particle neighborhoods have to be updated less frequently. Therefore, our divergence-free SPH (DFSPH) approach is significantly faster and more stable than current state-of-the-art SPH methods for incompressible fluids. We demonstrate this in simulations with millions of fast moving particles.", "title": "" }, { "docid": "80bb8f4af70a6c0b6dc5fd149c128154", "text": "The skin care product market is growing due to the threat of ultraviolet (UV) radiation caused by the destruction of the ozone layer, increasing demand for tanning, and the tendency to wear less clothing. Accordingly, there is a potential demand for a personalized UV monitoring device, which can play a fundamental role in skin cancer prevention by providing measurements of UV radiation intensities and corresponding recommendations. This paper highlights the development and initial validation of a wireless and portable embedded device for personalized UV monitoring which is based on a novel software architecture, a high-end UV sensor, and conventional PDA (or a cell phone). In terms of short-term applications, by calculating the UV index, it informs the users about their maximum recommended sun exposure time by taking their skin type and sun protection factor (SPF) of the applied sunscreen into consideration. As for long-term applications, given that the damage caused by UV light is accumulated over days, it displays the amount of UV received over a certain course of time, from a single day to a month.", "title": "" }, { "docid": "9a0530ae13507d14b66ee74ec05c43bd", "text": "The paper investigates the role of the government and self-regulatory reputation mechanisms to internalise externalities of market operation. If it pays off for companies to invest in a good reputation by an active policy of corporate social responsibility (CSR), external effects of the market will be (partly) internalised by the market itself. The strength of the reputation mechanism depends on the functioning of non governmental organisations (NGOs), the transparency of the company, the time horizon of the company, and on the behaviour of employees, consumers and investors. On the basis of an extensive study of the empirical literature on these topics, we conclude that in general the working of the reputation mechanism is rather weak. Especially the transparency of companies is a bottleneck. If the government would force companies to be more transparent, it could initiate a self-enforcing spiral that would improve the working of the reputation mechanism. We also argue that the working of the reputation mechanism will be weaker for smaller companies and for both highly competitive and monopolistic markets. 
We therefore conclude that government regulation is still necessary, especially for small companies. Tijdschrift voor Economie en Management Vol. XLIX, 2, 2004", "title": "" }, { "docid": "cd058902ed470efc022c328765a40b34", "text": "Secure signal authentication is arguably one of the most challenging problems in the Internet of Things (IoT), due to the large-scale nature of the system and its susceptibility to man-in-the-middle and data-injection attacks. In this paper, a novel watermarking algorithm is proposed for dynamic authentication of IoT signals to detect cyber-attacks. The proposed watermarking algorithm, based on a deep learning long short-term memory structure, enables the IoT devices (IoTDs) to extract a set of stochastic features from their generated signal and dynamically watermark these features into the signal. This method enables the IoT gateway, which collects signals from the IoTDs, to effectively authenticate the reliability of the signals. Moreover, in massive IoT scenarios, since the gateway cannot authenticate all of the IoTDs simultaneously due to computational limitations, a game-theoretic framework is proposed to improve the gateway’s decision making process by predicting vulnerable IoTDs. The mixed-strategy Nash equilibrium (MSNE) for this game is derived, and the uniqueness of the expected utility at the equilibrium is proven. In the massive IoT system, due to the large set of available actions for the gateway, the MSNE is shown to be analytically challenging to derive, and thus, a learning algorithm that converges to the MSNE is proposed. Moreover, in order to handle incomplete information scenarios, in which the gateway cannot access the state of the unauthenticated IoTDs, a deep reinforcement learning algorithm is proposed to dynamically predict the state of unauthenticated IoTDs and allow the gateway to decide on which IoTDs to authenticate. Simulation results show that with an attack detection delay of under 1 s, the messages can be transmitted from IoTDs with an almost 100% reliability. The results also show that by optimally predicting the set of vulnerable IoTDs, the proposed deep reinforcement learning algorithm reduces the number of compromised IoTDs by up to 30%, compared to an equal probability baseline.", "title": "" }, { "docid": "d842f25d20a85f19c63546501bc6699a", "text": "Microservices have been one of the fastest-rising trends in the development of enterprise applications and enterprise application landscapes. Even though various mapping studies investigated the open challenges around microservices from literature, it is difficult to have a clear view of existing challenges in designing, developing, and maintaining systems based on microservices architecture as it is perceived by practitioners. In this paper, we present the results of an empirical survey to assess the current state of practice and collect challenges in microservices architecture. Therefore, we synthesize the 25 collected results and produce a clear overview for answering our research questions. The result of our study can be a basis for planning future research and applications of microservices architecture.", "title": "" }, { "docid": "0a37fcb6c1fba747503fc4e3b5540680", "text": "In this paper we introduce the problem of predicting action progress in videos. We argue that this is an extremely important task because, on the one hand, it can be valuable for a wide range of applications and, on the other hand, it facilitates better action detection results. 
To solve this problem we introduce a novel approach, named ProgressNet, capable of predicting when an action takes place in a video, where it is located within the frames, and how far it has progressed during its execution. Motivated by the recent success obtained from the interaction of Convolutional and Recurrent Neural Networks, our model is based on a combination of the Faster R-CNN framework, to make framewise predictions, and LSTM networks, to estimate action progress through time. After introducing two evaluation protocols for the task at hand, we demonstrate the capability of our model to effectively predict action progress on the UCF-101 and J-HMDB datasets. Additionally, we show that exploiting action progress it is also possible to improve spatio-temporal localization.", "title": "" }, { "docid": "bd817e69a03da1a97e9c412b5e09eb33", "text": "The emergence of carbapenemase producing bacteria, especially New Delhi metallo-β-lactamase (NDM-1) and its variants, worldwide, has raised amajor public health concern. NDM-1 hydrolyzes a wide range of β-lactam antibiotics, including carbapenems, which are the last resort of antibiotics for the treatment of infections caused by resistant strain of bacteria. In this review, we have discussed bla NDM-1variants, its genetic analysis including type of specific mutation, origin of country and spread among several type of bacterial species. Wide members of enterobacteriaceae, most commonly Escherichia coli, Klebsiella pneumoniae, Enterobacter cloacae, and gram-negative non-fermenters Pseudomonas spp. and Acinetobacter baumannii were found to carry these markers. Moreover, at least seventeen variants of bla NDM-type gene differing into one or two residues of amino acids at distinct positions have been reported so far among different species of bacteria from different countries. The genetic and structural studies of these variants are important to understand the mechanism of antibiotic hydrolysis as well as to design new molecules with inhibitory activity against antibiotics. This review provides a comprehensive view of structural differences among NDM-1 variants, which are a driving force behind their spread across the globe.", "title": "" }, { "docid": "3c5e8575ca6c35c3f19c5c2b1a61565f", "text": "In this paper, a 77-GHz automotive radar sensor transceiver front-end module is packaged with a novel embedded wafer level packaging (EMWLP) technology. The bare transceiver die and the pre-fabricated through silicon via (TSV) chip are reconfigured to form a molded wafer through a compression molding process. The TSVs built on a high resistivity wafer serve as vertical interconnects, carrying radio-frequency (RF) signals up to 77 GHz. The RF path transitions are carefully designed to minimize the insertion loss in the frequency band of concern. The proposed EMWLP module also provides a platform to design integrated passive components. A substrate-integrated waveguide resonator is implemented with TSVs as the via fences, and it is later used to design a second-order 77-GHz high performance bandpass filter. Both the resonator and the bandpass filter are fabricated and measured, and the measurement results match with the simulation results very well.", "title": "" }, { "docid": "de7331c328ba54b7ddd8a542aec3b19f", "text": "Predicting the next location a user tends to visit is an important task for applications like location-based advertising, traffic planning, and tour recommendation. 
We consider the next location prediction problem for semantic trajectory data, wherein each GPS record is attached with a text message that describes the user's activity. In semantic trajectories, the confluence of spatiotemporal transitions and textual messages indicates user intents at a fine granularity and has great potential in improving location prediction accuracies. Nevertheless, existing methods designed for GPS trajectories fall short in capturing latent user intents for such semantics-enriched trajectory data. We propose a method named semantics-enriched recurrent model (SERM). SERM jointly learns the embeddings of multiple factors (user, location, time, keyword) and the transition parameters of a recurrent neural network in a unified framework. Therefore, it effectively captures semantics-aware spatiotemporal transition regularities to improve location prediction accuracies. Our experiments on two real-life semantic trajectory datasets show that SERM achieves significant improvements over state-of-the-art methods.", "title": "" }, { "docid": "1d53b01ee1a721895a17b7d0f3535a28", "text": "We present a suite of algorithms for self-organization of wireless sensor networks, in which there is a scalably large number of mainly static nodes with highly constrained energy resources. The protocols further support slow mobility by a subset of the nodes, energy-efficient routing, and formation of ad hoc subnetworks for carrying out cooperative signal processing functions among a set of the nodes. † This research is supported by DARPA contract number F04701-97-C-0010, and was presented in part at the 37 Allerton Conference on Communication, Computing and Control, September 1999. ‡ Corresponding author.", "title": "" }, { "docid": "bb41e52004782f8ead3549fb9d746e6d", "text": "A method to generate stable transconductance (gm) without using precise external components is presented. The off-chip resistor in a conventional constant-gm bias circuit is replaced with a variable on-chip resistor. A MOSFET biased in triode region is used as a variable resistor. The resistance of the MOSFET is tuned by a background tuning scheme to achieve the stable transconductance that is immune to process, voltage and temperature variation. The transconductance generated by the constant-gm bias circuit designed in 0.18mum CMOS process with 1.5F supply displays less than 1% variation for a 20% change in power supply voltage and less than plusmn1.5% variation for a 60degC change in temperature. The whole circuit draws approximately 850muA from a supply", "title": "" }, { "docid": "3228d57f3d74f56444ce7fb9ed18e042", "text": "Gaussian process (GP) models are widely used to perform Bayesian nonlinear regression and classification — tasks that are central to many machine learning problems. A GP is nonparametric, meaning that the complexity of the model grows as more data points are received. Another attractive feature is the behaviour of the error bars. They naturally grow in regions away from training data where we have high uncertainty about the interpolating function. In their standard form GPs have several limitations, which can be divided into two broad categories: computational difficulties for large data sets, and restrictive modelling assumptions for complex data sets. This thesis addresses various aspects of both of these problems. The training cost for a GP hasO(N3) complexity, whereN is the number of training data points. This is due to an inversion of the N × N covariance matrix. 
In this thesis we develop several new techniques to reduce this complexity to O(NM²), where M is a user-chosen number much smaller than N. The sparse approximation we use is based on a set of M ‘pseudo-inputs’ which are optimised together with hyperparameters at training time. We develop a further approximation based on clustering inputs that can be seen as a mixture of local and global approximations. Standard GPs assume a uniform noise variance. We use our sparse approximation described above as a way of relaxing this assumption. By making a modification of the sparse covariance function, we can model input dependent noise. To handle high dimensional data sets we use supervised linear dimensionality reduction. As another extension of the standard GP, we relax the Gaussianity assumption of the process by learning a nonlinear transformation of the output space. All these techniques further increase the applicability of GPs to real complex data sets. We present empirical comparisons of our algorithms with various competing techniques, and suggest problem dependent strategies to follow in practice.", "title": "" } ]
scidocsrr
447e0cd3b3155c45bc6a3c37b7b65ed7
Recurrent Network Models of Sequence Generation and Memory
[ { "docid": "2065faf3e72a8853dd6cbba1daf9c64a", "text": "One of a good overview all the output neurons. The fixed point attractors have resulted in order to the attractor furthermore. As well as memory classification and all the basic ideas. Introducing the form of strange attractors or licence agreement may be fixed point! The above with input produces and the techniques brought from one of cognitive processes. The study of cpgs is the, global dynamics as nearest neighbor classifiers. Attractor networks encode knowledge of the, network will be ergodic so. These synapses will be applicable exploring one interesting and neural networks other technology professionals.", "title": "" } ]
[ { "docid": "9ca90172c5beff5922b4f5274ef61480", "text": "In the past decade, Convolutional Neural Networks (CNNs) have demonstrated state-of-the-art performance in various Artificial Intelligence tasks. To accelerate the experimentation and development of CNNs, several software frameworks have been released, primarily targeting power-hungry CPUs and GPUs. In this context, reconfigurable hardware in the form of FPGAs constitutes a potential alternative platform that can be integrated in the existing deep-learning ecosystem to provide a tunable balance between performance, power consumption, and programmability. In this article, a survey of the existing CNN-to-FPGA toolflows is presented, comprising a comparative study of their key characteristics, which include the supported applications, architectural choices, design space exploration methods, and achieved performance. Moreover, major challenges and objectives introduced by the latest trends in CNN algorithmic research are identified and presented. Finally, a uniform evaluation methodology is proposed, aiming at the comprehensive, complete, and in-depth evaluation of CNN-to-FPGA toolflows.", "title": "" }, { "docid": "b59e90e5d1fa3f58014dedeea9d5b6e4", "text": "The results of vitrectomy in 240 consecutive cases of ocular trauma were reviewed. Of these cases, 71.2% were war injuries. Intraocular foreign bodies were present in 155 eyes, of which 74.8% were metallic and 61.9% ferromagnetic. Multivariate analysis identified the prognostic factors predictive of poor visual outcome, which included: (1) presence of an afferent pupillary defect; (2) double perforating injuries; and (3) presence of intraocular foreign bodies. Association of vitreous hemorrhage with intraocular foreign bodies was predictive of a poor prognosis. Eyes with foreign bodies retained in the anterior segment and vitreous had a better prognosis than those with foreign bodies embedded in the retina. Timing of vitrectomy and type of trauma had no significant effect on the final visual results. Prophylactic scleral buckling reduced the incidence of retinal detachment after surgery. Injuries confined to the cornea had a better prognosis than scleral injuries.", "title": "" }, { "docid": "60f6e3345aae1f91acb187ba698f073b", "text": "A Cube-Satellite (CubeSat) is a small satellite weighing no more than one kilogram. CubeSats are used for space research, but their low-rate communication capability limits functionality. As greater payload and instrumentation functions are sought, increased data rate is needed. Since most CubeSats currently transmit at a 437 MHz frequency, several directional antenna types were studied for a 2.45 GHz, larger bandwidth transmission. This higher frequency provides the bandwidth needed for increasing the data rate. A deployable antenna mechanism maybe needed because most directional antennas are bigger than the CubeSat size constraints. From the study, a deployable hemispherical helical antenna prototype was built. Transmission between two prototype antenna equipped transceivers at varying distances tested the helical performance. When comparing the prototype antenna's maximum transmission distance to the other commercial antennas, the prototype outperformed all commercial antennas, except the patch antenna. The root cause was due to the helical antenna's narrow beam width. 
Future work can be done in attaining a more accurate alignment with the satellite's directional antenna to downlink with a terrestrial ground station.", "title": "" }, { "docid": "238b1a142b406a7e736126582675ba67", "text": "It was hypothesized that relative group status and endorsement of ideologies that legitimize group status differences moderate attributions to discrimination in intergroup encounters. According to the status-legitimacy hypothesis, the more members of low-status groups endorse the ideology of individual mobility, the less likely they are to attribute negative outcomes from higher status group members to discrimination. In contrast, the more members of high-status groups endorse individual mobility, the more likely they are to attribute negative outcomes from lower status group members to discrimination. Results from 3 studies using 2 different methodologies provide support for this hypothesis among members of different high-status (European Americans and men) and low-status (African Americans, Latino Americans, and women) groups.", "title": "" }, { "docid": "4e4f19bbec96e8d0e94fb488d17af6dd", "text": "Covering: 2012 to 2016Metabolic engineering using systems biology tools is increasingly applied to overproduce secondary metabolites for their potential industrial production. In this Highlight, recent relevant metabolic engineering studies are analyzed with emphasis on host selection and engineering approaches for the optimal production of various prokaryotic secondary metabolites: native versus heterologous hosts (e.g., Escherichia coli) and rational versus random approaches. This comparative analysis is followed by discussions on systems biology tools deployed in optimizing the production of secondary metabolites. The potential contributions of additional systems biology tools are also discussed in the context of current challenges encountered during optimization of secondary metabolite production.", "title": "" }, { "docid": "d2d7595f04af96d7499d7b7c06ba2608", "text": "Deep Neural Network (DNN) is a widely used deep learning technique. How to ensure the safety of DNN-based system is a critical problem for the research and application of DNN. Robustness is an important safety property of DNN. However, existing work of verifying DNN’s robustness is timeconsuming and hard to scale to large-scale DNNs. In this paper, we propose a boosting method for DNN robustness verification, aiming to find counter-examples earlier. Our observation is DNN’s different inputs have different possibilities of existing counter-examples around them, and the input with a small difference between the largest output value and the second largest output value tends to be the achilles’s heel of the DNN. We have implemented our method and applied it on Reluplex, a state-ofthe-art DNN verification tool, and four DNN attacking methods. The results of the extensive experiments on two benchmarks indicate the effectiveness of our boosting method.", "title": "" }, { "docid": "d59a2c1673d093584c5f19212d6ba520", "text": "Introduction and Motivation Today, a majority of data is fundamentally distributed in nature. Data for almost any task is collected over a broad area, and streams in at a much greater rate than ever before. In particular, advances in sensor technology and miniaturization have led to the concept of the sensor network: a (typically wireless) collection of sensing devices collecting detailed data about their surroundings. 
A fundamental question arises: how to query and monitor this rich new source of data? Similar scenarios emerge within the context of monitoring more traditional, wired networks, and in other emerging models such as P2P networks and grid-based computing. The prevailing paradigm in database systems has been understanding management of centralized data: how to organize, index, access, and query data that is held centrally on a single machine or a small number of closely linked machines. In these distributed scenarios, the axiom is overturned: now, data typically streams into remote sites at high rates. Here, it is not feasible to collect the data in one place: the volume of data collection is too high, and the capacity for data communication relatively low. For example, in battery-powered wireless sensor networks, the main drain on battery life is communication, which is orders of magnitude more expensive than computation or sensing. This establishes a fundamental concept for distributed stream monitoring: if we can perform more computational work within the network to reduce the communication needed, then we can significantly improve the value of our network, by increasing its useful life and extending the range of computation possible over the network. We consider two broad classes of approaches to such in-network query processing, by analogy to query types in traditional DBMSs. In the one shot model, a query is issued by a user at some site, and must be answered based on the current state of data in the network. We identify several possible approaches to this problem. For simple queries, partial computation of the result over a tree can reduce the data transferred significantly. For “holistic” queries, such as medians, count distinct and so on, clever composable summaries give a compact way to accurately approximate query answers. Lastly, careful modeling of correlations between measurements and other trends in the data can further reduce the number of sensors probed. In the continuous model, a query is placed by a user which re-", "title": "" }, { "docid": "63115b12e4a8192fdce26eb7e2f8989a", "text": "Theorems and techniques to form different types of transformationally invariant processing and to produce the same output quantitatively based on either transformationally invariant operators or symmetric operations have recently been introduced by the authors. In this study, we further propose to compose a geared rotationally identical CNN system (GRI-CNN) with a small angle increment by connecting networks of participated processes at the first flatten layer. Using an ordinary CNN structure as a base, requirements for constructing a GRI-CNN include the use of either symmetric input vector or kernels with an angle increment that can form a complete cycle as a \"gearwheel\". Four basic GRI-CNN structures were studied. Each of them can produce quantitatively identical output results when a rotation angle of the input vector is evenly divisible by the increment angle of the gear. Our study showed when a rotated input vector does not match to a gear angle, the GRI-CNN can also produce a highly consistent result. With an ultrafine increment angle (e.g., 1 or 0.1), a virtually isotropic CNN system can be constructed.", "title": "" }, { "docid": "11d06fb5474df44a6bc733bd5cd1263d", "text": "Understanding how materials that catalyse the oxygen evolution reaction (OER) function is essential for the development of efficient energy-storage technologies. 
The traditional understanding of the OER mechanism on metal oxides involves four concerted proton-electron transfer steps on metal-ion centres at their surface and product oxygen molecules derived from water. Here, using in situ 18O isotope labelling mass spectrometry, we provide direct experimental evidence that the O2 generated during the OER on some highly active oxides can come from lattice oxygen. The oxides capable of lattice-oxygen oxidation also exhibit pH-dependent OER activity on the reversible hydrogen electrode scale, indicating non-concerted proton-electron transfers in the OER mechanism. Based on our experimental data and density functional theory calculations, we discuss mechanisms that are fundamentally different from the conventional scheme and show that increasing the covalency of metal-oxygen bonds is critical to trigger lattice-oxygen oxidation and enable non-concerted proton-electron transfers during OER.", "title": "" }, { "docid": "00be65f8f46d245d8629a1faa30772d7", "text": "Concretization is one of the most labor-intensive phases of the modelbased testing process. This study concentrates on concretization of the abstract tests generated from the test models. The purpose of the study is to design and implement a structure to automate this phase which can reduce the required effort specially in every system update. The structure is completed and discussed as an extension on a modelbased testing tool named ModelJUnit using adaptation approach. In this structure, the focus is mainly on bridging the gap in data-level between the SUT and the model.", "title": "" }, { "docid": "d99d4bdf1af85c14653c7bbde10eca7b", "text": "Plants endure a variety of abiotic and biotic stresses, all of which cause major limitations to production. Among abiotic stressors, heavy metal contamination represents a global environmental problem endangering humans, animals, and plants. Exposure to heavy metals has been documented to induce changes in the expression of plant proteins. Proteins are macromolecules directly responsible for most biological processes in a living cell, while protein function is directly influenced by posttranslational modifications, which cannot be identified through genome studies. Therefore, it is necessary to conduct proteomic studies, which enable the elucidation of the presence and role of proteins under specific environmental conditions. This review attempts to present current knowledge on proteomic techniques developed with an aim to detect the response of plant to heavy metal stress. Significant contributions to a better understanding of the complex mechanisms of plant acclimation to metal stress are also discussed.", "title": "" }, { "docid": "869cc834f84bc88a258b2d9d9d4f3096", "text": "Obesity is a multifactorial disease characterized by an excessive weight for height due to an enlarged fat deposition such as adipose tissue, which is attributed to a higher calorie intake than the energy expenditure. The key strategy to combat obesity is to prevent chronic positive impairments in the energy equation. However, it is often difficult to maintain energy balance, because many available foods are high-energy yielding, which is usually accompanied by low levels of physical activity. The pharmaceutical industry has invested many efforts in producing antiobesity drugs; but only a lipid digestion inhibitor obtained from an actinobacterium is currently approved and authorized in Europe for obesity treatment. 
This compound inhibits the activity of pancreatic lipase, which is one of the enzymes involved in fat digestion. In a similar way, hundreds of extracts are currently being isolated from plants, fungi, algae, or bacteria and screened for their potential inhibition of pancreatic lipase activity. Among them, extracts isolated from common foodstuffs such as tea, soybean, ginseng, yerba mate, peanut, apple, or grapevine have been reported. Some of them are polyphenols and saponins with an inhibitory effect on pancreatic lipase activity, which could be applied in the management of the obesity epidemic.", "title": "" }, { "docid": "8d350cc11997b6a0dc96c9fef2b1919f", "text": "Task-parameterized models of movements aims at automatically adapting movements to new situations encountered by a robot. The task parameters can for example take the form of positions of objects in the environment, or landmark points that the robot should pass through. This tutorial aims at reviewing existing approaches for task-adaptive motion encoding. It then narrows down the scope to the special case of task parameters that take the form of frames of reference, coordinate systems, or basis functions, which are most commonly encountered in service robotics. Each section of the paper is accompanied with source codes designed as simple didactic examples implemented in Matlab with a full compatibility with GNU Octave, closely following the notation and equations of the article. It also presents ongoing work and further challenges that remain to be addressed, with examples provided in simulation and on a real robot (transfer of manipulation behaviors to the Baxter bimanual robot). The repository for the accompanying source codes is available at http://www.idiap.ch/software/pbdlib/.", "title": "" }, { "docid": "3745ead7df976976f3add631ad175930", "text": "Natural products and traditional medicines are of great importance. Such forms of medicine as traditional Chinese medicine, Ayurveda, Kampo, traditional Korean medicine, and Unani have been practiced in some areas of the world and have blossomed into orderly-regulated systems of medicine. This study aims to review the literature on the relationship among natural products, traditional medicines, and modern medicine, and to explore the possible concepts and methodologies from natural products and traditional medicines to further develop drug discovery. The unique characteristics of theory, application, current role or status, and modern research of eight kinds of traditional medicine systems are summarized in this study. Although only a tiny fraction of the existing plant species have been scientifically researched for bioactivities since 1805, when the first pharmacologically-active compound morphine was isolated from opium, natural products and traditional medicines have already made fruitful contributions for modern medicine. When used to develop new drugs, natural products and traditional medicines have their incomparable advantages, such as abundant clinical experiences, and their unique diversity of chemical structures and biological activities.", "title": "" }, { "docid": "f29f529ee14f4ae90ebb08ba26f8a8c1", "text": "After completing this article, the reader should be able to:  Describe the various biopsy types that require specimen imaging.  List methods of guiding biopsy procedures.  Explain the reasons behind specimen imaging. 
Describe various methods for imaging specimens.", "title": "" }, { "docid": "a39c0db041f31370135462af467426ed", "text": "Part of the ventral temporal lobe is thought to be critical for face perception, but what determines this specialization remains unknown. We present evidence that expertise recruits the fusiform gyrus 'face area'. Functional magnetic resonance imaging (fMRI) was used to measure changes associated with increasing expertise in brain areas selected for their face preference. Acquisition of expertise with novel objects (greebles) led to increased activation in the right hemisphere face areas for matching of upright greebles as compared to matching inverted greebles. The same areas were also more activated in experts than in novices during passive viewing of greebles. Expertise seems to be one factor that leads to specialization in the face area.", "title": "" }, { "docid": "d7e7cdc9ac55d5af199395becfe02d73", "text": "Text recognition in images is a research area which attempts to develop a computer system with the ability to automatically read text from images. These days there is a huge demand for storing the information available in paper documents into a computer storage disk and then later reusing this information through a searching process. One simple way to store information from these paper documents in a computer system is to first scan the documents and then store them as images. But to reuse this information it is very difficult to read the individual contents and to search the contents from these documents line-by-line and word-by-word. The challenges involved include the font characteristics of the characters in paper documents and the quality of the images. Due to these challenges, a computer is unable to recognize the characters while reading them. Thus there is a need for character recognition mechanisms to perform Document Image Analysis (DIA), which transforms documents from paper format to electronic format. In this paper we discuss a method for text recognition from images. The objective of this paper is the recognition of text from images, for better understanding by the reader, using a particular sequence of different processing modules.", "title": "" }, { "docid": "057a521ce1b852591a44417e788e4541", "text": "We introduce InfraStructs, material-based tags that embed information inside digitally fabricated objects for imaging in the Terahertz region. Terahertz imaging can safely penetrate many common materials, opening up new possibilities for encoding hidden information as part of the fabrication process. We outline the design, fabrication, imaging, and data processing steps to fabricate information inside physical objects. Prototype tag designs are presented for location encoding, pose estimation, object identification, data storage, and authentication. We provide detailed analysis of the constraints and performance considerations for designing InfraStruct tags. Future application scenarios range from production line inventory, to customized game accessories, to mobile robotics.", "title": "" }, { "docid": "f10ce9ef67abec42deeabbf98f7f7cd8", "text": "In this paper we first deal with the design and operational control of Automated Guided Vehicle (AGV) systems, starting from the literature on these topics. Three main issues emerge: track layout, the number of AGVs required and operational transportation control. A hierarchical queueing network approach to determine the number of AGVs is described.
Also basic concepts are presented for the transportation control of both a job-shop and a flow-shop. Next we report on the results of a case study, in which track layout and transportation control are the main issues. Finally we suggest some topics for further research.", "title": "" }, { "docid": "20b00a2cc472dfec851f4aea42578a9e", "text": "The self-regulatory strength model maintains that all acts of self-regulation, self-control, and choice result in a state of fatigue called ego-depletion. Self-determination theory differentiates between autonomous regulation and controlled regulation. Because making decisions represents one instance of self-regulation, the authors also differentiate between autonomous choice and controlled choice. Three experiments support the hypothesis that whereas conditions representing controlled choice would be ego-depleting, conditions that represented autonomous choice would not. In Experiment 3, the authors found significant mediation by perceived self-determination of the relation between the choice condition (autonomous vs. controlled) and ego-depletion as measured by performance.", "title": "" } ]
scidocsrr
5819b7ff73e9e77f30f5d417903402e5
Publications Received
[ { "docid": "026a0651177ee631a80aaa7c63a1c32f", "text": "This paper is an introduction to natural language interfaces to databases (Nlidbs). A brief overview of the history of Nlidbs is first given. Some advantages and disadvantages of Nlidbs are then discussed, comparing Nlidbs to formal query languages, form-based interfaces, and graphical interfaces. An introduction to some of the linguistic problems Nlidbs have to confront follows, for the benefit of readers less familiar with computational linguistics. The discussion then moves on to Nlidb architectures, portability issues, restricted natural language input systems (including menu-based Nlidbs), and Nlidbs with reasoning capabilities. Some less explored areas of Nlidb research are then presented, namely database updates, meta-knowledge questions, temporal questions, and multi-modal Nlidbs. The paper ends with reflections on the current state of the art.", "title": "" } ]
[ { "docid": "4fc356024295824f6c68360bf2fcb860", "text": "Detecting depression is a key public health challenge, as almost 12% of all disabilities can be attributed to depression. Computational models for depression detection must prove not only that can they detect depression, but that they can do it early enough for an intervention to be plausible. However, current evaluations of depression detection are poor at measuring model latency. We identify several issues with the currently popular ERDE metric, and propose a latency-weighted F1 metric that addresses these concerns. We then apply this evaluation to several models from the recent eRisk 2017 shared task on depression detection, and show how our proposed measure can better capture system differences.", "title": "" }, { "docid": "96b47f766be916548226abac36b8f318", "text": "Deep learning approaches have achieved state-of-the-art performance in cardiac magnetic resonance (CMR) image segmentation. However, most approaches have focused on learning image intensity features for segmentation, whereas the incorporation of anatomical shape priors has received less attention. In this paper, we combine a multi-task deep learning approach with atlas propagation to develop a shape-constrained bi-ventricular segmentation pipeline for short-axis CMR volumetric images. The pipeline first employs a fully convolutional network (FCN) that learns segmentation and landmark localisation tasks simultaneously. The architecture of the proposed FCN uses a 2.5D representation, thus combining the computational advantage of 2D FCNs networks and the capability of addressing 3D spatial consistency without compromising segmentation accuracy. Moreover, the refinement step is designed to explicitly enforce a shape constraint and improve segmentation quality. This step is effective for overcoming image artefacts (e.g. due to different breath-hold positions and large slice thickness), which preclude the creation of anatomically meaningful 3D cardiac shapes. The proposed pipeline is fully automated, due to network’s ability to infer landmarks, which are then used downstream in the pipeline to initialise atlas propagation. We validate the pipeline on 1831 healthy subjects and 649 subjects with pulmonary hypertension. Extensive numerical experiments on the two datasets demonstrate that our proposed method is robust and capable of producing accurate, high-resolution and anatomically smooth bi-ventricular 3D models, despite the artefacts in input CMR volumes.", "title": "" }, { "docid": "8e3aef1e18f1db603368a32be0ed9fab", "text": "IT departments are under pressure to serve their enterprises by professionalizing their business intelligence (BI) operation. Companies can only be effective when their systematic and structured approach to BI is linked into the business itself.", "title": "" }, { "docid": "48a78ce66c4cc2205a39ba25b2710e33", "text": "Viable tumor cells actively release vesicles into the peripheral circulation and other biologic fluids, which exhibit proteins and RNAs characteristic of that cell. Our group demonstrated the presence of these extracellular vesicles of tumor origin within the peripheral circulation of cancer patients and proposed their utility for diagnosing the presence of tumors and monitoring their response to therapy in the 1970s. 
However, it has only been in the past 10 years that these vesicles have garnered interest based on the recognition that they serve as essential vehicles for intercellular communication, are key determinants of the immunosuppressive microenvironment observed in cancer and provide stability to tumor-derived components that can serve as diagnostic biomarkers. To date, the clinical utility of extracellular vesicles has been hampered by issues with nomenclature and methods of isolation. The term \"exosomes\" was introduced in 1981 to denote any nanometer-sized vesicles released outside the cell and to differentiate them from intracellular vesicles. Based on this original definition, we use \"exosomes\" as synonymous with \"extracellular vesicles.\" While our original studies used ultracentrifugation to isolate these vesicles, we immediately became aware of the significant impact of the isolation method on the number, type, content and integrity of the vesicles isolated. In this review, we discuss and compare the most commonly utilized methods for purifying exosomes for post-isolation analyses. The exosomes derived from these approaches have been assessed for quantity and quality of specific RNA populations and specific marker proteins. These results suggest that, while each method purifies exosomal material, there are pros and cons of each and there are critical issues linked with centrifugation-based methods, including co-isolation of non-exosomal materials, damage to the vesicle's membrane structure and non-standardized parameters leading to qualitative and quantitative variability. The down-stream analyses of these resulting varying exosomes can yield misleading results and conclusions.", "title": "" }, { "docid": "c125a4a70a6b347456f2e22c0899e84e", "text": "Fenotropil and its structural analog--compound RGPU-95 to a greater extent reduce the severity of anxious and depressive behavior in male rats than in females. On expression of the anxiolytic compound RGPU-95 significantly exceeds Fenotropil, but inferior to Diazepam; of antidepressant activity--comparable to Melipramin and exceeds Fenotropil.", "title": "" }, { "docid": "7b548e0e1e02e3a3150d0fac19d6f6fd", "text": "The paper presents a new torque-controlled lightweight robot for medical procedures developed at the Institute of Robotics and Mechatronics of the German Aerospace Center. Based on the experiences in lightweight robotics and anthropomorphic robotic hands, a small robot arm with 7 axis and torque-controlled joints tailored to surgical procedures has been designed. With an optimized anthropomorphic kinematics, integrated multi-modal sensors and flexible robot control architecture, the first prototype KINEMEDIC and the new generation MIRO, enhanced for endoscopic surgery, can easily be adapted to a wide range of different medical procedures and scenarios by the use of specialized instruments and compiling workflows within the robot control. With the options of both, Cartesian impedance and position control, MIRO is suited for tele-manipulation, shared autonomy and completely autonomous procedures. 
This paper focuses on system and hardware design of the robot, supplemented with a brief description on new specific control methods for the MIRO robot.", "title": "" }, { "docid": "2f7e5807415398cb95f8f1ab36a0438f", "text": "We present a Convolutional Neural Network (CNN) regression based framework for 2-D/3-D medical image registration, which directly estimates the transformation parameters from image features extracted from the DRR and the X-ray images using learned hierarchical regressors. Our framework consists of learning and application stages. In the learning stage, CNN regressors are trained using supervised machine learning to reveal the correlation between the transformation parameters and the image features. In the application stage, CNN regressors are applied on extracted image features in a hierarchical manner to estimate the transformation parameters. Our experiment results demonstrate that the proposed method can achieve real-time 2-D/3-D registration with very high (i.e., sub-milliliter) accuracy.", "title": "" }, { "docid": "931969dc54170c203db23f55b45dfa38", "text": "The popularity and influence of reviews, make sites like Yelp ideal targets for malicious behaviors. We present Marco, a novel system that exploits the unique combination of social, spatial and temporal signals gleaned from Yelp, to detect venues whose ratings are impacted by fraudulent reviews. Marco increases the cost and complexity of attacks, by imposing a tradeoff on fraudsters, between their ability to impact venue ratings and their ability to remain undetected. We contribute a new dataset to the community, which consists of both ground truth and gold standard data. We show that Marco significantly outperforms state-of-the-art approaches, by achieving 94% accuracy in classifying reviews as fraudulent or genuine, and 95.8% accuracy in classifying venues as deceptive or legitimate. Marco successfully flagged 244 deceptive venues from our large dataset with 7,435 venues, 270,121 reviews and 195,417 users. Furthermore, we use Marco to evaluate the impact of Yelp events, organized for elite reviewers, on the hosting venues. We collect data from 149 Yelp elite events throughout the US. We show that two weeks after an event, twice as many hosting venues experience a significant rating boost rather than a negative impact. © 2015 Wiley Periodicals, Inc. Statistical Analysis and Data Mining 0: 000–000, 2015", "title": "" }, { "docid": "fddadfbc6c1b34a8ac14f8973f052da5", "text": "Abstract. Centroidal Voronoi tessellations are useful for subdividing a region in Euclidean space into Voronoi regions whose generators are also the centers of mass, with respect to a prescribed density function, of the regions. Their extensions to general spaces and sets are also available; for example, tessellations of surfaces in a Euclidean space may be considered. In this paper, a precise definition of such constrained centroidal Voronoi tessellations (CCVTs) is given and a number of their properties are derived, including their characterization as minimizers of an “energy.” Deterministic and probabilistic algorithms for the construction of CCVTs are presented and some analytical results for one of the algorithms are given. Computational examples are provided which serve to illustrate the high quality of CCVT point sets. 
Finally, CCVT point sets are applied to polynomial interpolation and numerical integration on the sphere.", "title": "" }, { "docid": "2839c318c9c2644edbd3e175bf9027b9", "text": "Multiple human tracking (MHT) is a fundamental task in many computer vision applications. Appearance-based approaches, primarily formulated on RGB data, are constrained and affected by problems arising from occlusions and/or illumination variations. In recent years, the arrival of cheap RGB-Depth (RGB-D) devices has led to many new approaches to MHT, and many of these integrate color and depth cues to improve each and every stage of the process. In this survey, we present the common processing pipeline of these methods and review their methodology based (a) on how they implement this pipeline and (b) on what role depth plays within each stage of it. We identify and introduce existing, publicly available, benchmark datasets and software resources that fuse color and depth data for MHT. Finally, we present a brief comparative evaluation of the performance of those works that have applied their methods to these datasets.", "title": "" }, { "docid": "eaca5794d84a96f8c8e7807cf83c3f00", "text": "Background Women represent 15% of practicing general surgeons. Gender-based discrimination has been implicated as discouraging women from surgery. We sought to determine women's perceptions of gender-based discrimination in the surgical training and working environment. Methods Following IRB approval, we fielded a pilot survey measuring perceptions and impact of gender-based discrimination in medical school, residency training, and surgical practice. It was sent electronically to 1,065 individual members of the Association of Women Surgeons. Results We received 334 responses from medical students, residents, and practicing physicians with a response rate of 31%. Eighty-seven percent experienced gender-based discrimination in medical school, 88% in residency, and 91% in practice. Perceived sources of gender-based discrimination included superiors, physician peers, clinical support staff, and patients, with 40% emanating from women and 60% from men. Conclusions The majority of responses indicated perceived gender-based discrimination during medical school, residency, and practice. Gender-based discrimination comes from both sexes and has a significant impact on women surgeons.", "title": "" }, { "docid": "b458269a0bc4a2d4bfc748ff07ffa753", "text": "Meta-analysis may be used to estimate an overall effect across a number of similar studies. A number of statistical techniques are currently used to combine individual study results. The simplest of these is based on a fixed effects model, which assumes the true effect is the same for all studies. A random effects model, however, allows the true effect to vary across studies, with the mean true effect the parameter of interest. We consider three methods currently used for estimation within the framework of a random effects model, and illustrate them by applying each method to a collection of six studies on the effect of aspirin after myocardial infarction. These methods are compared using estimated coverage probabilities of confidence intervals for the overall effect. 
The techniques considered all generally have coverages below the nominal level, and in particular it is shown that the commonly used DerSimonian and Laird method does not adequately reflect the error associated with parameter estimation, especially when the number of studies is small.", "title": "" }, { "docid": "9f0206aca2f3cccfb2ca1df629c32c7a", "text": "Recent work on interpretability in machine learning and AI has focused on the building of simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model it's important to remember Box's maxim that \"All models are wrong but some are useful.\" We focus on the distinction between these models and explanations in philosophy and sociology. These models can be understood as a \"do it yourself kit\" for explanations, allowing a practitioner to directly answer \"what if questions\" or generate contrastive explanations without external assistance. Although a valuable ability, giving these models as explanations appears more difficult than necessary, and other forms of explanation may not have the same trade-offs. We contrast the different schools of thought on what makes an explanation, and suggest that machine learning might benefit from viewing the problem more broadly.", "title": "" }, { "docid": "ab0d19b1cb4a0f5d283f67df35c304f4", "text": "OBJECTIVE\nWe compared temperament and character traits in children and adolescents with bipolar disorder (BP) and healthy control (HC) subjects.\n\n\nMETHOD\nSixty nine subjects (38 BP and 31 HC), 8-17 years old, were assessed with the Kiddie Schedule for Affective Disorders and Schizophrenia-Present and Lifetime. Temperament and character traits were measured with parent and child versions of the Junior Temperament and Character Inventory.\n\n\nRESULTS\nBP subjects scored higher on novelty seeking, harm avoidance, and fantasy subscales, and lower on reward dependence, persistence, self-directedness, and cooperativeness compared to HC (all p < 0.007), by child and parent reports. These findings were consistent in both children and adolescents. Higher parent-rated novelty seeking, lower self-directedness, and lower cooperativeness were associated with co-morbid attention-deficit/hyperactivity disorder (ADHD). Lower parent-rated reward dependence was associated with co-morbid conduct disorder, and higher child-rated persistence was associated with co-morbid anxiety.\n\n\nCONCLUSIONS\nThese findings support previous reports of differences in temperament in BP children and adolescents and may assist in a greater understating of BP children and adolescents beyond mood symptomatology.", "title": "" }, { "docid": "1898ce1b6cb3a195de2d261bfd8bd7ce", "text": "Unmanned aerial vehicles (UAV) are commonly used for missions in unknown environments, where an exact mathematical model of the environment may not be available. This paper provides a framework for using reinforcement learning to allow the UAV to navigate successfully in such environments. We conducted our simulation and real implementation to show how the UAVs can successfully learn to navigate through an unknown environment. Technical aspects regarding to applying reinforcement learning algorithm to a UAV system and UAV flight control were also addressed. 
This will enable continuing research using a UAV with learning capabilities in more important applications, such as wildfire monitoring, or search and rescue missions.", "title": "" }, { "docid": "a2f062482157efb491ca841cc68b7fd3", "text": "Coping with malware is getting more and more challenging, given their relentless growth in complexity and volume. One of the most common approaches in literature is using machine learning techniques, to automatically learn models and patterns behind such complexity, and to develop technologies to keep pace with malware evolution. This survey aims at providing an overview on the way machine learning has been used so far in the context of malware analysis in Windows environments, i.e. for the analysis of Portable Executables. We systematize surveyed papers according to their objectives (i.e., the expected output), what information about malware they specifically use (i.e., the features), and what machine learning techniques they employ (i.e., what algorithm is used to process the input and produce the output). We also outline a number of issues and challenges, including those concerning the used datasets, and identify the main current topical trends and how to possibly advance them. In particular, we introduce the novel concept of malware analysis economics, regarding the study of existing trade-offs among key metrics, such as analysis accuracy and economical costs.", "title": "" }, { "docid": "9f5f3423e062721e79c20db9710a986d", "text": "Reliable traffic light detection and classification is crucial for automated driving in urban environments. Currently, there are no systems that can reliably perceive traffic lights in real-time, without map-based information, and in sufficient distances needed for smooth urban driving. We propose a complete system consisting of a traffic light detector, tracker, and classifier based on deep learning, stereo vision, and vehicle odometry which perceives traffic lights in real-time. Within the scope of this work, we present three major contributions. The first is an accurately labeled traffic light dataset of 5000 images for training and a video sequence of 8334 frames for evaluation. The dataset is published as the Bosch Small Traffic Lights Dataset and uses our results as baseline. It is currently the largest publicly available labeled traffic light dataset and includes labels down to the size of only 1 pixel in width. The second contribution is a traffic light detector which runs at 10 frames per second on 1280×720 images. When selecting the confidence threshold that yields equal error rate, we are able to detect traffic lights as small as 4 pixels in width. The third contribution is a traffic light tracker which uses stereo vision and vehicle odometry to compute the motion estimate of traffic lights and a neural network to correct the aforementioned motion estimate.", "title": "" }, { "docid": "b6cc88bc123a081d580c9430c0ad0207", "text": "This paper presents a comparative survey of research activities and emerging technologies of solid-state fault current limiters for power distribution systems.", "title": "" }, { "docid": "9653346c41cab4e22c9987586bb155c1", "text": "The focus of the great majority of climate change impact studies is on changes in mean climate. In terms of climate model output, these changes are more robust than changes in climate variability. By concentrating on changes in climate means, the full impacts of climate change on biological and human systems are probably being seriously underestimated. 
Here, we briefly review the possible impacts of changes in climate variability and the frequency of extreme events on biological and food systems, with a focus on the developing world. We present new analysis that tentatively links increases in climate variability with increasing food insecurity in the future. We consider the ways in which people deal with climate variability and extremes and how they may adapt in the future. Key knowledge and data gaps are highlighted. These include the timing and interactions of different climatic stresses on plant growth and development, particularly at higher temperatures, and the impacts on crops, livestock and farming systems of changes in climate variability and extreme events on pest-weed-disease complexes. We highlight the need to reframe research questions in such a way that they can provide decision makers throughout the food system with actionable answers, and the need for investment in climate and environmental monitoring. Improved understanding of the full range of impacts of climate change on biological and food systems is a critical step in being able to address effectively the effects of climate variability and extreme events on human vulnerability and food security, particularly in agriculturally based developing countries facing the challenge of having to feed rapidly growing populations in the coming decades.", "title": "" }, { "docid": "5536f306c3633874299be57a19e35c01", "text": "0957-4174/$ see front matter 2013 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.eswa.2013.04.023 ⇑ Corresponding author. Tel.: +55 8197885665. E-mail addresses: rflm@cin.ufpe.br (Rafael Ferreira), lscabral@gmail.com (L. de Souza Cabral), rdl@cin.ufpe.br (R.D. Lins), gfps.cin@gmail.com (G. Pereira e Silva), fred@cin.ufpe.br (F. Freitas), gdcc@cin.ufpe.br (G.D.C. Cavalcanti), rjlima01@gmail. com (R. Lima), steven.simske@hp.com (S.J. Simske), luciano.favaro@hp.com (L. Favaro). Rafael Ferreira a,⇑, Luciano de Souza Cabral , Rafael Dueire Lins , Gabriel Pereira e Silva , Fred Freitas , George D.C. Cavalcanti , Rinaldo Lima , Steven J. Simske , Luciano Favaro c", "title": "" } ]
scidocsrr
ec4cfc49f33587433f421a7dabc2003d
A Critical Evaluation of Website Fingerprinting Attacks
[ { "docid": "1272ee56c591f882c07817686621c0f8", "text": "Low-latency anonymization networks such as Tor and JAP claim to hide the recipient and the content of communications from a local observer, i.e., an entity that can eavesdrop the traffic between the user and the first anonymization node. Especially users in totalitarian regimes strongly depend on such networks to freely communicate. For these people, anonymity is particularly important and an analysis of the anonymization methods against various attacks is necessary to ensure adequate protection. In this paper we show that anonymity in Tor and JAP is not as strong as expected so far and cannot resist website fingerprinting attacks under certain circumstances. We first define features for website fingerprinting solely based on volume, time, and direction of the traffic. As a result, the subsequent classification becomes much easier. We apply support vector machines with the introduced features. We are able to improve recognition results of existing works on a given state-of-the-art dataset in Tor from 3% to 55% and in JAP from 20% to 80%. The datasets assume a closed-world with 775 websites only. In a next step, we transfer our findings to a more complex and realistic open-world scenario, i.e., recognition of several websites in a set of thousands of random unknown websites. To the best of our knowledge, this work is the first successful attack in the open-world scenario. We achieve a surprisingly high true positive rate of up to 73% for a false positive rate of 0.05%. Finally, we show preliminary results of a proof-of-concept implementation that applies camouflage as a countermeasure to hamper the fingerprinting attack. For JAP, the detection rate decreases from 80% to 4% and for Tor it drops from 55% to about 3%.", "title": "" } ]
[ { "docid": "cad72a5b8831796d2cef5bd256b821b1", "text": "This paper presents a linear chirp generator for synthesizing ultra-wideband signals for use in an FM-CW radar being used for airborne snow thickness measurements. Ultra-wideband chirp generators with rigorous linearity requirements are needed for long-range FMCW radars. The chirp generator is composed of a direct digital synthesizer and a frequency multiplier chain. The implementation approach combines recently available high-speed digital, mixed signal, and microwave components along with a frequency pre-distortion technique to synthesize a 6-GHz chirp signal over 240 μs with a <0.02 MHz/μs deviation from linearity.", "title": "" }, { "docid": "3129b636e3739281ba59721765eeccb9", "text": "Despite the rapid adoption of Facebook as a means of photo sharing, minimal research has been conducted to understand user gratification behind this activity. In order to address this gap, the current study examines users' gratifications in sharing photos on Facebook by applying Uses and Gratification (U&G) theory. An online survey completed by 368 respondents identified six different gratifications, namely, affection, attention seeking, disclosure, habit, information sharing, and social influence, behind sharing digital photos on Facebook. Some of the study's prominent findings were: age was in positive correlation with disclosure and social influence gratifications; gender differences were identified among habit and disclosure gratifications; number of photos shared was negatively correlated with habit and information sharing gratifications. The study's implications can be utilized to refine existing and develop new features and services bridging digital photos and social networking services.", "title": "" }, { "docid": "861e7e5b518681d8f09de17feb637bb7", "text": "Innovation starts with people, making the human capital within the workforce decisive. In a fast-changing knowledge economy, 21st-century digital skills drive organizations' competitiveness and innovation capacity. Although such skills are seen as crucial, the digital aspect integrated with 21st-century skills is not yet sufficiently defined. The main objectives of this study were to (1) examine the relation between 21st-century skills and digital skills; and (2) provide a framework of 21st-century digital skills with conceptual dimensions and key operational components aimed at the knowledge worker. A systematic literature review was conducted to synthesize the relevant academic literature concerned with 21st-century digital skills. In total, 1592 different articles were screened from which 75 articles met the predefined inclusion criteria. The results show that 21st-century skills are broader than digital skills – the list of mentioned skills is far more extensive. In addition, in contrast to digital skills, 21st-century skills are not necessarily underpinned by ICT. Furthermore, we identified seven core skills: technical, information management, communication, collaboration, creativity, critical thinking and problem solving. Five contextual skills were also identified: ethical awareness, cultural awareness, flexibility, self-direction and lifelong learning. © 2017 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "fdbe390730b949ccaa060a84257af2f1", "text": "An increase in the prevalence of chronic disease has led to a rise in the demand for primary healthcare services in many developed countries.
Healthcare technology tools may provide the leverage to alleviate the shortage of primary care providers. Here we describe the development and usage of an automated healthcare kiosk for the management of patients with stable chronic disease in the primary care setting. One-hundred patients with stable chronic disease were recruited from a primary care clinic. They used a kiosk in place of doctors’ consultations for two subsequent follow-up visits. Patient and physician satisfaction with kiosk usage were measured on a Likert scale. Kiosk blood pressure measurements and triage decisions were validated and optimized. Patients were assessed if they could use the kiosk independently. Patients and physicians were satisfied with all areas of kiosk usage. Kiosk triage decisions were accurate by the 2nd month of the study. Blood pressure measurements by the kiosk were equivalent to that taken by a nurse (p = 0.30, 0.14). Independent kiosk usage depended on patients’ language skills and educational levels. Healthcare kiosks represent an alternative way to manage patients with stable chronic disease. They have the potential to replace physician visits and improve access to primary healthcare. Patients welcome the use of healthcare technology tools, including those with limited literacy and education. Optimization of environmental and patient factors may be required prior to the implementation of kiosk-based technology in the healthcare setting.", "title": "" }, { "docid": "9ffd665d6fe680fc4e7b9e57df48510c", "text": "BACKGROUND\nIn light of the increasing rate of dengue infections throughout the world despite vector-control measures, several dengue vaccine candidates are in development.\n\n\nMETHODS\nIn a phase 3 efficacy trial of a tetravalent dengue vaccine in five Latin American countries where dengue is endemic, we randomly assigned healthy children between the ages of 9 and 16 years in a 2:1 ratio to receive three injections of recombinant, live, attenuated, tetravalent dengue vaccine (CYD-TDV) or placebo at months 0, 6, and 12 under blinded conditions. The children were then followed for 25 months. The primary outcome was vaccine efficacy against symptomatic, virologically confirmed dengue (VCD), regardless of disease severity or serotype, occurring more than 28 days after the third injection.\n\n\nRESULTS\nA total of 20,869 healthy children received either vaccine or placebo. At baseline, 79.4% of an immunogenicity subgroup of 1944 children had seropositive status for one or more dengue serotypes. In the per-protocol population, there were 176 VCD cases (with 11,793 person-years at risk) in the vaccine group and 221 VCD cases (with 5809 person-years at risk) in the control group, for a vaccine efficacy of 60.8% (95% confidence interval [CI], 52.0 to 68.0). In the intention-to-treat population (those who received at least one injection), vaccine efficacy was 64.7% (95% CI, 58.7 to 69.8). Serotype-specific vaccine efficacy was 50.3% for serotype 1, 42.3% for serotype 2, 74.0% for serotype 3, and 77.7% for serotype 4. Among the severe VCD cases, 1 of 12 was in the vaccine group, for an intention-to-treat vaccine efficacy of 95.5%. Vaccine efficacy against hospitalization for dengue was 80.3%. 
The safety profile for the CYD-TDV vaccine was similar to that for placebo, with no marked difference in rates of adverse events.\n\n\nCONCLUSIONS\nThe CYD-TDV dengue vaccine was efficacious against VCD and severe VCD and led to fewer hospitalizations for VCD in five Latin American countries where dengue is endemic. (Funded by Sanofi Pasteur; ClinicalTrials.gov number, NCT01374516.).", "title": "" }, { "docid": "d45c7f39c315bf5e8eab3052e75354bb", "text": "Predicting the future in real-world settings, particularly from raw sensory observations such as images, is exceptionally challenging. Real-world events can be stochastic and unpredictable, and the high dimensionality and complexity of natural images require the predictive model to build an intricate understanding of the natural world. Many existing methods tackle this problem by making simplifying assumptions about the environment. One common assumption is that the outcome is deterministic and there is only one plausible future. This can lead to low-quality predictions in real-world settings with stochastic dynamics. In this paper, we develop a stochastic variational video prediction (SV2P) method that predicts a different possible future for each sample of its latent variables. To the best of our knowledge, our model is the first to provide effective stochastic multi-frame prediction for real-world videos. We demonstrate the capability of the proposed method in predicting detailed future frames of videos on multiple real-world datasets, both action-free and action-conditioned. We find that our proposed method produces substantially improved video predictions when compared to the same model without stochasticity, and to other stochastic video prediction methods. Our SV2P implementation will be open sourced upon publication.", "title": "" }, { "docid": "c357b9646e31e2d881c0832983593516", "text": "The history of digital image compositing—other than simple digital implementation of known film art—is essentially the history of the alpha channel. Distinctions are drawn between digital printing and digital compositing, between matte creation and matte usage, and between (binary) masking and (subtle) matting. The history of the integral alpha channel and premultiplied alpha ideas are presented and their importance in the development of digital compositing in its current modern form is made clear. Basic Definitions Digital compositing is often confused with several related technologies. Here we distinguish compositing from printing and matte creation—eg, blue-screen matting. Printing v Compositing Digital film printing is the transfer, under digital computer control, of an image stored in digital form to standard chemical, analog movie film. It requires a sophisticated understanding of film characteristics, light source characteristics, precision film movements, film sizes, filter characteristics, precision scanning devices, and digital computer control. We had to solve all these for the Lucasfilm laser-based digital film printer—that happened to be a digital film input scanner too. My colleague David DiFrancesco was honored by the Academy of Motion Picture Art and Sciences last year with a technical award for his achievement on the scanning side at Lucasfilm (along with Gary Starkweather). Also honored was Gary Demos for his CRT-based digital film scanner (along with Dan Cameron). Digital printing is the generalization of this technology to other media, such as video and paper. 
Digital film compositing is the combining of two or more strips of film—in digital form—to create a resulting strip of film—in digital form—that is the composite of the components. For example, several spacecraft may have been filmed, one per film strip in its separate motion, and a starfield may have also been filmed. Then a digital film compositing step is performed to combine the separate spacecrafts over the starfield. The important point is that none of the technology mentioned above for digital film printing is involved in the digital compositing process. The separate spacecraft elements are digitally represented, and the starfield is digitally represented, so the composite is a strictly digital computation. Digital compositing is the generalization of this technology to other media. This only means that the digital images being combined are represented in resolutions appropriate to their intended final output medium; the compositing techniques involved are the same regardless of output medium being, after all, digital computations. No knowledge of film characteristics, light sources characteristics, film movements, etc. is required for digital compositing. In short, the technology of digital film printing is completely separate from the technology of digital film compositing. The technology of digital film scanning is required, perhaps, to get the spacecrafts and starfield into digital form, and that of digital film printing is required to write the composite of these elements out to film, but the composite itself is a computation, not a physico-chemical process. This argument holds regardless of input or output media. In fact, from hereon I will refer to film as my example, it being clear that the argument generalizes to other media. Matte Creation v Matte Usage The general distinction drawn here is between the technology of pulling mattes, or matte creation, and that of compositing, or matte usage. To perform a film composite of, say a spacecraft, over, say a starfield, one must know where on an output film frame to write the foreground spacecraft and where to write the background starfield—that is, where to expose the foreground element to the unexposed film frame and where to expose the background element. We will ignore for the moment, for the purpose of clarity, the problem of partial transparencies of the foreground object that allow the background object to show through partially. In classic film technology, predating the computer by decades ([Beyer64], [Fielding72], [Vlahos80]), the required spatial information is provided by a (traveling) matte, another piece of film that is transparent where the spacecraft, for example, exists in the frame and opaque elsewhere. This can be done with monochrome film. It is also easy to generate the complement of this matte, sometimes called the holdout matte, by simply exposing the matte film strip to an unexposed strip of monochrome film. So the holdout matte film strip is placed up against the background film strip, in frame by frame register, called a bipack configuration of film, and exposed to a strip of unexposed color film. The starfield, for example, gets exposed to this receiving strip where the holdout matte does not hold out—that is, where the holdout matte is transparent. Then the same strip of film is re-exposed to a bipack consisting of the matte and the foreground element.
This time the spacecraft, for example, gets exposed exactly where the starfield was not exposed. Digital film compositing technology is, in its simplest implementation, the digital version of this process, where each strip of film is replaced with a digital equivalent, and the composite is done with a digital computation. Once the foreground and background elements are in digital form and the matte is in digital form, then digital film compositing is a computation, not a physico-chemical process. As we shall see, the computer has caused several fundamentally new ideas to be added to the compositor’s arsenal that are not simply simulations of known analog art. The question becomes: Where does the matte come from? There are several classic (pre-computer) answers to this question. One set of techniques (at least one of which, the sodium vapor technique, was invented by Petro Vlahos [Vlahos58]) causes the generation of the matte strip of film simultaneously with the foreground element strip of film. So this technique simultaneously generates two strips of film for each foreground element. Then optical techniques are used, as described above, to form the composite. Digital technology has nothing new to contribute here; it simply emulates the analog technique. Another technique called blue-screen matting provides the matte strip of film after the fact, so to speak. Blue-screen matting (or more generally, constant color matting, since blue is not required) was also invented by Petro Vlahos [Vlahos64]. It requires that a foreground element be filmed against a constant-color, often bright ultramarine blue, background. Then with a tricky set of optical and film techniques that don’t need to concern us here, a matte is generated that is transparent where the foreground film strip is the special blue color and opaque elsewhere, or the complement of this. There are digital simulations of this technique that are complicated but involve nothing more than a digital computer to accomplish. The art of generating a matte when one is not provided is often called, in filmmaking circles, pulling a matte. It is an art, requiring experts to accomplish [1]. I will generalize this concept to all ways of producing a matte, and term it matte creation. The important point is that matte creation is a technology separate from that of compositing, which is a technology that assumes a matte already exists. In short, the technology of matte creation is completely separate from the technology of digital film compositing. Petro Vlahos has been awarded by the Academy of Motion Picture Arts and Sciences for his inventions of this technology, a lifetime achievement award in fact. The digital computer can be used to simulate what he has done and for relatively minor improvements. At Lucasfilm, my colleague Tom Porter and I implemented digital matte creation techniques and improved them, but do not consider this part of our compositing technology. It is part of our matte creation technology. It is time now to return to the discussion of transparency mentioned earlier. One of the hardest things to accomplish in matte creation technology is the representation of partial transparency in the matte. Transparencies are important for foreground elements such as glasses of water, windows, hair, halos, filmy clothes, motion blurred objects, etc.
I will not go into the details of why this is difficult or how it is solved, because that is irrelevant to the arguments here. The important points are (1) partial transparency is fundamental to convincing composites, and (2) representing transparencies in a matte is part of matte creation technology, not the compositing technology, which just uses the result. [1] I have proved, in fact, in [Smith82b] that blue-screen matting is an underspecified problem in general and therefore requires a human in the loop.", "title": "" }, { "docid": "45f895841ad08bd4473025385e57073a", "text": "Robust brain magnetic resonance (MR) segmentation algorithms are critical to analyze tissues and diagnose tumor and edema in a quantitative way. In this study, we present a new tissue segmentation algorithm that segments brain MR images into tumor, edema, white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF). The detection of the healthy tissues is performed simultaneously with the diseased tissues because examining the change caused by the spread of tumor and edema on healthy tissues is very important for treatment planning. We used T1, T2, and FLAIR MR images of 20 subjects suffering from glial tumor. We developed an algorithm for stripping the skull before the segmentation process. The segmentation is performed using self-organizing map (SOM) that is trained with unsupervised learning algorithm and fine-tuned with learning vector quantization (LVQ). Unlike other studies, we developed an algorithm for clustering the SOM instead of using an additional network. Input feature vector is constructed with the features obtained from stationary wavelet transform (SWT) coefficients. The results showed that average dice similarity indexes are 91% for WM, 87% for GM, 96% for CSF, 61% for tumor, and 77% for edema.", "title": "" }, { "docid": "90cbb02beb09695320d7ab72d709b70e", "text": "Domain adaptation learning aims to solve the classification problems of unlabeled target domain by using rich labeled samples in source domain, but there are three main problems: negative transfer, under adaptation and under fitting. Aiming at these problems, a domain adaptation network based on hypergraph regularized denoising autoencoder (DAHDA) is proposed in this paper. To better fit the data distribution, the network is built with denoising autoencoder which can extract more robust feature representation. In the last feature and classification layers, the marginal and conditional distribution matching terms between domains are obtained via maximum mean discrepancy measurement to solve the under adaptation problem. To avoid negative transfer, the hypergraph regularization term is introduced to explore the high-order relationships among data. The classification performance of the model can be improved by preserving the statistical property and geometric structure simultaneously. Experimental results of 16 cross-domain transfer tasks verify that DAHDA outperforms other state-of-the-art methods.", "title": "" }, { "docid": "1a7b0df571b07927141a2e61314054ae", "text": "We propose a new method of power control for interference limited wireless networks with Rayleigh fading of both the desired and interference signals. Our method explicitly takes into account the statistical variation of both the received signal and interference power, and optimally allocates power subject to constraints on the probability of fading induced outage for each transmitter/receiver pair.
We establish several results for this type of problem. For the case in which the only constraints are those on the outage probabilities, we give a fast iterative method for finding the optimal power allocation. We establish tight bounds that relate the outage probability caused by channel fading to the signal-to-interference margin calculated when the statistical variation of the signal and intereference powers are ignored. This allows us to show that well-known methods for allocating power, based on Perron-Frobenius eigenvalue theory, can be used to determine power allocations that are provably close to achieving optimal (i.e., minimal) outage probability. In the most general case, which includes bounds on powers and other constraints, we show that the power control problem can be posed as a geometric program, which is a special type of optimization problem that can be transformed to a nonlinear convex optimization by a change of variables, and therefore solved globally and efficiently by recently developed interior-point methods.", "title": "" }, { "docid": "4f98e0a0d11796abcf04a448701b0444", "text": "BACKGROUND\nThe Alzheimer's Disease Assessment Scale (ADAS) was designed as a rating scale for the severity of dysfunction in the cognitive and non-cognitive behaviours that are characteristic of persons with Alzheimer's disease. Its subscale, the ADAS-cog, is a cognitive testing instrument most widely used to measure the impact of the disease. However, the ADAS-cog takes more than 45 min to administer and requires a qualified clinical psychologist as the rater. A more comprehensive rating battery is therefore required. In the present study, we developed a computerized test battery named the Touch Panel-type Dementia Assessment Scale (TDAS), which was intended to substitute for the ADAS-Cog, and was specifically designed to rate cognitive dysfunction quickly and without the need of a specialist rater.\n\n\nMETHODS\nThe hardware for the TDAS comprises a 14-inch touch panel display and computer devices built into one case. The TDAS runs on Windows OS and was bundled with a custom program made with reference to the ADAS-cog. Participants in the present study were 34 patients with Alzheimer's disease. Each participant was administered the ADAS-cog and the TDAS. The test scores for each patient were compared to determine whether the severity of cognitive dysfunction of the patients could be rated equally as well by both tests.\n\n\nRESULTS\nPearson's correlation coefficient showed a significant correlation between the total scores (r= 0.69, P < 0.01) on the two scales for each patient. The Kendall coefficients of concordance obtained for the three corresponding pairs of tasks (word recognition, orientation, and naming object and fingers) showed the three TDAS tasks can rate symptoms of cognitive decline equally as well as the corresponding items on the ADAS-cog.\n\n\nCONCLUSIONS\nThe TDAS appears to be a sensitive and comprehensive assessment battery for rating the symptoms of Alzheimer's disease, and can be substituted for the ADAS-cog.", "title": "" }, { "docid": "be35c342291d4805d2a5333e31ee26d6", "text": "References • We study efficient exploration in reinforcement learning. • Most provably-efficient learning algorithms introduce optimism about poorly understood states and actions. • Motivated by potential advantages relative to optimistic algorithms, we study an alternative approach: posterior sampling for reinforcement learning (PSRL). 
• This is the extension of the Thompson sampling algorithm for multi-armed bandit problems to reinforcement learning. • We establish the first regret bounds for this algorithm.  Conceptually simple, separates algorithm from analysis: • PSRL selects policies according to the probability they are optimal without need for explicit construction of confidence sets. • UCRL2 bounds error in each s, a separately, which allows for worst-case mis-estimation to occur simultaneously in every s, a . • We believe this will make PSRL more statistically efficient.", "title": "" }, { "docid": "4350da9c0b2debf7ff9b117a9d9d3dbb", "text": "Purpose – The aim of this paper is to consider some of the issues in light of the application of Big Data in the domain of border security and immigration management. Investment in the technologies of borders and their securitisation continues to be a focal point for many governments across the globe. This paper is concerned with a particular example of such technologies, namely, “Big Data” analytics. In the past two years, the technology of Big Data has gained a remarkable popularity within a variety of sectors, ranging from business and government to scientific and research fields. While Big Data techniques are often extolled as the next frontier for innovation and productivity, they are also raising many ethical issues. Design/methodology/approach – The author draws on the example of the new Big Data solution recently developed by IBM for the Australian Customs and Border Protection Service. The system, which relies on data collected from Passenger Name Records, aims to facilitate and automate mechanisms of profiling enable the identification of “high-risk” travellers. It is argued that the use of such Big Data techniques risks augmenting the function and intensity of borders. Findings – The main concerns addressed here revolve around three key elements, namely, the problem of categorisation, the projective and predictive nature of Big Data techniques and their approach to the future and the implications of Big Data on understandings and practices of identity. Originality/value – By exploring these issues, the paper aims to contribute to the debates on the impact of information and communications technology-based surveillance in border management.", "title": "" }, { "docid": "c839542db0e80ce253a170a386d91bab", "text": "Description\nThe American College of Physicians (ACP) developed this guideline to present the evidence and provide clinical recommendations on the management of gout.\n\n\nMethods\nUsing the ACP grading system, the committee based these recommendations on a systematic review of randomized, controlled trials; systematic reviews; and large observational studies published between January 2010 and March 2016. Clinical outcomes evaluated included pain, joint swelling and tenderness, activities of daily living, patient global assessment, recurrence, intermediate outcomes of serum urate levels, and harms.\n\n\nTarget Audience and Patient Population\nThe target audience for this guideline includes all clinicians, and the target patient population includes adults with acute or recurrent gout.\n\n\nRecommendation 1\nACP recommends that clinicians choose corticosteroids, nonsteroidal anti-inflammatory drugs (NSAIDs), or colchicine to treat patients with acute gout. (Grade: strong recommendation, high-quality evidence).\n\n\nRecommendation 2\nACP recommends that clinicians use low-dose colchicine when using colchicine to treat acute gout. 
(Grade: strong recommendation, moderate-quality evidence).\n\n\nRecommendation 3\nACP recommends against initiating long-term urate-lowering therapy in most patients after a first gout attack or in patients with infrequent attacks. (Grade: strong recommendation, moderate-quality evidence).\n\n\nRecommendation 4\nACP recommends that clinicians discuss benefits, harms, costs, and individual preferences with patients before initiating urate-lowering therapy, including concomitant prophylaxis, in patients with recurrent gout attacks. (Grade: strong recommendation, moderate-quality evidence).", "title": "" }, { "docid": "4d6540d6a200689721063bb7a92b71c3", "text": "The recently-developed statistical method known as the \"bootstrap\" can be used to place confidence intervals on phylogenies. It involves resampling points from one's own data, with replacement, to create a series of bootstrap samples of the same size as the original data. Each of these is analyzed, and the variation among the resulting estimates taken to indicate the size of the error involved in making estimates from the original data. In the case of phylogenies, it is argued that the proper method of resampling is to keep all of the original species while sampling characters with replacement, under the assumption that the characters have been independently drawn by the systematist and have evolved independently. Majority-rule consensus trees can be used to construct a phylogeny showing all of the inferred monophyletic groups that occurred in a majority of the bootstrap samples. If a group shows up 95% of the time or more, the evidence for it is taken to be statistically significant. Existing computer programs can be used to analyze different bootstrap samples by using weights on the characters, the weight of a character being how many times it was drawn in bootstrap sampling. When all characters are perfectly compatible, as envisioned by Hennig, bootstrap sampling becomes unnecessary; the bootstrap method would show significant evidence for a group if it is defined by three or more characters.", "title": "" }, { "docid": "085db8b346c8d7875bccca5d4052192f", "text": "BACKGROUND\nTopical antipsoriatics are recommended first-line treatment of psoriasis, but rates of adherence are low. Patient support by use of electronic health (eHealth) services is suggested to improve medical adherence.\n\n\nOBJECTIVE\nTo review randomised controlled trials (RCTs) testing eHealth interventions designed to improve adherence to topical antipsoriatics and to review applications for smartphones (apps) incorporating the word psoriasis.\n\n\nMATERIAL AND METHODS\nLiterature review: Medline, Embase, Cochrane, PsycINFO and Web of Science were searched using search terms for eHealth, psoriasis and topical antipsoriatics. General analysis of apps: The operating systems (OS) for smartphones, iOS, Google Play, Microsoft Store, Symbian OS and Blackberry OS were searched for apps containing the word psoriasis.\n\n\nRESULTS\nLiterature review: Only one RCT was included, reporting on psoriasis patients' Internet reporting their status of psoriasis over a 12-month period. The rate of adherence was measured by Medication Event Monitoring System (MEMS®). An improvement in medical adherence and reduction of severity of psoriasis were reported. 
General analysis of apps: A total 184 apps contained the word psoriasis.\n\n\nCONCLUSION\nThere is a critical need for high-quality RCTs testing if the ubiquitous eHealth technologies, for example, some of the numerous apps, can improve psoriasis patients' rates of adherence to topical antipsoriatics.", "title": "" }, { "docid": "cfcc5b98ebebe08475d68667aacaf46f", "text": "Sequence alignment is an important task in bioinformatics which involves typical database search where data is in the form of DNA, RNA or protein sequence. For alignment various methods have been devised starting from pairwise alignment to multiple sequence alignment (MSA). To perform multiple sequence alignment various methods exists like progressive, iterative and concepts of dynamic programming in which we use Needleman Wunsch and Smith Waterman algorithms. This paper discusses various sequence alignment methods including their advantages and disadvantages. The alignment results of DNA sequence of chimpanzee and gorilla are shown.", "title": "" }, { "docid": "8c8120beecf9086f3567083f89e9dfa2", "text": "This thesis studies the problem of product name recognition from short product descriptions. This is an important problem especially with the increasing use of ERP (Enterprise Resource Planning) software at the core of modern business management systems, where the information of business transactions is stored in unstructured data stores. A solution to the problem of product name recognition is especially useful for the intermediate businesses as they are interested in finding potential matches between the items in product catalogs (produced by manufacturers or another intermediate business) and items in the product requests (given by the end user or another intermediate business). In this context the problem of product name recognition is specifically challenging because product descriptions are typically short, ungrammatical, incomplete, abbreviated and multilingual. In this thesis we investigate the application of supervised machine-learning techniques and gazetteer-based techniques to our problem. To approach the problem, we define it as a classification problem where the tokens of product descriptions are classified into I, O and B classes according to the standard IOB tagging scheme. Next we investigate and compare the performance of a set of hybrid solutions that combine machine learning and gazetteer-based approaches. We study a solution space that uses four learning models: linear and non-linear SVC, Random Forest, and AdaBoost. For each solution, we use the same set of features. We divide the features into four categories: token-level features, documentlevel features, gazetteer-based features and frequency-based features. Moreover, we use automatic feature selection to reduce the dimensionality of data; that consequently improves the training efficiency and avoids over-fitting. To be able to evaluate the solutions, we develop a machine learning framework that takes as its inputs a list of predefined solutions (i.e. our solution space) and a preprocessed labeled dataset (i.e. a feature vector X, and a corresponding class label vector Y). It automatically selects the optimal number of most relevant features, optimizes the hyper-parameters of the learning models, trains the learning models, and evaluates the solution set. 
We believe that our automated machine learning framework can effectively be used as an AutoML framework that automates most of the decisions that have to be made in the design process of a machine learning", "title": "" }, { "docid": "3c0b072b1b2c5082552aff2379bbeeee", "text": "Big Data is a recent research style which brings up challenges in decision making process. The size of the dataset turn into tremendously big, the process of extracting valuable facts by analyzing these data also has become tedious. To solve this problem of information extraction with Big Data, parallel programming models can be used. Parallel Programming model achieves information extraction by partitioning the huge data into smaller chunks. MapReduce is one of the parallel programming models which works well with Hadoop Distributed File System (HDFS) that can be used to partition the data in a more efficient and effective way. In MapReduce, once the data is partitioned based on the <key, value> pair, it is ready for data analytics. Time Series data play an important role in Big Data Analytics where Time Series analysis can be performed with many machine learning algorithms as well as traditional algorithmic concepts such as regression, exponential smoothing, moving average, classification, clustering and model-based recommendation. For Big Data, these algorithms can be used with MapReduce programming model on Hadoop clusters by translating their data analytics logic to the MapReduce job which is to be run over Hadoop clusters. But Time Series data are sequential in nature so that the partitioning of Time Series data must be carefully done to retain its prediction accuracy. In this paper, a novel parallel approach to forecast Time Series data with Holt-Winters model (PAFHW) is proposed and the proposed approach PAFHW is enhanced by combining K-means clustering for forecasting the Time Series data in distributed environment.", "title": "" } ]
scidocsrr
87fa281fc1b05466979cc4b3577e5e96
From Shapeshifter to Lava Monster: Gender Stereotypes in Disney’s Moana
[ { "docid": "6f1d7e2faff928c80898bfbf05ac0669", "text": "This study examined level of engagement with Disney Princess media/products as it relates to gender-stereotypical behavior, body esteem (i.e. body image), and prosocial behavior during early childhood. Participants consisted of 198 children (Mage  = 58 months), who were tested at two time points (approximately 1 year apart). Data consisted of parent and teacher reports, and child observations in a toy preference task. Longitudinal results revealed that Disney Princess engagement was associated with more female gender-stereotypical behavior 1 year later, even after controlling for initial levels of gender-stereotypical behavior. Parental mediation strengthened associations between princess engagement and adherence to female gender-stereotypical behavior for both girls and boys, and for body esteem and prosocial behavior for boys only.", "title": "" } ]
[ { "docid": "2b169a32d20bb4af5527be41837f17f7", "text": "This paper introduces a two-switch flyback-forward pulse-width modulated (PWM) DC-DC converter along with the steady-state analysis, simplified design procedure, and experimental verification. The proposed converter topology is the result of integrating the secondary sides of the two-switch flyback and the two-switch forward converters in an anti-parallel connection, while retaining the two-main switches and the clamping diodes on a single winding primary side. The hybrid two-switch flyback-forward converter shares the semiconductor devices on the primary side and the magnetic component on the secondary side resulting in a low volume DC-DC converter with reduced switch voltage stress. Simulation and experimental results are given for a 10-V/30-W, 100 kHz laboratory prototype to verify the theoretical analysis.", "title": "" }, { "docid": "aae7c62819cb70e21914486ade94a762", "text": "From failure experience on power transformers very often it was suspected that inrush currents, occurring when energizing unloaded transformers, were the reason for damage. In this paper it was investigated how mechanical forces within the transformer coils build up under inrush compared to those occurring at short circuit. 2D and 3D computer modeling for a real 268 MVA, 525/17.75 kV three-legged step up transformer were employed. The results show that inrush current peaks of 70% of the rated short circuit current cause local forces in the same order of magnitude as those at short circuit. The resulting force summed up over the high voltage coil is even three times higher. Although inrush currents are normally smaller, the forces can have similar amplitudes as those at short circuit, with longer exposure time, however. Therefore, care has to be taken to avoid such high inrush currents. Today controlled switching offers an elegant and practical solution.", "title": "" }, { "docid": "0fcefddfe877b804095838eb9de9581d", "text": "This paper examines the torque ripple and cogging torque variation in surface-mounted permanent-magnet synchronous motors (PMSMs) with skewed rotor. The effect of slot/pole combinations and magnet shapes on the magnitude and harmonic content of torque waveforms in a PMSM drive has been studied. Finite element analysis and experimental results show that the skewing with steps does not necessarily reduce the torque ripple but may cause it to increase for certain magnet designs and configurations. The electromagnetic torque waveforms, including cogging torque, have been analyzed for four different PMSM configurations having the same envelop dimensions and output requirements.", "title": "" }, { "docid": "857e9430ebc5cf6aad2737a0ce10941e", "text": "Despite a long tradition of effectiveness in laboratory tests, normative messages have had mixed success in changing behavior in field contexts, with some studies showing boomerang effects. To test a theoretical account of this inconsistency, we conducted a field experiment in which normative messages were used to promote household energy conservation. As predicted, a descriptive normative message detailing average neighborhood usage produced either desirable energy savings or the undesirable boomerang effect, depending on whether households were already consuming at a low or high rate. Also as predicted, adding an injunctive message (conveying social approval or disapproval) eliminated the boomerang effect. 
The results offer an explanation for the mixed success of persuasive appeals based on social norms and suggest how such appeals should be properly crafted.", "title": "" }, { "docid": "95d1a35068e7de3293f8029e8b8694f9", "text": "Botnet is one of the major threats on the Internet for committing cybercrimes, such as DDoS attacks, stealing sensitive information, spreading spams, etc. It is a challenging issue to detect modern botnets that are continuously improving for evading detection. In this paper, we propose a machine learning based botnet detection system that is shown to be effective in identifying P2P botnets. Our approach extracts convolutional version of effective flow-based features, and trains a classification model by using a feed-forward artificial neural network. The experimental results show that the accuracy of detection using the convolutional features is better than the ones using the traditional features. It can achieve 94.7% of detection accuracy and 2.2% of false positive rate on the known P2P botnet datasets. Furthermore, our system provides an additional confidence testing for enhancing performance of botnet detection. It further classifies the network traffic of insufficient confidence in the neural network. The experiment shows that this stage can increase the detection accuracy up to 98.6% and decrease the false positive rate up to 0.5%.", "title": "" }, { "docid": "4cde522275c034a8025c75d144a74634", "text": "Novel sentence detection aims at identifying novel information from an incoming stream of sentences. Our research applies named entity recognition (NER) and part-of-speech (POS) tagging on sentence-level novelty detection and proposes a mixed method to utilize these two techniques. Furthermore, we discuss the performance when setting different history sentence sets. Experimental results of different approaches on TREC'04 Novelty Track show that our new combined method outperforms some other novelty detection methods in terms of precision and recall. The experimental observations of each approach are also discussed.", "title": "" }, { "docid": "d1525fdab295a16d5610210e80fb8104", "text": "The analysis of big data requires powerful, scalable, and accurate data analytics techniques that the traditional data mining and machine learning do not have as a whole. Therefore, new data analytics frameworks are needed to deal with the big data challenges such as volumes, velocity, veracity, variety of the data. Distributed data mining constitutes a promising approach for big data sets, as they are usually produced in distributed locations, and processing them on their local sites will reduce significantly the response times, communications, etc. In this paper, we propose to study the performance of a distributed clustering, called Dynamic Distributed Clustering (DDC). DDC has the ability to remotely generate clusters and then aggregate them using an efficient aggregation algorithm. The technique is developed for spatial datasets. We evaluated the DDC using two types of communications (synchronous and asynchronous), and tested using various load distributions. 
The experimental results show that the approach has super-linear speed-up, scales up very well, and can take advantage of the recent programming models, such as MapReduce model, as its results are not affected by the types of communications.", "title": "" }, { "docid": "1982db485fbef226a5a1b839fa9bf12e", "text": "The photopigment in the human eye that transduces light for circadian and neuroendocrine regulation, is unknown. The aim of this study was to establish an action spectrum for light-induced melatonin suppression that could help elucidate the ocular photoreceptor system for regulating the human pineal gland. Subjects (37 females, 35 males, mean age of 24.5 +/- 0.3 years) were healthy and had normal color vision. Full-field, monochromatic light exposures took place between 2:00 and 3:30 A.M. while subjects' pupils were dilated. Blood samples collected before and after light exposures were quantified for melatonin. Each subject was tested with at least seven different irradiances of one wavelength with a minimum of 1 week between each nighttime exposure. Nighttime melatonin suppression tests (n = 627) were completed with wavelengths from 420 to 600 nm. The data were fit to eight univariant, sigmoidal fluence-response curves (R(2) = 0.81-0.95). The action spectrum constructed from these data fit an opsin template (R(2) = 0.91), which identifies 446-477 nm as the most potent wavelength region providing circadian input for regulating melatonin secretion. The results suggest that, in humans, a single photopigment may be primarily responsible for melatonin suppression, and its peak absorbance appears to be distinct from that of rod and cone cell photopigments for vision. The data also suggest that this new photopigment is retinaldehyde based. These findings suggest that there is a novel opsin photopigment in the human eye that mediates circadian photoreception.", "title": "" }, { "docid": "a70d064af5e8c5842b8ca04abc3fb2d6", "text": "In the current scenario of cloud computing, heterogeneous resources are located in various geographical locations requiring security-aware resource management to handle security threats. However, existing techniques are unable to protect systems from security attacks. To provide a secure cloud service, a security-based resource management technique is required that manages cloud resources automatically and delivers secure cloud services. In this paper, we propose a self-protection approach in cloud resource management called SECURE, which offers self-protection against security attacks and ensures continued availability of services to authorized users. The performance of SECURE has been evaluated using SNORT. The experimental results demonstrate that SECURE performs effectively in terms of both the intrusion detection rate and false positive rate. Further, the impact of security on quality of service (QoS) has been analyzed.", "title": "" }, { "docid": "170e2b0f15d9485bb3c00026c6c384a8", "text": "Chatbots are a rapidly expanding application of dialogue systems with companies switching to bot services for customer support, and new applications for users interested in casual conversation. One style of casual conversation is argument; many people love nothing more than a good argument. Moreover, there are a number of existing corpora of argumentative dialogues, annotated for agreement and disagreement, stance, sarcasm and argument quality. 
This paper introduces Debbie, a novel arguing bot, that selects arguments from conversational corpora, and aims to use them appropriately in context. We present an initial working prototype of Debbie, with some preliminary evaluation and describe future work.", "title": "" }, { "docid": "8244bb1d75e550beb417049afb1ff9d5", "text": "Electronically available data on the Web is exploding at an ever increasing pace. Much of this data is unstructured, which makes searching hard and traditional database querying impossible. Many Web documents, however, contain an abundance of recognizable constants that together describe the essence of a document’s content. For these kinds of data-rich, multiple-record documents (e.g. advertisements, movie reviews, weather reports, travel information, sports summaries, financial statements, obituaries, and many others) we can apply a conceptual-modeling approach to extract and structure data automatically. The approach is based on an ontology—a conceptual model instance—that describes the data of interest, including relationships, lexical appearance, and context keywords. By parsing the ontology, we can automatically produce a database scheme and recognizers for constants and keywords, and then invoke routines to recognize and extract data from unstructured documents and structure it according to the generated database scheme. Experiments show that it is possible to achieve good recall and precision ratios for documents that are rich in recognizable constants and narrow in ontological breadth. Our approach is less labor-intensive than other approaches that manually or semiautomatically generate wrappers, and it is generally insensitive to changes in Web-page format.", "title": "" }, { "docid": "4ecb2bd91312598428745851cac90d64", "text": "In large parking area attached to shopping malls and so on, it is difficult to find a vacant parking space. In addition, searching for parking space during long time leads to drivers stress and wasteful energy loss. In order to solve these problems, the navigation system in parking area by using ZigBee networks is proposed in this paper. The ZigBee is expected to realize low power consumption wireless system with low cost. Moreover, the ZigBee can form ad-hoc network easily and more than 65000 nodes can connect at the same time. Therefore, it is suitable for usage in the large parking area. In proposed system, the shortest route to the vacant parking space is transmitted to the own vehicle by the ZigBee ad-hoc network. Thus, the efficient guide is provided to the drivers. To show the effectiveness of the proposed parking system, the average time for arrival in the parking area is evaluated, and the performance of the vehicles that equips the ZigBee terminals is compared with the ordinary vehicles that do not equip the ZigBee terminals.", "title": "" }, { "docid": "c998270736000da12e509103af2c70ec", "text": "Flash memory grew from a simple concept in the early 1980s to a technology that generated close to $23 billion in worldwide revenue in 2007, and this represents one of the many success stories in the semiconductor industry. This success was made possible by the continuous innovation of the industry along many different fronts. In this paper, the history, the basic science, and the successes of flash memories are briefly presented. 
Flash memories have followed the Moore’s Law scaling trend for which finer line widths, achieved by improved lithographic resolution, enable more memory bits to be produced for the same silicon area, reducing cost per bit. When looking toward the future, significant challenges exist to the continued scaling of flash memories. In this paper, I discuss possible areas that need development in order to overcome some of the size-scaling challenges. Innovations are expected to continue in the industry, and flash memories will continue to follow the historical trend in cost reduction of semiconductor memories through the rest of this decade.", "title": "" }, { "docid": "d1756aa5f0885157bdad130d96350cd3", "text": "In this paper, we describe the winning approach for the RecSys Challenge 2015. Our key points are (1) two-stage classification, (2) massive usage of categorical features, (3) strong classifiers built by gradient boosting and (4) threshold optimization based directly on the competition score. We describe our approach and discuss how it can be used to build scalable personalization systems.", "title": "" }, { "docid": "e9b036925d05faa55b55ec8711715296", "text": "Chest X-rays is one of the most commonly available and affordable radiological examinations in clinical practice. While detecting thoracic diseases on chest X-rays is still a challenging task for machine intelligence, due to 1) the highly varied appearance of lesion areas on X-rays from patients of different thoracic disease and 2) the shortage of accurate pixel-level annotations by radiologists for model training. Existing machine learning methods are unable to deal with the challenge that thoracic diseases usually happen in localized disease-specific areas. In this article, we propose a weakly supervised deep learning framework equipped with squeeze-and-excitation blocks, multi-map transfer and max-min pooling for classifying common thoracic diseases as well as localizing suspicious lesion regions on chest X-rays. The comprehensive experiments and discussions are performed on the ChestX-ray14 dataset. Both numerical and visual results have demonstrated the effectiveness of proposed model and its better performance against the state-of-the-art pipelines.", "title": "" }, { "docid": "940e7dc630b7dcbe097ade7abb2883a4", "text": "Modern object detection methods typically rely on bounding box proposals as input. While initially popularized in the 2D case, this idea has received increasing attention for 3D bounding boxes. Nevertheless, existing 3D box proposal techniques all assume having access to depth as input, which is unfortunately not always available in practice. In this paper, we therefore introduce an approach to generating 3D box proposals from a single monocular RGB image. To this end, we develop an integrated, fully differentiable framework that inherently predicts a depth map, extracts a 3D volumetric scene representation and generates 3D object proposals. At the core of our approach lies a novel residual, differentiable truncated signed distance function module, which, accounting for the relatively low accuracy of the predicted depth map, extracts a 3D volumetric representation of the scene. 
Our experiments on the standard NYUv2 dataset demonstrate that our framework lets us generate high-quality 3D box proposals and that it outperforms the two-stage technique consisting of successively performing state-of-the-art depth prediction and depthbased 3D proposal generation.", "title": "" }, { "docid": "fb7961117dae98e770e0fe84c33673b9", "text": "Named-Entity Recognition (NER) aims at identifying the fragments of a given text that mention a given entity of interest. This manuscript presents our Minimal named-Entity Recognizer (MER), designed with flexibility, autonomy and efficiency in mind. To annotate a given text, MER only requires a lexicon (text file) with the list of terms representing the entities of interest; and a GNU Bash shell grep and awk tools. MER was deployed in a cloud infrastructure using multiple Virtual Machines to work as an annotation server and participate in the Technical Interoperability and Performance of annotation Servers (TIPS) task of BioCreative V.5. Preliminary results show that our solution processed each document (text retrieval and annotation) in less than 3 seconds on average without using any type of cache. MER is publicly available in a GitHub repository (https://github.com/lasigeBioTM/MER) and through a RESTful Web service (http://labs.fc.ul.pt/mer/).", "title": "" }, { "docid": "26b0fd17e691a1a95e4c08aa53167b43", "text": "We propose Teacher-Student Curriculum Learning (TSCL), a framework for automatic curriculum learning, where the Student tries to learn a complex task and the Teacher automatically chooses subtasks from a given set for the Student to train on. We describe a family of Teacher algorithms that rely on the intuition that the Student should practice more those tasks on which it makes the fastest progress, i.e. where the slope of the learning curve is highest. In addition, the Teacher algorithms address the problem of forgetting by also choosing tasks where the Student’s performance is getting worse. We demonstrate that TSCL matches or surpasses the results of carefully hand-crafted curricula in two tasks: addition of decimal numbers with LSTM and navigation in Minecraft. Using our automatically generated curriculum enabled to solve a Minecraft maze that could not be solved at all when training directly on solving the maze, and the learning was an order of magnitude faster than uniform sampling of subtasks.", "title": "" }, { "docid": "428c480be4ae3d2043c9f5485087c4af", "text": "Current difference-expansion (DE) embedding techniques perform one layer embedding in a difference image. They do not turn to the next difference image for another layer embedding unless the current difference image has no expandable differences left. The obvious disadvantage of these techniques is that image quality may have been severely degraded even before the later layer embedding begins because the previous layer embedding has used up all expandable differences, including those with large magnitude. Based on integer Haar wavelet transform, we propose a new DE embedding algorithm, which utilizes the horizontal as well as vertical difference images for data hiding. We introduce a dynamical expandable difference search and selection mechanism. This mechanism gives even chances to small differences in two difference images and effectively avoids the situation that the largest differences in the first difference image are used up while there is almost no chance to embed in small differences of the second difference image. 
We also present an improved histogram-based difference selection and shifting scheme, which refines our algorithm and makes it resilient to different types of images. Compared with current algorithms, the proposed algorithm often has better embedding capacity versus image quality performance. The advantage of our algorithm is more obvious near the embedding rate of 0.5 bpp.", "title": "" } ]
scidocsrr
c6e5af0540d26129576a7e1e371d5528
Why Do Social Media Users Share Misinformation?
[ { "docid": "85d4675562eb87550c3aebf0017e7243", "text": "Online social media are complementing and in some cases replacing person-to-person social interaction and redefining the diffusion of information. In particular, microblogs have become crucial grounds on which public relations, marketing, and political battles are fought. We introduce an extensible framework that will enable the real-time analysis of meme diffusion in social media by mining, visualizing, mapping, classifying, and modeling massive streams of public microblogging events. We describe a Web service that leverages this framework to track political memes in Twitter and help detect astroturfing, smear campaigns, and other misinformation in the context of U.S. political elections. We present some cases of abusive behaviors uncovered by our service. Finally, we discuss promising preliminary results on the detection of suspicious memes via supervised learning based on features extracted from the topology of the diffusion networks, sentiment analysis, and crowdsourced annotations.", "title": "" }, { "docid": "9948738a487ed899ec50ac292e1f9c6d", "text": "A Web survey of 1,715 college students was conducted to examine Facebook Groups users' gratifications and the relationship between users' gratifications and their political and civic participation offline. A factor analysis revealed four primary needs for participating in groups within Facebook: socializing, entertainment, self-status seeking, and information. These gratifications vary depending on user demographics such as gender, hometown, and year in school. The analysis of the relationship between users' needs and civic and political participation indicated that, as predicted, informational uses were more correlated to civic and political action than to recreational uses.", "title": "" }, { "docid": "96bb4155000096c1cba6285ad82c9a4d", "text": "0747-5632/$ see front matter 2011 Elsevier Ltd. A doi:10.1016/j.chb.2011.10.002 ⇑ Corresponding author. Tel.: +65 6790 6636; fax: + E-mail addresses: leecs@ntu.edu.sg (C.S. Lee), malo 1 Tel.: +65 67905772; fax: +65 6791 5214. Recent events indicate that sharing news in social media has become a phenomenon of increasing social, economic and political importance because individuals can now participate in news production and diffusion in large global virtual communities. Yet, knowledge about factors influencing news sharing in social media remains limited. Drawing from the uses and gratifications (U&G) and social cognitive theories (SCT), this study explored the influences of information seeking, socializing, entertainment, status seeking and prior social media sharing experience on news sharing intention. A survey was designed and administered to 203 students in a large local university. Results from structural equation modeling (SEM) analysis revealed that respondents who were driven by gratifications of information seeking, socializing, and status seeking were more likely to share news in social media platforms. Prior experience with social media was also a significant determinant of news sharing intention. Implications and directions for future work are discussed. 2011 Elsevier Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "28b493b0f30c6605ff0c22ccea5d2ace", "text": "A serious threat today is malicious executables. It is designed to damage computer system and some of them spread over network without the knowledge of the owner using the system. Two approaches have been derived for it i.e. Signature Based Detection and Heuristic Based Detection. These approaches performed well against known malicious programs but cannot catch the new malicious programs. Different researchers have proposed methods using data mining and machine learning for detecting new malicious programs. The method based on data mining and machine learning has shown good results compared to other approaches. This work presents a static malware detection system using data mining techniques such as Information Gain, Principal component analysis, and three classifiers: SVM, J48, and Naïve Bayes. For overcoming the lack of usual anti-virus products, we use methods of static analysis to extract valuable features of Windows PE file. We extract raw features of Windows executables which are PE header information, DLLs, and API functions inside each DLL of Windows PE file. Thereafter, Information Gain, calling frequencies of the raw features are calculated to select valuable subset features, and then Principal Component Analysis is used for dimensionality reduction of the selected features. By adopting the concepts of machine learning and data-mining, we construct a static malware detection system which has a detection rate of 99.6%.", "title": "" }, { "docid": "170f14fbf337186c8bd9f36390916d2e", "text": "In this paper, we draw upon two sets of theoretical resources to develop a comprehensive theory of sexual offender rehabilitation named the Good Lives Model-Comprehensive (GLM-C). The original Good Lives Model (GLM-O) forms the overarching values and principles guiding clinical practice in the GLM-C. In addition, the latest sexual offender theory (i.e., the Integrated Theory of Sexual Offending; ITSO) provides a clear etiological grounding for these principles. The result is a more substantial and improved rehabilitation model that is able to conceptually link latest etiological theory with clinical practice. Analysis of the GLM-C reveals that it also has the theoretical resources to secure currently used self-regulatory treatment practice within a meaningful structure. D 2005 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "45494f14c2d9f284dd3ad3a5be49ca78", "text": "Developing segmentation techniques for overlapping cells has become a major hurdle for automated analysis of cervical cells. In this paper, an automated three-stage segmentation approach to segment the nucleus and cytoplasm of each overlapping cell is described. First, superpixel clustering is conducted to segment the image into small coherent clusters that are used to generate a refined superpixel map. The refined superpixel map is passed to an adaptive thresholding step to initially segment the image into cellular clumps and background. Second, a linear classifier with superpixel-based features is designed to finalize the separation between nuclei and cytoplasm. Finally, edge and region based cell segmentation are performed based on edge enhancement process, gradient thresholding, morphological operations, and region properties evaluation on all detected nuclei and cytoplasm pairs. The proposed framework has been evaluated using the ISBI 2014 challenge dataset. The dataset consists of 45 synthetic cell images, yielding 270 cells in total. 
Compared with the state-of-the-art approaches, our approach provides more accurate nuclei boundaries, as well as successfully segments most of overlapping cells.", "title": "" }, { "docid": "60718ad958d65eb60a520d516f1dd4ea", "text": "With the advent of the Internet, more and more public universities in Malaysia are putting in effort to introduce e-learning in their respective universities. Using a structured questionnaire derived from the literature, data was collected from 250 undergraduate students from a public university in Penang, Malaysia. Data was analyzed using AMOS version 16. The results of the structural equation model indicated that service quality (β = 0.20, p < 0.01), information quality (β = 0.37, p < 0.01) and system quality (β = 0.20, p < 0.01) were positively related to user satisfaction explaining a total of 45% variance. The second regression analysis was to examine the impact of user satisfaction on continuance intention. The results showed that satisfaction (β = 0.31, p < 0.01), system quality (β = 0.18, p < 0.01) and service quality (β = 0.30, p < 0.01) were positively related to continuance intention explaining 44% of the variance. Implications from these findings to e-learning system developers and implementers were further elaborated.", "title": "" }, { "docid": "e425bba0f3ab24c226ab8881f3fe0780", "text": "We present a new method for solving total variation (TV) minimization problems in image restoration. The main idea is to remove some of the singularity caused by the nondifferentiability of the quantity |∇u| in the definition of the TV-norm before we apply a linearization technique such as Newton’s method. This is accomplished by introducing an additional variable for the flux quantity appearing in the gradient of the objective function, which can be interpreted as the normal vector to the level sets of the image u. Our method can be viewed as a primal-dual method as proposed by Conn and Overton [A Primal-Dual Interior Point Method for Minimizing a Sum of Euclidean Norms, preprint, 1994] and Andersen [Ph.D. thesis, Odense University, Denmark, 1995] for the minimization of a sum of Euclidean norms. In addition to possessing local quadratic convergence, experimental results show that the new method seems to be globally convergent.", "title": "" }, { "docid": "9c2609adae64ec8d0b4e2cc987628c05", "text": "We propose a novel method capable of retrieving clips from untrimmed videos based on natural language queries. This cross-modal retrieval task plays a key role in visual-semantic understanding, and requires localizing clips in time and computing their similarity to the query sentence. Current methods generate sentence and video embeddings and then compare them using a late fusion approach, but this ignores the word order in queries and prevents more fine-grained comparisons. Motivated by the need for fine-grained multi-modal feature fusion, we propose a novel early fusion embedding approach that combines video and language information at the word level. Furthermore, we use the inverse task of dense video captioning as a side-task to improve the learned embedding. Our full model combines these components with an efficient proposal pipeline that performs accurate localization of potential video clips. 
We present a comprehensive experimental validation on two large-scale text-to-clip datasets (Charades-STA and DiDeMo) and attain state-ofthe-art retrieval results with our model.", "title": "" }, { "docid": "38f75a17a30c1d3c08dc316cb8a3e4ac", "text": "There are often problems when students enter a course with widely different experience levels with key course topics. If the material is covered too slowly, those with greater experience get bored and lose interest. If the material is covered too quickly, those with less experience get lost and feel incompetent. This problem with incoming students of our Computer Science Major led us to create CS 0.5: an introductory Computer Science course to target those CS majors who have little or no background with programming. Our goal is to provide these students with an engaging curriculum and prepare them to keep pace in future courses with those students who enter with a stronger background.\n Following the lead of Mark Guzdial's work on using media computation for non-majors at Georgia Tech, we use media computation as the tool to provide this engaging curriculum. We report here on our experience to date using the CS 0.5 approach with a media computation course.", "title": "" }, { "docid": "edfb50c784e6e7a89ce12d524f667398", "text": "Unconventional machining processes (communally named advanced or modern machining processes) are widely used by manufacturing industries. These advanced machining processes allow producing complex profiles and high quality-products. However, several process parameters should be optimized to achieve this end. In this paper, the optimization of process parameters of two conventional and four advanced machining processes is investigated: drilling process, grinding process, abrasive jet machining (AJM), abrasive water jet machining (AWJM), ultrasonic machining (USM), and water jet machining (WJM), respectively. This research employed two bio-inspired algorithms called the cuckoo optimization algorithm (COA) and the hoopoe heuristic (HH) to optimize the machining control parameters of these processes. The obtained results are compared with other optimization algorithms described and applied in the literature.", "title": "" }, { "docid": "959f2723ba18e71b2f4acd6108350dd3", "text": "The manufacturing, converting and ennobling processes of paper are truly large area and reel-to-reel processes. Here, we describe a project focusing on using the converting and ennobling processes of paper in order to introduce electronic functions onto the paper surface. As key active electronic materials we are using organic molecules and polymers. We develop sensor, communication and display devices on paper and the main application areas are packaging and paper display applications.", "title": "" }, { "docid": "4afbb5f877f3920dccdf60f6f4dfbf91", "text": "Handling degenerate rotation-only camera motion is a challenge for keyframe-based simultaneous localization and mapping with six degrees of freedom. Existing systems usually filter corresponding keyframe candidates, resulting in mapping starvation and tracking failure. We propose to employ these otherwise discarded keyframes to build up local panorama maps registered in the 3D map. Thus, the system is able to maintain tracking during rotational camera motions. Additionally, we seek to actively associate panoramic and 3D map data for improved 3D mapping through the triangulation of more new 3D map features. 
We demonstrate the efficacy of our approach in several evaluations that show how the combined system handles rotation only camera motion while creating larger and denser maps compared to a standard SLAM system.", "title": "" }, { "docid": "f6a24aa476ec27b86e549af6d30f22b6", "text": "Designing autonomous robotic systems able to manipulate deformable objects without human intervention constitutes a challenging area of research. The complexity of interactions between a robot manipulator and a deformable object originates from a wide range of deformation characteristics that have an impact on varying degrees of freedom. Such sophisticated interaction can only take place with the assistance of intelligent multisensory systems that combine vision data with force and tactile measurements. Hence, several issues must be considered at the robotic and sensory levels to develop genuine dexterous robotic manipulators for deformable objects. This chapter presents a thorough examination of the modern concepts developed by the robotic community related to deformable objects grasping and manipulation. Since the convention widely adopted in the literature is often to extend algorithms originally proposed for rigid objects, a comprehensive coverage on the new trends on rigid objects manipulation is initially proposed. State-of-the-art techniques on robotic interaction with deformable objects are then examined and discussed. The chapter proposes a critical evaluation of the manipulation algorithms, the instrumentation systems adopted and the examination of end-effector technologies, including dexterous robotic hands. The motivation for this review is to provide an extensive appreciation of state-of-the-art solutions to help researchers and developers determine the best possible options when designing autonomous robotic systems to interact with deformable objects. Typically in a robotic setup, when robot manipulators are programmed to perform their tasks, they must have a complete knowledge about the exact structure of the manipulated object (shape, surface texture, rigidity) and about its location in the environment (pose). For some of these tasks, the manipulator becomes in contact with the object. Hence, interaction forces and moments are developed and consequently these interaction forces and moments, as well as the position of the end-effector, must be controlled, which leads to the concept of “force controlled manipulation” (Natale, 2003). There are different control strategies used in 28", "title": "" }, { "docid": "9c74807f3c1a5b0928ade3f9e3c1229d", "text": "Current perception systems of intelligent vehicles not only make use of visual sensors, but also take advantage of depth sensors. Extrinsic calibration of these heterogeneous sensors is required for fusing information obtained separately by vision sensors and light detection and ranging (LIDARs). In this paper, an optimal extrinsic calibration algorithm between a binocular stereo vision system and a 2-D LIDAR is proposed. Most extrinsic calibration methods between cameras and a LIDAR proceed by calibrating separately each camera with the LIDAR. We show that by placing a common planar chessboard with different poses in front of the multisensor system, the extrinsic calibration problem is solved by a 3-D reconstruction of the chessboard and geometric constraints between the views from the stereovision system and the LIDAR. Furthermore, our method takes sensor noise into account that it provides optimal results under Mahalanobis distance constraints. 
To evaluate the performance of the algorithm, experiments based on both computer simulation and real datasets are presented and analyzed. The proposed approach is also compared with a popular camera/LIDAR calibration method to show the benefits of our method.", "title": "" }, { "docid": "cc1f6ab87bdf7edd4f6e2c024988a838", "text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Versions of published Taylor & Francis and Routledge Open articles and Taylor & Francis and Routledge Open Select articles posted to institutional or subject repositories or any other third-party website are without warranty from Taylor & Francis of any kind, either expressed or implied, including, but not limited to, warranties of merchantability, fitness for a particular purpose, or non-infringement. Any opinions and views expressed in this article are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor & Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.", "title": "" }, { "docid": "e3c3f3fb3dd432017bf92e0fe5f7c341", "text": "This study aimed to evaluate the accuracy of intraoral scanners in full-arch scans. A representative model with 14 prepared abutments was digitized using an industrial scanner (reference scanner) as well as four intraoral scanners (iTero, CEREC AC Bluecam, Lava C.O.S., and Zfx IntraScan). Datasets obtained from different scans were loaded into 3D evaluation software, superimposed, and compared for accuracy. One-way analysis of variance (ANOVA) was implemented to compute differences within groups (precision) as well as comparisons with the reference scan (trueness). A level of statistical significance of p < 0.05 was set. Mean trueness values ranged from 38 to 332.9 μm. Data analysis yielded statistically significant differences between CEREC AC Bluecam and other scanners as well as between Zfx IntraScan and Lava C.O.S. Mean precision values ranged from 37.9 to 99.1 μm. Statistically significant differences were found between CEREC AC Bluecam and Lava C.O.S., CEREC AC Bluecam and iTero, Zfx Intra Scan and Lava C.O.S., and Zfx Intra Scan and iTero (p < 0.05). Except for one intraoral scanner system, all tested systems showed a comparable level of accuracy for full-arch scans of prepared teeth. Further studies are needed to validate the accuracy of these scanners under clinical conditions. Despite excellent accuracy in single-unit scans having been demonstrated, little is known about the accuracy of intraoral scanners in simultaneous scans of multiple abutments. 
Although most of the tested scanners showed comparable values, the results suggest that the inaccuracies of the obtained datasets may contribute to inaccuracies in the final restorations.", "title": "" }, { "docid": "652536bf512c975b7cb61e60a3246829", "text": "OBJECTIVE\nInterventions to prevent type 2 diabetes should be directed toward individuals at increased risk for the disease. To identify such individuals without laboratory tests, we developed the Diabetes Risk Score.\n\n\nRESEARCH DESIGN AND METHODS\nA random population sample of 35- to 64-year-old men and women with no antidiabetic drug treatment at baseline were followed for 10 years. New cases of drug-treated type 2 diabetes were ascertained from the National Drug Registry. Multivariate logistic regression model coefficients were used to assign each variable category a score. The Diabetes Risk Score was composed as the sum of these individual scores. The validity of the score was tested in an independent population survey performed in 1992 with prospective follow-up for 5 years.\n\n\nRESULTS\nAge, BMI, waist circumference, history of antihypertensive drug treatment and high blood glucose, physical activity, and daily consumption of fruits, berries, or vegetables were selected as categorical variables. Complete baseline risk data were found in 4435 subjects with 182 incident cases of diabetes. The Diabetes Risk Score value varied from 0 to 20. To predict drug-treated diabetes, the score value >or=9 had sensitivity of 0.78 and 0.81, specificity of 0.77 and 0.76, and positive predictive value of 0.13 and 0.05 in the 1987 and 1992 cohorts, respectively.\n\n\nCONCLUSIONS\nThe Diabetes Risk Score is a simple, fast, inexpensive, noninvasive, and reliable tool to identify individuals at high risk for type 2 diabetes.", "title": "" }, { "docid": "f34e0d226da243a2752bb65c0174f0c9", "text": "We used echo state networks, a subclass of recurrent neural networks, to predict stock prices of the S&P 500. Our network outperformed a Kalman filter, predicting more of the higher frequency fluctuations in stock price. The Challenge of Time Series Prediction Learning from past history is a fudamentality ill-posed. A model may fit past data well but not perform well when presented with new inputs. With recurrent neural networks (RNNs), we leverage the modeling abilities of neural networks (NNs) for time series forecastings. Feedforward NNs have done well in classification tasks such as handwriting recognition, however in dynamical environments, we need techniques that account for history. In RNNs, signals passing through recurrent connections constitute an effective memory for the network, which can then use information in memory to better predict future time series values. Unfortunately, RNNs are difficult to train. Traditional techniques used with feedforward NNs such as backpropagation fail to yield acceptable performance. However, subsets of RNNs that are more amenable to training have been developed in the emerging field known as reservoir computing. In reservoir computing, the recurrent connections of the network are viewed as a fixed reservoir used to map inputs into a high dimensional, dynamical space–a similar idea to the support vector machine. With a sufficiently high dimensional space, a simple linear decode can be used to approximate any function varying with time. Two reservoir networks known as Echo State Networks (ESNs) and Liquid State Machines (LSMs) have met with success in modeling nonlinear dynamical systems [2, 4]. 
We focus on the former, ESN, in this project and use it to predict stock prices and compare its performance to a Kalman filter. In an ESN, only the output weights are trained (see Figure 1). Echo State Network Implementation The state vector, x(t), of the network is governed by x(t + 1) = f(W^in u(t) + W x(t) + W^fb y(t)), (1) where f(·) = tanh(·), W^in describes the weights connecting the inputs to the network, u(t) is the input vector, W describes the recurrent weights, W^fb describes the feedback weights connecting the outputs back to the network, and y(t) are the outputs. The output y(t) is governed by y(t) = W^out z(t), where z(t) = [x(t),u(t)] is the extended state. By including the input vector, the extended state allows the network to use a linear combination of the inputs in addition to the state to form the output. ESN creation follows the procedure outlined in [3]. Briefly, 1. Initialize network of N reservoir units with random W^in, W, and W^fb.", "title": "" }, 

{ "docid": "475fc34de30b8310a6eb2aba176f33fa", "text": "A novel compact broadband water dense patch antenna with relatively thick air layer is introduced. The distilled water with high permittivity is located on the top of the low-loss, low-permittivity supporting substrate to provide an electric wall boundary. The dense water patch antenna is excited with cavity mode, reducing the impact of dielectric loss of the water on the antenna efficiency. The designs of loading the distilled water and T-shaped shorting sheet are applied for size reduction. The wide bandwidth is attributed to the coupling L-shaped probe, proper size of the coupled T-shaped shorting sheet, and thick air layer. As a result, the dimensions of the water patch are only 0.146 λ0 × 0.078 λ0 × 0.056 λ0. The proposed antenna has a high radiation up to 70% over the lower frequency band of 4G mobile communication from 690 to 960 MHz. Good agreements are achieved between the measured results and the simulated results.", "title": "" }, 

{ "docid": "29fcfb65f54678ed79d9712ed5755cb8", "text": "Recent studies show that the popularity of the pairs trading strategy has been growing and it may pose a problem as the opportunities to trade become much smaller. Therefore, the optimization of pairs trading strategy has gained widespread attention among high-frequency traders. In this paper, using reinforcement learning, we examine the optimum level of pairs trading specifications over time. More specifically, the reinforcement learning agent chooses the optimum level of parameters of pairs trading to maximize the objective function. Results are obtained by applying a combination of the reinforcement learning method and cointegration approach. We find that boosting pairs trading specifications by using the proposed approach significantly outperforms the previous methods. Empirical results based on the comprehensive intraday data which are obtained from S&P500 constituent stocks confirm the efficiency of our proposed method.", "title": "" }, 
{ "docid": "b6fc3332243aa421fbe812e5c4698dc9", "text": "BACKGROUND\nStatistical shape models are widely used in biomedical research. They are routinely implemented for automatic image segmentation or object identification in medical images. In these fields, however, the acquisition of the large training datasets, required to develop these models, is usually a time-consuming process. Even after this effort, the collections of datasets are often lost or mishandled resulting in replication of work.\n\n\nOBJECTIVE\nTo solve these problems, the Virtual Skeleton Database (VSD) is proposed as a centralized storage system where the data necessary to build statistical shape models can be stored and shared.\n\n\nMETHODS\nThe VSD provides an online repository system tailored to the needs of the medical research community. The processing of the most common image file types, a statistical shape model framework, and an ontology-based search provide the generic tools to store, exchange, and retrieve digital medical datasets. The hosted data are accessible to the community, and collaborative research catalyzes their productivity.\n\n\nRESULTS\nTo illustrate the need for an online repository for medical research, three exemplary projects of the VSD are presented: (1) an international collaboration to achieve improvement in cochlear surgery and implant optimization, (2) a population-based analysis of femoral fracture risk between genders, and (3) an online application developed for the evaluation and comparison of the segmentation of brain tumors.\n\n\nCONCLUSIONS\nThe VSD is a novel system for scientific collaboration for the medical image community with a data-centric concept and semantically driven search option for anatomical structures. The repository has been proven to be a useful tool for collaborative model building, as a resource for biomechanical population studies, or to enhance segmentation algorithms.", "title": "" }, 

{ "docid": "ee9ca88d092538a399d192cf1b9e9df6", "text": "The new user problem in recommender systems is still challenging, and there is not yet a unique solution that can be applied in any domain or situation. In this paper we analyze viable solutions to the new user problem in collaborative filtering (CF) that are based on the exploitation of user personality information: (a) personality-based CF, which directly improves the recommendation prediction model by incorporating user personality information, (b) personality-based active learning, which utilizes personality information for identifying additional useful preference data in the target recommendation domain to be elicited from the user, and (c) personality-based cross-domain recommendation, which exploits personality information to better use user preference data from auxiliary domains which can be used to compensate the lack of user preference data in the target domain. We benchmark the effectiveness of these methods on large datasets that span several domains, namely movies, music and books. 
Our results show that personality-aware methods achieve performance improvements that range from 6 to 94 % for users completely new to the system, while increasing the novelty of the recommended items by 3–40 % with respect to the non-personalized popularity baseline. We also discuss the limitations of our approach and the situations in which the proposed methods can be better applied, hence providing guidelines for researchers and practitioners in the field.", "title": "" } ]
scidocsrr
b93873a378ef697f0aea212862afa464
Practical and Optimal LSH for Angular Distance
[ { "docid": "0c4ca5a63c7001e6275b05da7771a7a6", "text": "We present a new data structure for the c-approximate near neighbor problem (ANN) in the Euclidean space. For n points in R, our algorithm achieves Oc(n + d log n) query time and Oc(n + d log n) space, where ρ ≤ 0.73/c + O(1/c) + oc(1). This is the first improvement over the result by Andoni and Indyk (FOCS 2006) and the first data structure that bypasses a locality-sensitive hashing lower bound proved by O’Donnell, Wu and Zhou (ICS 2011). By known reductions we obtain a data structure for the Hamming space and l1 norm with ρ ≤ 0.73/c+O(1/c) + oc(1), which is the first improvement over the result of Indyk and Motwani (STOC 1998). Thesis Supervisor: Piotr Indyk Title: Professor of Electrical Engineering and Computer Science", "title": "" } ]
[ { "docid": "1f60109ccff855da33e8540b40f2d3d3", "text": "Nonnegative matrix factorization (NMF) is a widely-used method for multivariate analysis of nonnegative data, the goal of which is decompose a data matrix into a basis matrix and an encoding variable matrix with all of these matrices allowed to have only nonnegative elements. In this paper we present simple algorithms for orthogonal NMF, where orthogonality constraints are imposed on basis matrix or encoding matrix. We develop multiplicative updates directly from the true gradient (natural gradient) in Stiefel manifold, whereas existing algorithms consider additive orthogonality constraints. Numerical experiments on face image data for a image representation task show that our orthogonal NMF algorithm preserves the orthogonality, while the goodness-of-fit (GOF) is minimized. We also apply our orthogonal NMF to a clustering task, showing that it works better than the original NMF, which is confirmed by experiments on several UCI repository data sets.", "title": "" }, { "docid": "da8e929b1599b3241e75e4a1ead06207", "text": "The knowledge pyramid has been used for several years to illustrate the hierarchical relationships between data, information, knowledge, and wisdom. An earlier version of this paper presented a revised knowledge-KM pyramid that included processes such as filtering and sense making, reversed the pyramid by positing there was more knowledge than data, and showed knowledge management as an extraction of the pyramid. This paper expands the revised knowledge pyramid to include the Internet of Things and Big Data. The result is a revision of the data aspect of the knowledge pyramid. Previous thought was of data as reflections of reality as recorded by sensors. Big Data and the Internet of Things expand sensors and readings to create two layers of data. The top layer of data is the traditional transaction / operational data and the bottom layer of data is an expanded set of data reflecting massive data sets and sensors that are near mirrors of reality. The result is a knowledge pyramid that appears as an hourglass.", "title": "" }, { "docid": "4828e830d440cb7a2c0501952033da2f", "text": "This paper presents a current-mode control non-inverting buck-boost converter. The proposed circuit is controlled by the current mode and operated in three operation modes which are buck, buck-boost, and boost mode. The operation mode is automatically determined by the ratio between the input and output voltages. The proposed circuit is simulated by HSPICE with 0.5 um standard CMOS parameters. Its input voltage range is 2.5–5 V, and the output voltage range is 1.5–5 V. The maximum efficiency is 92% when it operates in buck mode.", "title": "" }, { "docid": "ad4b137253407e4323e288b65b03bd08", "text": "We formulate a document summarization method to extract passage-level answers for non-factoid queries, referred to as answer-biased summaries. We propose to use external information from related Community Question Answering (CQA) content to better identify answer bearing sentences. Three optimization-based methods are proposed: (i) query-biased, (ii) CQA-answer-biased, and (iii) expanded-query-biased, where expansion terms were derived from related CQA content. A learning-to-rank-based method is also proposed that incorporates a feature extracted from related CQA content. 
Our results show that even if a CQA answer does not contain a perfect answer to a query, their content can be exploited to improve the extraction of answer-biased summaries from other corpora. The quality of CQA content is found to impact on the accuracy of optimization-based summaries, though medium quality answers enable the system to achieve a comparable (and in some cases superior) accuracy to state-of-the-art techniques. The learning-to-rank-based summaries, on the other hand, are not significantly influenced by CQA quality. We provide a recommendation of the best use of our proposed approaches in regard to the availability of different quality levels of related CQA content. As a further investigation, the reliability of our approaches was tested on another publicly available dataset.", "title": "" }, { "docid": "5b76f50ef9745ef03205d3657e6fd3cd", "text": "In this paper we present preliminary results and future directions of work for a project in which we are building an RFID based system to sense and monitor free weight exercises.", "title": "" }, { "docid": "19477ceed88d44ea8b068a4826382f44", "text": "In the era of big data, the applications generating tremendous amount of data are becoming the main focus of attention as the wide increment of data generation and storage that has taken place in the last few years. This scenario is challenging for data mining techniques which are not arrogated to the new space and time requirements. In many of the real world applications, classification of imbalanced data-sets is the point of attraction. Most of the classification methods focused on two-class imbalanced problem. So, it is necessary to solve multi-class imbalanced problem, which exist in real-world domains. In the proposed work, we introduced a methodology for classification of multi-class imbalanced data. This methodology consists of two steps: In first step we used Binarization techniques (OVA and OVO) for decomposing original dataset into subsets of binary classes. In second step, the SMOTE algorithm is applied against each subset of imbalanced binary class in order to get balanced data. Finally, to achieve classification goal Random Forest (RF) classifier is used. Specifically, oversampling technique is adapted to big data using MapReduce so that this technique is able to handle as large data-set as needed. An experimental study is carried out to evaluate the performance of proposed method. For experimental analysis, we have used different datasets from UCI repository and the proposed system is implemented on Apache Hadoop and Apache Spark platform. The results obtained shows that proposed method outperforms over other methods.", "title": "" }, { "docid": "97fb823e7b74ac0bfcc99455d801e7ec", "text": "In the fifth generation (5G) of wireless communication systems, hitherto unprecedented requirements are expected to be satisfied. As one of the promising techniques of addressing these challenges, non-orthogonal multiple access (NOMA) has been actively investigated in recent years. In contrast to the family of conventional orthogonal multiple access (OMA) schemes, the key distinguishing feature of NOMA is to support a higher number of users than the number of orthogonal resource slots with the aid of non-orthogonal resource allocation. This may be realized by the sophisticated inter-user interference cancellation at the cost of an increased receiver complexity. 
In this paper, we provide a comprehensive survey of the original birth, the most recent development, and the future research directions of NOMA. Specifically, the basic principle of NOMA will be introduced at first, with the comparison between NOMA and OMA especially from the perspective of information theory. Then, the prominent NOMA schemes are discussed by dividing them into two categories, namely, power-domain and code-domain NOMA. Their design principles and key features will be discussed in detail, and a systematic comparison of these NOMA schemes will be summarized in terms of their spectral efficiency, system performance, receiver complexity, etc. Finally, we will highlight a range of challenging open problems that should be solved for NOMA, along with corresponding opportunities and future research trends to address these challenges.", "title": "" }, { "docid": "0fefdbc0dbe68391ccfc912be937f4fc", "text": "Privacy and security are essential requirements in practical biometric systems. In order to prevent the theft of biometric patterns, it is desired to modify them through revocable and non invertible transformations called Cancelable Biometrics. In this paper, we propose an efficient algorithm for generating a Cancelable Iris Biometric based on Sectored Random Projections. Our algorithm can generate a new pattern if the existing one is stolen, retain the original recognition performance and prevent extraction of useful information from the transformed patterns. Our method also addresses some of the drawbacks of existing techniques and is robust to degradations due to eyelids and eyelashes.", "title": "" }, { "docid": "f44bfa0a366fb50a571e6df9f4c3f91d", "text": "BACKGROUND\nIn silico predictive models have proved to be valuable for the optimisation of compound potency, selectivity and safety profiles in the drug discovery process.\n\n\nRESULTS\ncamb is an R package that provides an environment for the rapid generation of quantitative Structure-Property and Structure-Activity models for small molecules (including QSAR, QSPR, QSAM, PCM) and is aimed at both advanced and beginner R users. camb's capabilities include the standardisation of chemical structure representation, computation of 905 one-dimensional and 14 fingerprint type descriptors for small molecules, 8 types of amino acid descriptors, 13 whole protein sequence descriptors, filtering methods for feature selection, generation of predictive models (using an interface to the R package caret), as well as techniques to create model ensembles using techniques from the R package caretEnsemble). Results can be visualised through high-quality, customisable plots (R package ggplot2).\n\n\nCONCLUSIONS\nOverall, camb constitutes an open-source framework to perform the following steps: (1) compound standardisation, (2) molecular and protein descriptor calculation, (3) descriptor pre-processing and model training, visualisation and validation, and (4) bioactivity/property prediction for new molecules. camb aims to speed model generation, in order to provide reproducibility and tests of robustness. QSPR and proteochemometric case studies are included which demonstrate camb's application.Graphical abstractFrom compounds and data to models: a complete model building workflow in one package.", "title": "" }, { "docid": "175239ba9ba930efd0019182b2d2f2c8", "text": "Image Steganography is the computing field of hiding information from a source into a target image in a way that it becomes almost imperceptible from one’s eyes. 
Despite the high capacity of hiding information, the usual Least Significant Bit (LSB) techniques could be easily discovered. In order to hide information in more significant bits, the target image should be optimized. In this paper, it is proposed an optimization solution based on the Standard Particle Swarm Optimization 2011 (PSO), which has been compared with a previous Genetic Algorithm-based approach showing promising results. Specifically, it is shown an adaptation in the solution in order to keep the essence of PSO while remaining message hosted bits unchanged.", "title": "" }, { "docid": "8411019e166f3b193905099721c29945", "text": "In this article we recast the Dahl, LuGre, and Maxwell-slip models as extended, generalized, or semilinear Duhem models. We classified each model as either rate independent or rate dependent. Smoothness properties of the three friction models were also considered. We then studied the hysteresis induced by friction in a single-degree-of-freedom system. The resulting system was modeled as a linear system with Duhem feedback. For each friction model, we computed the corresponding hysteresis map. Next, we developed a DC servo motor testbed and performed motion experiments. We then modeled the testbed dynamics and simulated the system using all three friction models. By comparing the simulated and experimental results, it was found that the LuGre model provides the best model of the gearbox friction characteristics. A manual tuning approach was used to determine parameters that model the friction in the DC motor.", "title": "" }, { "docid": "db36273a3669e1aeda1bf2c5ab751387", "text": "Autonomous Ground Vehicles designed for dynamic environments require a reliable perception of the real world, in terms of obstacle presence, position and speed. In this paper we present a flexible technique to build, in real time, a dense voxel-based map from a 3D point cloud, able to: (1) discriminate between stationary and moving obstacles; (2) provide an approximation of the detected obstacle's absolute speed using the information of the vehicle's egomotion computed through a visual odometry approach. The point cloud is first sampled into a full 3D map based on voxels to preserve the tridimensional information; egomotion information allows computational efficiency in voxels creation; then voxels are processed using a flood fill approach to segment them into a clusters structure; finally, with the egomotion information, the obtained clusters are labeled as stationary or moving obstacles, and an estimation of their speed is provided. The algorithm runs in real time; it has been tested on one of VisLab's AGVs using a modified SGM-based stereo system as 3D data source.", "title": "" }, { "docid": "5eb9c6540de63be3e7c645286f263b4d", "text": "Inductive Power Transfer (IPT) is a practical method for recharging Electric Vehicles (EVs) because is it safe, efficient and convenient. Couplers or Power Pads are the power transmitters and receivers used with such contactless charging systems. Due to improvements in power electronic components, the performance and efficiency of an IPT system is largely determined by the coupling or flux linkage between these pads. Conventional couplers are based on circular pad designs and due to their geometry have fundamentally limited magnetic flux above the pad. This results in poor coupling at any realistic spacing between the ground pad and the vehicle pickup mounted on the chassis. 
Performance, when added to the high tolerance to misalignment required for a practical EV charging system, necessarily results in circular pads that are large, heavy and expensive. A new pad topology termed a flux pipe is proposed in this paper that overcomes difficulties associated with conventional circular pads. Due to the magnetic structure, the topology has a significantly improved flux path making more efficient and compact IPT charging systems possible.", "title": "" }, { "docid": "ed5a17f62e4024727538aba18f39fc78", "text": "The extent to which people can focus attention in the face of irrelevant distractions has been shown to critically depend on the level and type of information load involved in their current task. The ability to focus attention improves under task conditions of high perceptual load but deteriorates under conditions of high load on cognitive control processes such as working memory. I review recent research on the effects of load on visual awareness and brain activity, including changing effects over the life span, and I outline the consequences for distraction and inattention in daily life and in clinical populations.", "title": "" }, { "docid": "99cd180d0bb08e6360328b77219919c1", "text": "In this paper, we describe our approach to RecSys 2015 challenge problem. Given a dataset of item click sessions, the problem is to predict whether a session results in a purchase and which items are purchased if the answer is yes.\n We define a simpler analogous problem where given an item and its session, we try to predict the probability of purchase for the given item. For each session, the predictions result in a set of purchased items or often an empty set.\n We apply monthly time windows over the dataset. For each item in a session, we engineer features regarding the session, the item properties, and the time window. Then, a balanced random forest classifier is trained to perform predictions on the test set.\n The dataset is particularly challenging due to privacy-preserving definition of a session, the class imbalance problem, and the volume of data. We report our findings with respect to feature engineering, the choice of sampling schemes, and classifier ensembles. Experimental results together with benefits and shortcomings of the proposed approach are discussed. The solution is efficient and practical in commodity computers.", "title": "" }, { "docid": "c4094c8b273d6332f36b6f452886de6a", "text": "This paper presents original research on prevalence, user characteristics and effect profile of N,N-dimethyltryptamine (DMT), a potent hallucinogenic which acts primarily through the serotonergic system. Data were obtained from the Global Drug Survey (an anonymous online survey of people, many of whom have used drugs) conducted between November and December 2012 with 22,289 responses. Lifetime prevalence of DMT use was 8.9% (n=1980) and past year prevalence use was 5.0% (n=1123). We explored the effect profile of DMT in 472 participants who identified DMT as the last new drug they had tried for the first time and compared it with ratings provided by other respondents on psilocybin (magic mushrooms), LSD and ketamine. DMT was most often smoked and offered a strong, intense, short-lived psychedelic high with relatively few negative effects or \"come down\". It had a larger proportion of new users compared with the other substances (24%), suggesting its popularity may increase. 
Overall, DMT seems to have a very desirable effect profile indicating a high abuse liability that maybe offset by a low urge to use more.", "title": "" }, { "docid": "9dadd96558791417495a5e1afa031851", "text": "INTRODUCTION\nLittle information is available on malnutrition-related factors among school-aged children ≥5 years in Ethiopia. This study describes the prevalence of stunting and thinness and their related factors in Libo Kemkem and Fogera, Amhara Regional State and assesses differences between urban and rural areas.\n\n\nMETHODS\nIn this cross-sectional study, anthropometrics and individual and household characteristics data were collected from 886 children. Height-for-age z-score for stunting and body-mass-index-for-age z-score for thinness were computed. Dietary data were collected through a 24-hour recall. Bivariate and backward stepwise multivariable statistical methods were employed to assess malnutrition-associated factors in rural and urban communities.\n\n\nRESULTS\nThe prevalence of stunting among school-aged children was 42.7% in rural areas and 29.2% in urban areas, while the corresponding figures for thinness were 21.6% and 20.8%. Age differences were significant in both strata. In the rural setting, fever in the previous 2 weeks (OR: 1.62; 95% CI: 1.23-2.32), consumption of food from animal sources (OR: 0.51; 95% CI: 0.29-0.91) and consumption of the family's own cattle products (OR: 0.50; 95% CI: 0.27-0.93), among others factors were significantly associated with stunting, while in the urban setting, only age (OR: 4.62; 95% CI: 2.09-10.21) and years of schooling of the person in charge of food preparation were significant (OR: 0.88; 95% CI: 0.79-0.97). Thinness was statistically associated with number of children living in the house (OR: 1.28; 95% CI: 1.03-1.60) and family rice cultivation (OR: 0.64; 95% CI: 0.41-0.99) in the rural setting, and with consumption of food from animal sources (OR: 0.26; 95% CI: 0.10-0.67) and literacy of head of household (OR: 0.24; 95% CI: 0.09-0.65) in the urban setting.\n\n\nCONCLUSION\nThe prevalence of stunting was significantly higher in rural areas, whereas no significant differences were observed for thinness. Various factors were associated with one or both types of malnutrition, and varied by type of setting. To effectively tackle malnutrition, nutritional programs should be oriented to local needs.", "title": "" }, { "docid": "427028ef819df3851e37734e5d198424", "text": "The code that provides solutions to key software requirements, such as security and fault-tolerance, tends to be spread throughout (or cross-cut) the program modules that implement the “primary functionality” of a software system. Aspect-oriented programming is an emerging programming paradigm that supports implementing such cross-cutting requirements into named program units called “aspects”. To construct a system as an aspect-oriented program (AOP), one develops code for primary functionality in traditional modules and code for cross-cutting functionality in aspect modules. Compiling and running an AOP requires that the aspect code be “woven” into the code. Although aspect-oriented programming supports the separation of concerns into named program units, explicit and implicit dependencies of both aspects and traditional modules will result in systems with new testing challenges, which include new sources for program faults. 
This paper introduces a candidate fault model, along with associated testing criteria, for AOPs based on interactions that are unique to AOPs. The paper also identifies key issues relevant to the systematic testing of AOPs.", "title": "" }, { "docid": "e089c8d35bd77e1947d11207a7905617", "text": "Real-time monitoring of groups and their rich contexts will be a key building block for futuristic, group-aware mobile services. In this paper, we propose GruMon, a fast and accurate group monitoring system for dense and complex urban spaces. GruMon meets the performance criteria of precise group detection at low latencies by overcoming two critical challenges of practical urban spaces, namely (a) the high density of crowds, and (b) the imprecise location information available indoors. Using a host of novel features extracted from commodity smartphone sensors, GruMon can detect over 80% of the groups, with 97% precision, using 10 minutes latency windows, even in venues with limited or no location information. Moreover, in venues where location information is available, GruMon improves the detection latency by up to 20% using semantic information and additional sensors to complement traditional spatio-temporal clustering approaches. We evaluated GruMon on data collected from 258 shopping episodes from 154 real participants, in two large shopping complexes in Korea and Singapore. We also tested GruMon on a large-scale dataset from an international airport (containing ≈37K+ unlabelled location traces per day) and a live deployment at our university, and showed both GruMon's potential performance at scale and various scalability challenges for real-world dense environment deployments.", "title": "" }, { "docid": "ea525c15c1cbb4a4a716e897287fd770", "text": "This study explored student teachers’ cognitive presence and learning achievements by integrating the SOP Model in which self-study (S), online group discussion (O) and double-stage presentations (P) were implemented in the flipped classroom. The research was conducted at a university in Taiwan with 31 student teachers. Preand post-worksheets measuring knowledge of educational issues were administered before and after group discussion. Quantitative content analysis and behavior sequential analysis were used to evaluate cognitive presence, while a paired-samples t-test analyzed learning achievement. The results showed that the participants had the highest proportion of “Exploration,” the second largest rate of “Integration,” but rarely reached “Resolution.” The participants’ achievements were greatly enhanced using the SOP Model in terms of the scores of the preand post-worksheets. Moreover, the groups with a higher proportion of “Integration” (I) and “Resolution” (R) performed best in the post-worksheets and were also the most progressive groups. Both highand low-rated groups had significant correlations between the “I” and “R” phases, with “I”  “R” in the low-rated groups but “R”  “I” in the high-rated groups. The instructional design of the SOP Model can be a reference for future pedagogical implementations in the higher educational context.", "title": "" } ]
scidocsrr
203cbc65bfaa66d7bfeba057b434cbbf
Anomaly detection in online social networks
[ { "docid": "06860bf1ede8dfe83d3a1b01fe4df835", "text": "The Internet and computer networks are exposed to an increasing number of security threats. With new types of attacks appearing continually, developing flexible and adaptive security oriented approaches is a severe challenge. In this context, anomaly-based network intrusion detection techniques are a valuable technology to protect target systems and networks against malicious activities. However, despite the variety of such methods described in the literature in recent years, security tools incorporating anomaly detection functionalities are just starting to appear, and several important problems remain to be solved. This paper begins with a review of the most well-known anomaly-based intrusion detection techniques. Then, available platforms, systems under development and research projects in the area are presented. Finally, we outline the main challenges to be dealt with for the wide scale deployment of anomaly-based intrusion detectors, with special emphasis on assessment issues. a 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "2c6332afec6a2c728041e0325a27fcbf", "text": "Today’s social networks are plagued by numerous types of malicious profiles which can range from socialbots to sexual predators. We present a novel method for the detection of these malicious profiles by using the social network’s own topological features only. Reliance on these features alone ensures that the proposed method is generic enough to be applied on a range of social networks. The algorithm has been evaluated on several social networks and was found to be effective in detecting various types of malicious profiles. We believe this method is a valuable step in the increasing battle against social network spammers, socialbots, and sexual predictors.", "title": "" } ]
[ { "docid": "968ee8726afb8cc82d629ac8afabf3db", "text": "Online communities are increasingly important to organizations and the general public, but there is little theoretically based research on what makes some online communities more successful than others. In this article, we apply theory from the field of social psychology to understand how online communities develop member attachment, an important dimension of community success. We implemented and empirically tested two sets of community features for building member attachment by strengthening either group identity or interpersonal bonds. To increase identity-based attachment, we gave members information about group activities and intergroup competition, and tools for group-level communication. To increase bond-based attachment, we gave members information about the activities of individual members and interpersonal similarity, and tools for interpersonal communication. Results from a six-month field experiment show that participants’ visit frequency and self-reported attachment increased in both conditions. Community features intended to foster identity-based attachment had stronger effects than features intended to foster bond-based attachment. Participants in the identity condition with access to group profiles and repeated exposure to their group’s activities visited their community twice as frequently as participants in other conditions. The new features also had stronger effects on newcomers than on old-timers. This research illustrates how theory from the social science literature can be applied to gain a more systematic understanding of online communities and how theory-inspired features can improve their success. 1", "title": "" }, { "docid": "a77336cc767ca49479d2704942fe3578", "text": "UNLABELLED\nA longitudinal field experiment was carried out over a period of 2 weeks to examine the influence of product aesthetics and inherent product usability. A 2 × 2 × 3 mixed design was used in the study, with product aesthetics (high/low) and usability (high/low) being manipulated as between-subjects variables and exposure time as a repeated-measures variable (three levels). A sample of 60 mobile phone users was tested during a multiple-session usability test. A range of outcome variables was measured, including performance, perceived usability, perceived aesthetics and emotion. A major finding was that the positive effect of an aesthetically appealing product on perceived usability, reported in many previous studies, began to wane with increasing exposure time. The data provided similar evidence for emotion, which also showed changes as a function of exposure time. The study has methodological implications for the future design of usability tests, notably suggesting the need for longitudinal approaches in usability research.\n\n\nPRACTITIONER SUMMARY\nThis study indicates that product aesthetics influences perceived usability considerably in one-off usability tests but this influence wanes over time. When completing a usability test it is therefore advisable to adopt a longitudinal multiple-session approach to reduce the possibly undesirable influence of aesthetics on usability ratings.", "title": "" }, { "docid": "7077a80ec214dd78ebc7aeedd621d014", "text": "Malicious URL, a.k.a. malicious website, is a common and serious threat to cybersecurity. Malicious URLs host unsolicited content (spam, phishing, drive-by exploits, etc.) 
and lure unsuspecting users to become victims of scams (monetary loss, theft of private information, and malware installation), and cause losses of billions of dollars every year. It is imperative to detect and act on such threats in a timely manner. Traditionally, this detection is done mostly through the usage of blacklists. However, blacklists cannot be exhaustive, and lack the ability to detect newly generated malicious URLs. To improve the generality of malicious URL detectors, machine learning techniques have been explored with increasing attention in recent years. This article aims to provide a comprehensive survey and a structural understanding of Malicious URL Detection techniques using machine learning. We present the formal formulation of Malicious URL Detection as a machine learning task, and categorize and review the contributions of literature studies that addresses different dimensions of this problem (feature representation, algorithm design, etc.). Further, this article provides a timely and comprehensive survey for a range of different audiences, not only for machine learning researchers and engineers in academia, but also for professionals and practitioners in cybersecurity industry, to help them understand the state of the art and facilitate their own research and practical applications. We also discuss practical issues in system design, open research challenges, and point out some important directions for future research.", "title": "" }, { "docid": "afa3aba4f7edfecd4e632f856c2b7c01", "text": "Ruminants make efficient use of diets that are poor in true protein content because microbes in the rumen are able to synthesize a large proportion of the animal’s required protein. The amino acid (AA) pattern of this protein is of better quality than nearly all of the dietary ingredients commonly fed to domestic ruminants (Broderick, 1994; Schwab, 1996). In addition, ruminal microbial utilization of ammonia allows the feeding of nonprotein N (NPN) compounds, such as urea, as well as the capture of recycled urea N that would otherwise be excreted in the urine. Many studies have shown that lactating dairy cows use feed crude protein (CP; N x 6.25) more efficiently than other ruminant livestock. However, dairy cows still excrete 2-3 times more N in manure than they secrete in milk, even under conditions of optimal nutrition and management. Inefficient N utilization necessitates feeding supplemental protein, increasing milk production costs and contributing to environmental N pollution. One of our major objectives in protein nutrition of lactating ruminants must be to maximize ruminal formation of this high quality microbial protein and minimize feeding of costly protein supplements under all feeding regimes.", "title": "" }, { "docid": "1a65b9d35bce45abeefe66882dcf4448", "text": "Data is nowadays an invaluable resource, indeed it guides all business decisions in most of the computer-aided human activities. Threats to data integrity are thus of paramount relevance, as tampering with data may maliciously affect crucial business decisions. This issue is especially true in cloud computing environments, where data owners cannot control fundamental data aspects, like the physical storage of data and the control of its accesses. Blockchain has recently emerged as a fascinating technology which, among others, provides compelling properties about data integrity. 
Using the blockchain to face data integrity threats seems to be a natural choice, but its current limitations of low throughput, high latency, and weak stability hinder the practical feasibility of any blockchain-based solutions. In this paper, by focusing on a case study from the European SUNFISH project, which concerns the design of a secure by-design cloud federation platform for the public sector, we precisely delineate the actual data integrity needs of cloud computing environments and the research questions to be tackled to adopt blockchain-based databases. First, we detail the open research questions and the difficulties inherent in addressing them. Then, we outline a preliminary design of an effective blockchain-based database for cloud computing environments.", "title": "" }, { "docid": "a24aef41aef5070575b4814e191f92cb", "text": "1 Parallel Evolution in Science As we survey the evolution of modern science, we find the remarkable phenomenon that similar general conceptions and viewpoints have evolved independently in the various branches of science, and to begin with these may be indicated as follows: in the past centuries, science tried to explain phenomena by reducing them to an interplay of elementary units which could be investigated independently of each other. In contemporary modern science, we find in all fields conceptions of what is rather vaguely termed ‘wholeness.’ It was the aim of classical physics eventually to resolve all natural phenomena into a play of elementary units, the characteristics of which remain unaltered whether they are investigated in isolation or in a complex. The expression of this conception is the ideal of the Laplacean spirit, which resolves the world into an aimless play of atoms, governed by the laws of nature. This conception was not changed but rather strengthened when deterministic laws were replaced by statistical laws in Boltzmann’s derivation of the second principle of thermodynamics. Physical laws appeared to be essentially ‘laws of disorder,’ a statistical result of unordered and fortuitous events. In contrast, the basic problems in modern physics are problems of organisation. Problems of this kind present themselves in atomic physics, in structural chemistry, in crystallography, and so forth. In microphysics, it becomes impossible to resolve phenomena into local events, as is shown by the Heisenberg relation and in quantum mechanics. Corresponding to the procedure in physics, the attempt has been made in biology to resolve the phenomena of life into parts and processes which could be investigated in isolation. This procedure is essentially the same in the various branches of biology. The organism is considered to be an aggregate of cells as elementary life-units, its activities are resolved into functions of isolated organs and finally physico-chemical processes, its behaviour into reflexes, the material substratum of heredity into genes, acting independently of each other, phylogenetic evolution into single fortuitous mutations, and so on. As opposed to the analytical, summative and machine [135]theoretical viewpoints, organismic conceptions1 have evolved in all branches of modern biology which assert the necessity of investigating not only parts but also relations of organisation resulting from a dynamic interaction and manifesting themselves by the difference in behaviour of parts in isolation and in the whole organism. 
The development in medicine follows a similar pattern.2 Virchow’s programme of ‘cellular pathology,’ claiming to resolve disease into functional disturbances of cells, is to be supplemented by the consideration of the organism-as-a-whole, as it appears clearly in such fields as theory of human constitutions, endocrinology, physical medicine and psychotherapy. Again we find the same trend in psychology. Classical association psychology tried to resolve mental phenomena into elementary units, sensations and the like, psychological atoms, as it were. Gestalt psychology has demonstrated the existence and primacy of psychological entities, which are not a simple summation of elementary units, and are governed by dynamical laws.", "title": "" }, { "docid": "3ca7b7b8e07eb5943d6ce2acf9a6fa82", "text": "Excessive heat generation and occurrence of partial discharge have been observed in end-turn stress grading (SG) system in form-wound machines under PWM voltage. In this paper, multi-winding stress grading (SG) system is proposed as a method to change resistance of SG per length. Although the maximum field at the edge of stator and CAT are in a trade-off relationship, analytical results suggest that we can suppress field and excessive heat generation at both stator and CAT edges by multi-winding of SG and setting the length of CAT appropriately. This is also experimentally confirmed by measuring potential distribution of model bar-coil and observing partial discharge and temperature rise.", "title": "" }, { "docid": "5fc6b0e151762560c8f09d0fe6983ca2", "text": "The increasing popularity of wearable devices that continuously capture video, and the prevalence of third-party applications that utilize these feeds have resulted in a new threat to privacy. In many situations, sensitive objects/regions are maliciously (or accidentally) captured in a video frame by third-party applications. However, current solutions do not allow users to specify and enforce fine grained access control over video feeds.\n In this paper, we describe MarkIt, a computer vision based privacy marker framework, that allows users to specify and enforce fine grained access control over video feeds. We present two example privacy marker systems -- PrivateEye and WaveOff. We conclude with a discussion of the computer vision, privacy and systems challenges in building a comprehensive system for fine grained access control over video feeds.", "title": "" }, { "docid": "5a4a75fbaef6e7760320502a583954bf", "text": "Policy decisions at the organizational, corporate, and governmental levels should be more heavily influenced by issues related to well-being-people's evaluations and feelings about their lives. Domestic policy currently focuses heavily on economic outcomes, although economic indicators omit, and even mislead about, much of what society values. We show that economic indicators have many shortcomings, and that measures of well-being point to important conclusions that are not apparent from economic indicators alone. For example, although economic output has risen steeply over the past decades, there has been no rise in life satisfaction during this period, and there has been a substantial increase in depression and distrust. We argue that economic indicators were extremely important in the early stages of economic development, when the fulfillment of basic needs was the main issue. 
As societies grow wealthy, however, differences in well-being are less frequently due to income, and are more frequently due to factors such as social relationships and enjoyment at work. Important noneconomic predictors of the average levels of well-being of societies include social capital, democratic governance, and human rights. In the workplace, noneconomic factors influence work satisfaction and profitability. It is therefore important that organizations, as well as nations, monitor the well-being of workers, and take steps to improve it. Assessing the well-being of individuals with mental disorders casts light on policy problems that do not emerge from economic indicators. Mental disorders cause widespread suffering, and their impact is growing, especially in relation to the influence of medical disorders, which is declining. Although many studies now show that the suffering due to mental disorders can be alleviated by treatment, a large proportion of persons with mental disorders go untreated. Thus, a policy imperative is to offer treatment to more people with mental disorders, and more assistance to their caregivers. Supportive, positive social relationships are necessary for well-being. There are data suggesting that well-being leads to good social relationships and does not merely follow from them. In addition, experimental evidence indicates that people suffer when they are ostracized from groups or have poor relationships in groups. The fact that strong social relationships are critical to well-being has many policy implications. For instance, corporations should carefully consider relocating employees because doing so can sever friendships and therefore be detrimental to well-being. Desirable outcomes, even economic ones, are often caused by well-being rather than the other way around. People high in well-being later earn higher incomes and perform better at work than people who report low well-being. Happy workers are better organizational citizens, meaning that they help other people at work in various ways. Furthermore, people high in well-being seem to have better social relationships than people low in well-being. For example, they are more likely to get married, stay married, and have rewarding marriages. Finally, well-being is related to health and longevity, although the pathways linking these variables are far from fully understood. Thus, well-being not only is valuable because it feels good, but also is valuable because it has beneficial consequences. This fact makes national and corporate monitoring of well-being imperative. In order to facilitate the use of well-being outcomes in shaping policy, we propose creating a national well-being index that systematically assesses key well-being variables for representative samples of the population. Variables measured should include positive and negative emotions, engagement, purpose and meaning, optimism and trust, and the broad construct of life satisfaction. A major problem with using current findings on well-being to guide policy is that they derive from diverse and incommensurable measures of different concepts, in a haphazard mix of respondents. Thus, current findings provide an interesting sample of policy-related findings, but are not strong enough to serve as the basis of policy. 
Periodic, systematic assessment of well-being will offer policymakers a much stronger set of findings to use in making policy decisions.", "title": "" }, { "docid": "50389f4ec27cf68af999ee33c3210edf", "text": "Rising water temperature associated with climate change is increasingly recognized as a potential stressor for aquatic organisms, particularly for tropical ectotherms that are predicted to have narrow thermal windows relative to temperate ectotherms. We used intermittent flow resting and swimming respirometry to test for effects of temperature increase on aerobic capacity and swim performance in the widespread African cichlid Pseudocrenilabrus multicolor victoriae, acclimated for a week to a range of temperatures (2°C increments) between 24 and 34°C. Standard metabolic rate (SMR) increased between 24 and 32°C, but fell sharply at 34°C, suggesting either an acclimatory reorganization of metabolism or metabolic rate depression. Maximum metabolic rate (MMR) was elevated at 28 and 30°C relative to 24°C. Aerobic scope (AS) increased between 24 and 28°C, then declined to a level comparable to 24°C, but increased dramatically 34°C, the latter driven by the drop in SMR in the warmest treatment. Critical swim speed (Ucrit) was highest at intermediate temperature treatments, and was positively related to AS between 24 and 32°C; however, at 34°C, the increase in AS did not correspond to an increase in Ucrit, suggesting a performance cost at the highest temperature.", "title": "" }, { "docid": "0d41fcc5ea57e42c87b4a3152d50f9d2", "text": "This paper is concerned with a continuous-time mean-variance portfolio selection model that is formulated as a bicriteria optimization problem. The objective is to maximize the expected terminal return and minimize the variance of the terminal wealth. By putting weights on the two criteria one obtains a single objective stochastic control problem which is however not in the standard form due to the variance term involved. It is shown that this nonstandard problem can be “embedded” into a class of auxiliary stochastic linear-quadratic (LQ) problems. The stochastic LQ control model proves to be an appropriate and effective framework to study the mean-variance problem in light of the recent development on general stochastic LQ problems with indefinite control weighting matrices. This gives rise to the efficient frontier in a closed form for the original portfolio selection problem.", "title": "" }, { "docid": "68dc61e0c6b33729f08cdd73e8e86096", "text": "Many important data analysis applications present with severely imbalanced datasets with respect to the target variable. A typical example is medical image analysis, where positive samples are scarce, while performance is commonly estimated against the correct detection of these positive examples. We approach this challenge by formulating the problem as anomaly detection with generative models. We train a generative model without supervision on the ‘negative’ (common) datapoints and use this model to estimate the likelihood of unseen data. A successful model allows us to detect the ‘positive’ case as low likelihood datapoints. In this position paper, we present the use of state-of-the-art deep generative models (GAN and VAE) for the estimation of a likelihood of the data. Our results show that on the one hand both GANs and VAEs are able to separate the ‘positive’ and ‘negative’ samples in the MNIST case. 
On the other hand, for the NLST case, neither GANs nor VAEs were able to capture the complexity of the data and discriminate anomalies at the level that this task requires. These results show that even though there are a number of successes presented in the literature for using generative models in similar applications, there remain further challenges for broad successful implementation.", "title": "" }, { "docid": "c08e33f44b8e27529385b1557906dc81", "text": "A key challenge in wireless cognitive radio networks is to maximize the total throughput also known as the sum rates of all the users while avoiding the interference of unlicensed band secondary users from overwhelming the licensed band primary users. We study the weighted sum rate maximization problem with both power budget and interference temperature constraints in a cognitive radio network. This problem is nonconvex and generally hard to solve. We propose a reformulation-relaxation technique that leverages nonnegative matrix theory to first obtain a relaxed problem with nonnegative matrix spectral radius constraints. A useful upper bound on the sum rates is then obtained by solving a convex optimization problem over a closed bounded convex set. It also enables the sum-rate optimality to be quantified analytically through the spectrum of specially-crafted nonnegative matrices. Furthermore, we obtain polynomial-time verifiable sufficient conditions that can identify polynomial-time solvable problem instances, which can be solved by a fixed-point algorithm. As a by-product, an interesting optimality equivalence between the nonconvex sum rate problem and the convex max-min rate problem is established. In the general case, we propose a global optimization algorithm by utilizing our convex relaxation and branch-and-bound to compute an ε-optimal solution. Our technique exploits the nonnegativity of the physical quantities, e.g., channel parameters, powers and rates, that enables key tools in nonnegative matrix theory such as the (linear and nonlinear) Perron-Frobenius theorem, quasi-invertibility, Friedland-Karlin inequalities to be employed naturally. Numerical results are presented to show that our proposed algorithms are theoretically sound and have relatively fast convergence time even for large-scale problems", "title": "" }, { "docid": "a9dbb873487081afcc2a24dd7cb74bfe", "text": "We presented the first single block collision attack on MD5 with complexity of 2 MD5 compressions and posted the challenge for another completely new one in 2010. Last year, Stevens presented a single block collision attack to our challenge, with complexity of 2 MD5 compressions. We really appreciate Stevens’s hard work. However, it is a pity that he had not found even a better solution than our original one, let alone a completely new one and the very optimal solution that we preserved and have been hoping that someone can find it, whose collision complexity is about 2 MD5 compressions. In this paper, we propose a method how to choose the optimal input difference for generating MD5 collision pairs. First, we divide the sufficient conditions into two classes: strong conditions and weak conditions, by the degree of difficulty for condition satisfaction. Second, we prove that there exist strong conditions in only 24 steps (one and a half rounds) under specific conditions, by utilizing the weaknesses of compression functions of MD5, which are difference inheriting and message expanding. 
Third, there should be no difference scaling after state word q25 so that it can result in the least number of strong conditions in each differential path; in this way we deduce the distribution of strong conditions for each input difference pattern. Finally, we choose the input difference with the least number of strong conditions and the largest number of free message words. We implement the most efficient 2-block MD5 collision attack, which needs only about 2 MD5 compressions to find a collision pair, and show a single-block collision attack with complexity 2.", "title": "" }, { "docid": "efc341c0a3deb6604708b6db361bfba5", "text": "In recent years, data analysis has become important with increasing data volume. Clustering, which groups objects according to their similarity, has an important role in data analysis. DBSCAN is one of the most effective and popular density-based clustering algorithms and has been successfully implemented in many areas. However, it is a challenging task to determine the input parameter values of the DBSCAN algorithm, which are the neighborhood radius Eps and the minimum number of points MinPts. The values of these parameters significantly affect the clustering performance of the algorithm. In this study, we propose the AE-DBSCAN algorithm, which includes a new method to determine the value of the neighborhood radius Eps automatically. The experimental evaluations showed that the proposed method outperformed the classical method.", "title": "" }, { "docid": "821cf807af74612ae3377a7651752ff9", "text": "This paper proposes a contactless measurement scheme using LIDAR (Light Detection And Ranging) and the modeling of human body movement for a personal mobility interface that includes twisting motion. We have already proposed the saddle type human body motion interface. This interface uses not only conventional translational human motions but also twisting motion, namely, it makes full use of the human motion characteristics. The mechanism of the interface consists of the saddle and a universal joint connecting the saddle and the personal mobility, and it traces the motion of the loins. Due to these features, the proposed interface shows a potential to realize intuitive operation in the basic experiment. However, some problems have remained: the height of the saddle should be adjusted for the user's height before riding on the PMV (Personal Mobility Vehicle), and there is play between the saddle and the buttocks of the user, as well as backlash in the saddle mechanism. These problems prevent small human motions from being measured. This paper, therefore, proposes the contactless measurement using LIDAR and discusses the fitting methods from measured data points to human body movement.", "title": "" }, { "docid": "0f5511aaed3d6627671a5e9f68df422a", "text": "As people document more of their lives online, some recent systems are encouraging people to later revisit those recordings, a practice we're calling technology-mediated reflection (TMR). Since we know that unmediated reflection benefits psychological well-being, we explored whether and how TMR affects well-being. We built Echo, a smartphone application for recording everyday experiences and reflecting on them later. We conducted three system deployments with 44 users who generated over 12,000 recordings and reflections. We found that TMR improves well-being as assessed by four psychological metrics. By analyzing the content of these entries we discovered two mechanisms that explain this improvement.
We also report benefits of very long-term TMR.", "title": "" }, { "docid": "ae0d8d1dec27539502cd7e3030a3fe42", "text": "The KL divergence is the most commonly used measure for comparing query and document language models in the language modeling framework for ad hoc retrieval. Since KL is rank equivalent to a specific weighted geometric mean, we examine alternative weighted means for language-model comparison, as well as alternative divergence measures. The study includes analysis of the inverse document frequency (IDF) effect of the language-model comparison methods. Empirical evaluation, performed with different types of queries (short and verbose) and query-model induction approaches, shows that there are methods that often outperform the KL divergence in some settings.", "title": "" }, { "docid": "fa818e3e2e88ef83e592cab1d5a1a1eb", "text": "This paper presents a literature review on the use of depth for hand tracking and gesture recognition. The survey examines 37 papers describing depth-based gesture recognition systems in terms of (1) the hand localization and gesture classification methods developed and used, (2) the applications where gesture recognition has been tested, and (3) the effects of the low-cost Kinect and OpenNI software libraries on gesture recognition research. The survey is organized around a novel model of the hand gesture recognition process. In the reviewed literature, 13 methods were found for hand localization and 11 were found for gesture classification. 24 of the papers included real-world applications to test a gesture recognition system, but only 8 application categories were found (and three applications accounted for 18 of the papers). The papers that use the Kinect and the OpenNI libraries for hand tracking tend to focus more on applications than on localization and classification methods, and show that the OpenNI hand tracking method is good enough for the applications tested thus far. However, the limitations of the Kinect and other depth sensors for gesture recognition have yet to be tested in challenging applications and environments.", "title": "" }, { "docid": "63cef4e93184c865e0d42970ca9de9db", "text": "Numerous applications such as stock market or medical information systems require that both historical and current data be logically integrated into a temporal database. The underlying access method must support different forms of “time-travel” queries, the migration of old record versions onto inexpensive archive media, and high insertion and update rates. This paper presents an access method for transaction-time temporal data, called the log-structured history data access method (LHAM) that meets these demands. The basic principle of LHAM is to partition the data into successive components based on the timestamps of the record versions. Components are assigned to different levels of a storage hierarchy, and incoming data is continuously migrated through the hierarchy. The paper discusses the LHAM concepts, including concurrency control and recovery, our full-fledged LHAM implementation, and experimental performance results based on this implementation. A detailed comparison with the TSB-tree, both analytically and based on experiments with real implementations, shows that LHAM is highly superior in terms of insert performance, while query performance is in almost all cases at least as good as for the TSB-tree; in many cases it is much better.", "title": "" } ]
scidocsrr
5a7b8d50a7c9c5ad5e3fd04e59d0b3a8
Methods for Reconstructing Causal Networks from Observed Time-Series: Granger-Causality, Transfer Entropy, and Convergent Cross-Mapping
[ { "docid": "800cabf6fbdf06c1f8fc6b65f503e13e", "text": "An information theoretic measure is derived that quantifies the statistical coherence between systems evolving in time. The standard time delayed mutual information fails to distinguish information that is actually exchanged from shared information due to common history and input signals. In our new approach, these influences are excluded by appropriate conditioning of transition probabilities. The resulting transfer entropy is able to distinguish effectively driving and responding elements and to detect asymmetry in the interaction of subsystems.", "title": "" } ]
[ { "docid": "4d14e2a47d68b6113466b1e096c924ee", "text": "In this paper, we experimentally realized a steering antenna using a type of active metamaterial with tunable refractive index. The metamaterial is realized by periodically printed subwavelength metallic resonant patterns with embedded microwave varactors. The effective refractive index can be controlled by low direct-current (dc) bias voltage applied to the varactors. In-phase electromagnetic waves transmitting in different zones of such metamaterial slab experience different phase delays, and, consequently, the output direction of the transmitted wave can be steered with progressive phase shift along the interface. This antenna has a simple structure, is very easy to configure the beam direction, and has a low cost. Compared with conventional phased-array antennas, the radome approach has more flexibility to operate with different feeding antennas for various applications.", "title": "" }, { "docid": "82c0292aa7717aaef617927eb83e07bd", "text": "Deutsch, Feynman, and Manin viewed quantum computing as a kind of universal physical simulation procedure. Much of the writing about quantum Turing machines has shown how these machines can simulate an arbitrary unitary transformation on a finite number of qubits. This interesting problem has been addressed most famously in a paper by Deutsch, and later by Bernstein and Vazirani. Quantum Turing machines form a class closely related to deterministic and probabilistic Turing machines and one might hope to find a universal machine in this class. A universal machine is the basis of a notion of programmability. The extent to which universality has in fact been established by the pioneers in the field is examined and a key notion in theoretical computer science (universality) is scrutinised. In a forthcoming paper, the authors will also consider universality in the quantum gate model.", "title": "" }, { "docid": "6c10d03fa49109182c95c36debaf06cc", "text": "Visual versus near infrared (VIS-NIR) face image matching uses an NIR face image as the probe and conventional VIS face images as enrollment. It takes advantage of the NIR face technology in tackling illumination changes and low-light condition and can cater for more applications where the enrollment is done using VIS face images such as ID card photos. Existing VIS-NIR techniques assume that during classifier learning, the VIS images of each target people have their NIR counterparts. However, since corresponding VIS-NIR image pairs of the same people are not always available, which is often the case, so those methods cannot be applied. To address this problem, we propose a transductive method named transductive heterogeneous face matching (THFM) to adapt the VIS-NIR matching learned from training with available image pairs to all people in the target set. In addition, we propose a simple feature representation for effective VIS-NIR matching, which can be computed in three steps, namely Log-DoG filtering, local encoding, and uniform feature normalization, to reduce heterogeneities between VIS and NIR images. The transduction approach can reduce the domain difference due to heterogeneous data and learn the discriminative model for target people simultaneously. To the best of our knowledge, it is the first attempt to formulate the VIS-NIR matching using transduction to address the generalization problem for matching. 
Experimental results validate the effectiveness of our proposed method on the heterogeneous face biometric databases.", "title": "" }, { "docid": "604619dd5f23569eaff40eabc8e94f52", "text": "Understanding the causes and effects of species invasions is a priority in ecology and conservation biology. One of the crucial steps in evaluating the impact of invasive species is to map changes in their actual and potential distribution and relative abundance across a wide region over an appropriate time span. While direct and indirect remote sensing approaches have long been used to assess the invasion of plant species, the distribution of invasive animals is mainly based on indirect methods that rely on environmental proxies of conditions suitable for colonization by a particular species. The aim of this article is to review recent efforts in the predictive modelling of the spread of both plant and animal invasive species using remote sensing, and to stimulate debate on the potential use of remote sensing in biological invasion monitoring and forecasting. Specifically, the challenges and drawbacks of remote sensing techniques are discussed in relation to: i) developing species distribution models, and ii) studying life cycle changes and phenological variations. Finally, the paper addresses the open challenges and pitfalls of remote sensing for biological invasion studies including sensor characteristics, upscaling and downscaling in species distribution models, and uncertainty of results.", "title": "" }, { "docid": "6b1bee85de8d95896636bd4e13a69156", "text": "Intrinsically motivated spontaneous exploration is a key enabler of autonomous lifelong learning in human children. It allows them to discover and acquire large repertoires of skills through self-generation, self-selection, self-ordering and self-experimentation of learning goals. We present the unsupervised multi-goal reinforcement learning formal framework as well as an algorithmic approach called intrinsically motivated goal exploration processes (IMGEP) to enable similar properties of autonomous learning in machines. The IMGEP algorithmic architecture relies on several principles: 1) self-generation of goals as parameterized reinforcement learning problems; 2) selection of goals based on intrinsic rewards; 3) exploration with parameterized time-bounded policies and fast incremental goal-parameterized policy search; 4) systematic reuse of information acquired when targeting a goal for improving other goals. We present a particularly efficient form of IMGEP that uses a modular representation of goal spaces as well as intrinsic rewards based on learning progress. We show how IMGEPs automatically generate a learning curriculum within an experimental setup where a real humanoid robot can explore multiple spaces of goals with several hundred continuous dimensions. While no particular target goal is provided to the system beforehand, this curriculum allows the discovery of skills of increasing complexity, that act as stepping stone for learning more complex skills (like nested tool use). We show that learning several spaces of diverse problems can be more efficient for learning complex skills than only trying to directly learn these complex skills. 
We illustrate the computational efficiency of IMGEPs as these robotic experiments use a simple memory-based low-level policy representations and search algorithm, enabling the whole system to learn online and incrementally on a Raspberry Pi 3.", "title": "" }, { "docid": "8801d5a28a098e1879d60838c1c9f108", "text": "On-line photo sharing services allow users to share their touristic experiences. Tourists can publish photos of interesting locations or monuments visited, and they can also share comments, annotations, and even the GPS traces of their visits. By analyzing such data, it is possible to turn colorful photos into metadata-rich trajectories through the points of interest present in a city. In this paper we propose a novel algorithm for the interactive generation of personalized recommendations of touristic places of interest based on the knowledge mined from photo albums and Wikipedia. The distinguishing features of our approach are multiple. First, the underlying recommendation model is built fully automatically in an unsupervised way and it can be easily extended with heterogeneous sources of information. Moreover, recommendations are personalized according to the places previously visited by the user. Finally, such personalized recommendations can be generated very efficiently even on-line from a mobile device.", "title": "" }, { "docid": "1d8cd516cec4ef74d72fa283059bf269", "text": "Current high-quality object detection approaches use the same scheme: salience-based object proposal methods followed by post-classification using deep convolutional features. This spurred recent research in improving object proposal methods [18, 32, 15, 11, 2]. However, domain agnostic proposal generation has the principal drawback that the proposals come unranked or with very weak ranking, making it hard to trade-off quality for running time. Also, it raises the more fundamental question of whether high-quality proposal generation requires careful engineering or can be derived just from data alone. We demonstrate that learning-based proposal methods can effectively match the performance of hand-engineered methods while allowing for very efficient runtime-quality trade-offs. Using our new multi-scale convolutional MultiBox (MSC-MultiBox) approach, we substantially advance the state-of-the-art on the ILSVRC 2014 detection challenge data set, with 0.5 mAP for a single model and 0.52 mAP for an ensemble of two models. MSC-Multibox significantly improves the proposal quality over its predecessor Multibox [4] method: AP increases from 0.42 to 0.53 for the ILSVRC detection challenge. Finally, we demonstrate improved bounding-box recall compared to Multiscale Combinatorial Grouping [18] with less proposals on the Microsoft-COCO [14] data set.", "title": "" }, { "docid": "b658ff9f576136c12a14ebd9b8aff1d7", "text": "The user expectations for usability and personalization along with decreasing size of handheld devices challenge traditional keypad layout design. We have developed a method for on-line adaptation of a touch pad keyboard layout. The method starts from an original layout and monitors the usage of the keyboard by recording and analyzing the keystrokes. An on-line learning algorithm subtly moves the keys according to the spatial distribution of keystrokes. In consequence, the keyboard matches better to the users physical extensions and grasp of the device, and makes the physical trajectories during typing more comfortable. 
We present two implementations that apply different vector quantization algorithms to produce an adaptive keyboard with visual on-line feedback. Both qualitative and quantitative results show that the changes in the keyboard are consistent, and related to the user's handedness and hand extensions. The testees found the on-line personalization positive. The method can either be applied for on-line personalization of keyboards or for ergonomics research", "title": "" }, { "docid": "4aa9a5e6b1a3c69282edb61c951308d2", "text": "Grey parrots (Psittacus erithacus) solve various cognitive tasks and acquire and use English speech in ways that often resemble those of very young children. Given that the psittacine brain is organized very differently from that of mammals, these results have intriguing implications for the study and evolution of vocal learning, communication, and cognition. For 25 years, I have taught Grey parrots meaningful use of English speech (e.g., to label objects, colors, shapes, categories, quantities, and absence). Using this code, my oldest subject, Alex, exhibits cognitive capacities comparable to those of marine mammals, apes, and sometimes 4-year-old children (Pepperberg, 1999). Thus, his abilities are inferred not from operant tasks common in animal research, but from vocal responses to vocal questions; that is, he demonstrates intriguing communicative parallels with young humans, despite his evolutionary distance. I doubt I taught Alex and other parrots these abilities de novo; their achievements likely derive from existent cognitive and neurological architectures. My research therefore uses interspecies communication as an investigative tool to unveil avian communicative capacities and an avian perspective on the evolution of communication. SIGNIFICANCE OF INTERSPECIES COMMUNICATION Parrots’ vocal plasticity enables direct interspecies communication (Pepperberg, 1999). But why study their ability to use English rather than their natural system? The answer involves their existent cognitive architecture. I believe parrots acquire those elements of human communication that can be mapped or adapted to their own code. By observing what is or is not acquired, I uncover these elements and interpret the avian system. I believe parrots could not learn aspects of reference (e.g., labels for particular colors, object classes such as “apple”) unless their natural code had such referentiality. Although this manner of determining nonhuman referentiality is inferential, direct determination also has difficulties (see Cheney & Seyfarth, 1992). Moreover, pushing avian systems to see what input engenders exceptional learning (i.e., learning that does not necessarily occur during normal development—in this case, acquiring another species’ code) further elucidates learning processes: Because richer input is needed for a bird to learn another species’ code (allospecific acquisition) than for it to learn its own species’ code (conspecific learning) (Pepperberg, 1999), this line of research can show how and whether “nurture” modifies “nature” (e.g., alters innate predispositions toward conspecific learning), and thus uncover additional mechanisms for, and the extent of, communicative learning. Again, these mechanisms are likely part of existent cognitive architectures, not taught de novo. Interspecies communication also has practical applications. 
It is a tool that (a) directly states question content—animals need not determine both a query’s aim and the answer via trial and error; (b) exploits research showing that social animals may respond more readily and accurately within ecologically valid social contexts than in other situations; (c) facilitates data comparisons among species, including humans; (d) allows rigorous testing of the acquired communication code that avoids expectation cuing (i.e., subjects must choose responses from their entire repertoire; they cannot expect the answer to come from a subset of choices relevant only to the topic under question); and, most important, (e) is also an open, arbitrary, creative code with enormous signal variety, enabling animals to respond in novel, possibly innovative ways that demonstrate greater competence than operant paradigms’ required responses, and (f) thereby allows examination of the nature and extent of information animals perceive. Interspecies communication facilely demonstrates nonhumans’ inherent capacities and may enable complex learning (Pepperberg, 1999). HOW GREYS LEARN: PARALLELS WITH HUMANS My Greys’ learning sometimes parallels human processes, suggesting insights into how acquisition of complex communication may have evolved. Referential, contextually applicable (functional), and socially rich input allows parrots, like young children, to acquire communication skills effectively (Pepperberg, 1999). Reference is an utterance’s meaning—the relationship between labels and objects to which they refer. Thus, in my research, utterances have reference because the birds are rewarded by being given the objects they label. Context (function) involves the situation in which an utterance is used and effects of its use. The utterances also are functional because they initially are used—and responded to—as requests; this initial use of labels as requests gives birds a reason to learn sounds constituting English labels. Social interaction, which is integral to the research, accents environmental components, emphasizes common attributes—and possible underlying rules—of diverse actions, and allows continuous adjustment of input to learners’ levels. Interaction engages subjects directly, provides contextual explanations for actions, and demonstrates actions’ consequences. In this section, I describe the primary training technique, then experiments my students and I have conducted to determine which input elements are necessary and sufficient to engender learning. Model/Rival Training My model/rival (M/R) training system (background in Pepperberg, 1999) uses three-way social interactions among two humans and a parrot to demonstrate targeted vocal behavior. The parrot observes two humans handling one or more objects, then watches the humans interact: The trainer presents, and queries the human model about, the item (or multiple items) (e.g., “What’s here?” “What color?”) and praises the model and gives him or her the object (or objects) as a referential reward for answers that are correct. Incorrect responses (like the bird may make) are punished by scolding and temporarily removing the item (or items) from sight. Thus, the second human is a model for the parrot’s responses, is its rival for the trainer’s attention, and also illustrates the consequences of making an error: The model is asked to try again or talk more clearly if the response was (deliberately) incorrect or garbled, so the method demonstrates corrective feedback. 
The bird is also queried and initially rewarded for approximations to “correct” responses. As training progresses, the criteria for what constitutes a correct response become increasingly strict; thus, training is adjusted to the parrot’s level. Unlike other laboratories’ M/R procedures (see Pepperberg, 1999), ours interchanges the roles of trainer and model, and includes the parrot in interactions, to emphasize that one being is not always the questioner and the other the respondent, and that the procedure can effect environmental change. Role reversal also counteracts an earlier methodological problem: Birds whose trainers always maintained their respective roles responded only to the human questioner. Our birds, however, respond to, interact with, and learn from all humans. M/R training exclusively uses intrinsic reinforcers: To ensure the closest possible correlations of labels or concepts to be learned with their appropriate referents, we reward a bird for uttering “X” by giving the bird X (i.e., the object to which the label or concept refers). Earlier unsuccessful programs for teaching birds to communicate with humans used extrinsic rewards (Pepperberg, 1999): The reward was one food that neither related to, nor varied with, the label or concept being taught. Use of extrinsic rewards delays label and concept acquisition because it confounds the label of the targeted exemplar or concept with that of the food. My birds never receive extrinsic rewards. Because Alex sometimes fails to focus on targeted objects, we trained him to say, “I want X” (i.e., to separate labeling and requesting; see Pepperberg, 1999), in order to request the reward he wants. That is, if he identifies something correctly, his reward can be the right to request something more desirable than what he has identified. This procedure provides flexibility but maintains referentiality. Thus, to receive X after identifying Y, Alex must state, “I want X,” and trainers will not comply until the original identification task involving Y is completed. His labels therefore are true identifiers, not merely emotional requests. Adding “want” provides additional advantages: First, trainers can distinguish incorrect labeling from appeals for other items, particularly during testing, when birds unable to use “want” might misidentify objects not because they do not know the correct label but because they are asking for treats, and their performance might reflect a lack of accuracy unrelated to their actual competence. Second, birds may demonstrate low-level intentionality: Alex rarely accepts substitutes when requesting X, and continues his demands (see Pepperberg, 1999), thus showing that he truly intends to obtain X when he says “want X.” Eliminating Aspects of Input M/R training with Alex successfully demonstrated that reference, functionality, and social interaction during training enabled label and concept acquisition, but not which or how many of these elements were necessary, sufficient, or both. What would happen if some of these elements were lacking from the input? Answering that question required training and testing additional parrots, because Alex might cease learning after a change in training merely because there was a change, not necessarily because of the type of change. 
With 3 new naive Greys—Kyaaro, Alo, and Griffin—students and I performed seven sets of experiments (see Pepperberg, 1999; Pepperberg, Sandefer, Noel, & Ellsworth, 2000) to test the relative importance of reference, functionality, and social interaction in training. In the first set of experiments, we compared simultan", "title": "" }, { "docid": "8e92ade2f4096cbfabd51e018138c2f6", "text": "Recent results by Martin et al. (2014) showed in 3D SPH simulations that tilted discs in binary systems can be unstable to the development of global, damped Kozai–Lidov (KL) oscillations in which the discs exchange tilt for eccentricity. We investigate the linear stability of KL modes for tilted inviscid discs under the approximations that the disc eccentricity is small and the disc remains flat. By using 1D equations, we are able to probe regimes of large ratios of outer to inner disc edge radii that are realistic for binary systems of hundreds of AU separations and are not easily probed by multidimensional simulations. For order unity binary mass ratios, KL instability is possible for a window of disc aspect ratios H/r in the outer parts of a disc that roughly scale as (nb/n) 2 < ∼ H/r< ∼ nb/n, for binary orbital frequency nb and orbital frequency n at the disc outer edge. We present a framework for understanding the zones of instability based on the determination of branches of marginally unstable modes. In general, multiple growing eccentric KL modes can be present in a disc. Coplanar apsidal-nodal precession resonances delineate instability branches. We determine the range of tilt angles for unstable modes as a function of disc aspect ratio. Unlike the KL instability for free particles that involves a critical (minimum) tilt angle, disc instability is possible for any nonzero tilt angle depending on the disc aspect ratio.", "title": "" }, { "docid": "ae585aae554c5fbe4a18f7f2996b7e93", "text": "UNLABELLED\nCaloric restriction occurs when athletes attempt to reduce body fat or make weight. There is evidence that protein needs increase when athletes restrict calories or have low body fat.\n\n\nPURPOSE\nThe aims of this review were to evaluate the effects of dietary protein on body composition in energy-restricted resistance-trained athletes and to provide protein recommendations for these athletes.\n\n\nMETHODS\nDatabase searches were performed from earliest record to July 2013 using the terms protein, and intake, or diet, and weight, or train, or restrict, or energy, or strength, and athlete. Studies (N = 6) needed to use adult (≥ 18 yrs), energy-restricted, resistance-trained (> 6 months) humans of lower body fat (males ≤ 23% and females ≤ 35%) performing resistance training. Protein intake, fat free mass (FFM) and body fat had to be reported.\n\n\nRESULTS\nBody fat percentage decreased (0.5-6.6%) in all study groups (N = 13) and FFM decreased (0.3-2.7kg) in nine of 13. Six groups gained, did not lose, or lost nonsignificant amounts of FFM. Five out of these six groups were among the highest in body fat, lowest in caloric restriction, or underwent novel resistance training stimuli. 
However, the one group that was not high in body fat that underwent substantial caloric restriction, without novel training stimuli, consumed the highest protein intake out of all the groups in this review (2.5-2.6g/kg).\n\n\nCONCLUSIONS\nProtein needs for energy-restricted resistance-trained athletes are likely 2.3-3.1g/kg of FFM scaled upwards with severity of caloric restriction and leanness.", "title": "" }, { "docid": "f97ed9ef35355feffb1ebf4242d7f443", "text": "Moore’s law has allowed the microprocessor market to innovate at an astonishing rate. We believe microchip implants are the next frontier for the integrated circuit industry. Current health monitoring technologies are large, expensive, and consume significant power. By miniaturizing and reducing power, monitoring equipment can be implanted into the body and allow 24/7 health monitoring. We plan to implement a new transmitter topology, compressed sensing, which can be used for wireless communications with microchip implants. This paper focuses on the ADC used in the compressed sensing signal chain. Using the Cadence suite of tools and a 32/28nm process, we produced simulations of our compressed sensing Analog to Digital Converter to feed into a Digital Compression circuit. Our results indicate that a 12-bit, 20Ksample, 9.8nW Successive Approximation ADC is possible for diagnostic resolution (10 bits). By incorporating a hybrid-C2C DAC with differential floating voltage shields, it is possible to obtain 9.7 ENOB. Thus, we recommend this ADC for use in compressed sensing for biomedical purposes. Not only will it be useful in digital compressed sensing, but this can also be repurposed for use in analog compressed sensing.", "title": "" }, { "docid": "9a29bcb5ca21c33140a199763ab4bc5f", "text": "The Stadtpilot project aims at autonomous driving on Braunschweig's inner city ring road. For this purpose, an autonomous vehicle called “Leonie” has been developed. In October 2010, after two years of research, “Leonie's” abilities were presented in a public demonstration. This vehicle is one of the first worldwide to show the ability of driving autonomously in real urban traffic scenarios. This paper describes the legal issues and the homologation process for driving autonomously in public traffic in Braunschweig, Germany. It also dwells on the Safety Concept, the system architecture and current research activities.", "title": "" }, { "docid": "d60d64c0fe0c6f70ccb1b934915861c2", "text": "This paper presents a single-stage flyback power-factor-correction circuit with a variable boost inductance for high-brightness light-emitting-diode applications for the universal input voltage (90-270 Vrms). The proposed circuit overcomes the limitations of the conventional single-stage PFC flyback with a constant boost inductance, which cannot be designed to achieve a practical bulk-capacitor voltage level (i.e., less than 450 V) at high line while meeting the IEC 61000-3-2 Class C line current harmonic limits at low line. According to the proposed variable boost inductance method, the boost inductance is constant in the high-voltage range and it is reduced in the low-voltage range, resulting in discontinuous-conduction-mode operation and a low total harmonic distortion (THD) in both the high-voltage and low-voltage ranges. 
Measurements obtained on a 24-V/91-W experimental prototype are as follows: PF = 0.9873, THD = 12%, and efficiency = 88% at nominal low line (120 Vrms); and PF = 0.9474, THD = 10.39%, and efficiency = 91% at nominal high line (230 Vrms). The line current harmonics satisfy the IEC 61000-3-2 Class C limits with enough margin.", "title": "" }, { "docid": "fdbcf90ffeebf9aab41833df0fff23e6", "text": "(Under the direction of Anselmo Lastra) For image synthesis in computer graphics, two major approaches for representing a surface's appearance are texture mapping, which provides spatial detail, such as wallpaper, or wood grain; and the 4D bi-directional reflectance distribution function (BRDF) which provides angular detail, telling how light reflects off surfaces. I combine these two modes of variation to form the 6D spatial bi-directional reflectance distribution function (SBRDF). My compact SBRDF representation simply stores BRDF coefficients at each pixel of a map. I propose SBRDFs as a surface appearance representation for computer graphics and present a complete system for their use. I acquire SBRDFs of real surfaces using a device that simultaneously measures the BRDF of every point on a material. The system has the novel ability to measure anisotropy (direction of threads, scratches, or grain) uniquely at each surface point. I fit BRDF parameters using an efficient nonlinear optimization approach specific to BRDFs. SBRDFs can be rendered using graphics hardware. My approach yields significantly more detailed, general surface appearance than existing techniques for a competitive rendering cost. I also propose an SBRDF rendering method for global illumination using prefiltered environment maps. This improves on existing prefiltered environment map techniques by decoupling the BRDF from the environment maps, so a single set of maps may be used to illuminate the unique BRDFs at each surface point. I demonstrate my results using measured surfaces including gilded wallpaper, plant leaves, upholstery fabrics, wrinkled gift-wrapping paper and glossy book covers. iv To Tiffany, who has worked harder and sacrificed more for this than have I. ACKNOWLEDGMENTS I appreciate the time, guidance and example of Anselmo Lastra, my advisor. I'm grateful to Steve Molnar for being my mentor throughout graduate school. I'm grateful to the other members of my committee, Henry Fuchs, Gary Bishop, and Lars Nyland for helping and teaching me and creating an environment that allows research to be done successfully and pleasantly. I am grateful for the effort and collaboration of Ben Cloward, who masterfully modeled the Carolina Inn lobby, patiently worked with my software, and taught me much of how artists use computer graphics. I appreciate the collaboration of Wolfgang Heidrich, who worked hard on this project and helped me get up to speed on shading with graphics hardware. I'm thankful to Steve Westin, for patiently teaching me a great deal about surface appearance and light measurement. I'm grateful for …", "title": "" }, { "docid": "d5ddc141311afb6050a58be88303b577", "text": "Given the ability to directly manipulate image pixels in the digital input space, an adversary can easily generate imperceptible perturbations to fool a Deep Neural Network (DNN) image classifier, as demonstrated in prior work. In this work, we propose ShapeShifter, an attack that tackles the more challenging problem of crafting physical adversarial perturbations to fool image-based object detectors like Faster R-CNN. 
Attacking an object detector is more difficult than attacking an image classifier, as it needs to mislead the classification results in multiple bounding boxes with different scales. Extending the digital attack to the physical world adds another layer of difficulty, because it requires the perturbation to be robust enough to survive real-world distortions due to different viewing distances and angles, lighting conditions, and camera limitations. We show that the Expectation over Transformation technique, which was originally proposed to enhance the robustness of adversarial perturbations in image classification, can be successfully adapted to the object detection setting. ShapeShifter can generate adversarially perturbed stop signs that are consistently mis-detected by Faster RCNN as other objects, posing a potential threat to autonomous vehicles and other safety-critical computer vision systems.", "title": "" }, { "docid": "e632895c1ab1b994f64ef03260b91acb", "text": "The modified Brostrom procedure is commonly recommended for reconstruction of the anterior talofibular ligament (ATF) and calcaneofibular ligament (CF) with an advancement of the inferior retinaculum. However, some surgeons perform the modified Bostrom procedure with an semi-single ATF ligament reconstruction and advancement of the inferior retinaculum for simplicity. This study evaluated the initial stability of the modified Brostrom procedure and compared a two ligaments (ATF + CF) reconstruction group with a semi-single ligament (ATF) reconstruction group. Sixteen paired fresh frozen cadaveric ankle joints were used in this study. The ankle joint laxity was measured on the plane radiographs with 150 N anterior drawer force and 150 N varus stress force. The anterior displacement distances and varus tilt angles were measured before and after cutting the ATF and CF ligaments. A two ligaments (ATF + CF) reconstruction with an advancement of the inferior retinaculum was performed on eight left cadaveric ankles, and an semi-single ligament (ATF) reconstruction with an advancement of the inferior retinaculum was performed on eight right cadaveric ankles. The ankle instability was rechecked after surgery. The decreases in instability of the ankle after surgery were measured and the difference in the decrease was compared using a Mann–Whitney U test. The mean decreases in anterior displacement were 3.4 and 4.0 mm in the two ligaments reconstruction and semi-single ligament reconstruction groups, respectively. There was no significant difference between the two groups (P = 0.489). The mean decreases in the varus tilt angle in the two ligaments reconstruction and semi-single ligament reconstruction groups were 12.6° and 12.2°, respectively. There was no significant difference between the two groups (P = 0.399). In this cadaveric study, a substantial level of initial stability can be obtained using an anatomical reconstruction of the anterior talofibular ligament only and reinforcement with the inferior retinaculum. 
The modified Brostrom procedure with a semi-single ligament (Anterior talofibular ligament) reconstruction with an advancement of the inferior retinaculum can provide as much initial stability as the two ligaments (Anterior talofibular ligament and calcaneofibular ligament) reconstruction procedure.", "title": "" }, { "docid": "eaf08b7ea5592617fe88bc713c3e874b", "text": "In this paper we propose, implement and evaluate OpenSample: a low-latency, sampling-based network measurement platform targeted at building faster control loops for software-defined networks. OpenSample leverages sFlow packet sampling to provide near-real-time measurements of both network load and individual flows. While OpenSample is useful in any context, it is particularly useful in an SDN environment where a network controller can quickly take action based on the data it provides. Using sampling for network monitoring allows OpenSample to have a 100 millisecond control loop rather than the 1-5 second control loop of prior polling-based approaches. We implement OpenSample in the Floodlight Open Flow controller and evaluate it both in simulation and on a test bed comprised of commodity switches. When used to inform traffic engineering, OpenSample provides up to a 150% throughput improvement over both static equal-cost multi-path routing and a polling-based solution with a one second control loop.", "title": "" }, { "docid": "e3546095a5d0bb39755355c7a3acc875", "text": "We propose to achieve explainable neural machine translation (NMT) by changing the output representation to explain itself. We present a novel approach to NMT which generates the target sentence by monotonically walking through the source sentence. Word reordering is modeled by operations which allow setting markers in the target sentence and move a target-side write head between those markers. In contrast to many modern neural models, our system emits explicit word alignment information which is often crucial to practical machine translation as it improves explainability. Our technique can outperform a plain text system in terms of BLEU score under the recent Transformer architecture on JapaneseEnglish and Portuguese-English, and is within 0.5 BLEU difference on Spanish-English.", "title": "" }, { "docid": "818db2be19d63a64856909dee5d76081", "text": "Generating captions for images is a task that has recently received considerable attention. In this work we focus on caption generation for abstract scenes, or object layouts where the only information provided is a set of objects and their locations. We propose OBJ2TEXT, a sequence-tosequence model that encodes a set of objects and their locations as an input sequence using an LSTM network, and decodes this representation using an LSTM language model. We show that our model, despite encoding object layouts as a sequence, can represent spatial relationships between objects, and generate descriptions that are globally coherent and semantically relevant. We test our approach in a task of object-layout captioning by using only object annotations as inputs. We additionally show that our model, combined with a state-of-the-art object detector, improves an image captioning model from 0.863 to 0.950 (CIDEr score) in the test benchmark of the standard MS-COCO Captioning task.", "title": "" } ]
scidocsrr
15d945238eaeba580d8063e3075ce2d4
A Cognitive Model for the Representation and Acquisition of Verb Selectional Preferences
[ { "docid": "66451aa5a41ec7f9246d749c0983fa60", "text": "A new method for automatically acquiring case frame patterns from large corpora is proposed. In particular, the problem of generalizing values of a case frame slot for a verb is viewed as that of estimating a conditional probability distribution over a partition of words, and a new generalization method based on the Minimum Description Length (MDL) principle is proposed. In order to assist with efficiency, the proposed method makes use of an existing thesaurus and restricts its attention to those partitions that are present as \"cuts\" in the thesaurus tree, thus reducing the generalization problem to that of estimating a \"tree cut model\" of the thesaurus tree. An efficient algorithm is given, which provably obtains the optimal tree cut model for the given frequency data of a case slot, in the sense of MDL. Case frame patterns obtained by the method were used to resolve PP-attachment ambiguity. Experimental results indicate that the proposed method improves upon or is at least comparable with existing methods.", "title": "" } ]
[ { "docid": "66cd5501be682957a2ee10ce91136c01", "text": "The use of inaccurate or outdated database statistics by the query optimizer in a relational DBMS often results in a poor choice of query execution plans and hence unacceptably long query processing times. Configuration and maintenance of these statistics has traditionally been a time-consuming manual operation, requiring that the database administrator (DBA) continually monitor query performance and data changes in order to determine when to refresh the statistics values and when and how to adjust the set of statistics that the DBMS maintains. In this paper we describe the new Automated Statistics Collection (ASC) component of IBM® DB2® Universal DatabaseTM (DB2 UDB). This autonomic technology frees the DBA from the tedious task of manually supervising the collection and maintenance of database statistics. ASC monitors both the update-delete-insert (UDI) activities on the data as well as query feedback (QF), i.e., the results of the queries that are executed on the data. ASC uses these two sources of information to automatically decide which statistics to collect and when to collect them. This combination of UDI-driven and QF-driven autonomic processes ensures that the system can handle unforeseen queries while also ensuring good performance for frequent and important queries. We present the basic concepts, architecture, and key implementation details of ASC in DB2 UDB, and present a case study showing how the use of ASC can speed up a query workload by orders of magnitude without requiring any DBA intervention.", "title": "" }, { "docid": "eb30c6946e802086ac6de5848897a648", "text": "To determine how age of acquisition influences perception of second-language speech, the Speech Perception in Noise (SPIN) test was administered to native Mexican-Spanish-speaking listeners who learned fluent English before age 6 (early bilinguals) or after age 14 (late bilinguals) and monolingual American-English speakers (monolinguals). Results show that the levels of noise at which the speech was intelligible were significantly higher and the benefit from context was significantly greater for monolinguals and early bilinguals than for late bilinguals. These findings indicate that learning a second language at an early age is important for the acquisition of efficient high-level processing of it, at least in the presence of noise.", "title": "" }, { "docid": "3bd94d483a4d3934982d60284a90f4c5", "text": "Internet addiction is an increasing concern among young adults. Self-presentational theory posits that the Internet offers a context in which individuals are able to control their image. Little is known about body image and eating concerns among pathological Internet users. The aim of this study was to explore the association between Internet addiction symptoms, body image esteem, body image avoidance, and disordered eating. A sample of 392 French young adults (68 percent women) completed an online questionnaire assessing time spent online, Internet addiction symptoms, disordered eating, and body image avoidance. Fourteen men (11 percent) and 26 women (9.7 percent) reported Internet addiction. Body image avoidance was associated with Internet addiction symptoms among both genders. Controlling for body-mass index, Internet addiction symptoms, and body image avoidance were both significant predictors of disordered eating among women. 
These findings support the self-presentational theory of Internet addiction and suggest that body image avoidance is an important factor.", "title": "" }, { "docid": "f4db297c70b1aba64ce3ed17b0837859", "text": "Despite the success of the automatic speech recognition framework in its own application field, its adaptation to the problem of acoustic event detection has resulted in limited success. In this paper, instead of treating the problem similar to the segmentation and classification tasks in speech recognition, we pose it as a regression task and propose an approach based on random forest regression. Furthermore, event localization in time can be efficiently handled as a joint problem. We first decompose the training audio signals into multiple interleaved superframes which are annotated with the corresponding event class labels and their displacements to the temporal onsets and offsets of the events. For a specific event category, a random-forest regression model is learned using the displacement information. Given an unseen superframe, the learned regressor will output the continuous estimates of the onset and offset locations of the events. To deal with multiple event categories, prior to the category-specific regression phase, a superframe-wise recognition phase is performed to reject the background superframes and to classify the event superframes into different event categories. While jointly posing event detection and localization as a regression problem is novel, the superior performance on two databases ITC-Irst and UPC-TALP demonstrates the efficiency and potential of the proposed approach.", "title": "" }, { "docid": "47785d2cbbc5456c0a2c32c329498425", "text": "Are there important cyclical fluctuations in bond market premiums and, if so, with what macroeconomic aggregates do these premiums vary? We use the methodology of dynamic factor analysis for large datasets to investigate possible empirical linkages between forecastable variation in excess bond returns and macroeconomic fundamentals. We find that “real” and “inflation” factors have important forecasting power for future excess returns on U.S. government bonds, above and beyond the predictive power contained in forward rates and yield spreads. This behavior is ruled out by commonly employed affine term structure models where the forecastability of bond returns and bond yields is completely summarized by the cross-section of yields or forward rates. An important implication of these findings is that the cyclical behavior of estimated risk premia in both returns and long-term yields depends importantly on whether the information in macroeconomic factors is included in forecasts of excess bond returns. Without the macro factors, risk premia appear virtually acyclical, whereas with the estimated factors risk premia have a marked countercyclical component, consistent with theories that imply investors must be compensated for risks associated with macroeconomic activity. ( JEL E0, E4, G10, G12)", "title": "" }, { "docid": "8dbb1906440f8a2a2a0ddf51527bb891", "text": "Recent studies have shown that people prefer to age in their familiar environments, thus guiding designers to provide a safe and functionally appropriate environment for ageing people, regardless of their physical conditions or limitations. Therefore, a participatory design model is proposed where human beings can improve their quality of life by promoting independence, as well as safety, useability and attractiveness of the residence. 
Brainstorming, scenario building, unstructured interviews, sketching and videotaping are used as techniques in the participatory design sessions. Quality deployment matrices are employed to find the relationships between the elderly user's requirements and design specifications. A case study was devised to apply and test the conceptual model phase of the proposed model.", "title": "" }, { "docid": "caf866341ad9f74b1ac1dc8572f6e95c", "text": "One important but often overlooked aspect of human contexts of ubiquitous computing environment is human’s emotional status. And, there are no realistic and robust humancentric contents services so far, because there are few considers about combining context awareness computing with wearable computing for improving suitability of contents to each user’s needs. In this paper, we discuss combining context awareness computing with wearable computing to develop more effective personalized services. And we propose new algorithms to develop efficiently personalized emotion based content service system.", "title": "" }, { "docid": "078843d5dbede66b6148c4d0d269b176", "text": "A randomized control trial was performed to evaluate the effectiveness and safety of absorbable polymeric clips for appendicular stump closure in laparoscopic appendectomy (LA). Patients were randomly enrolled into an experimental group (ligation of the appendicular base with Lapro-Clips, L-C group) or control group (ligation of the appendicular base with Hem-o-lok Clips, H-C group). We identified 1,100 patients who underwent LA between April 1, 2012 and February 3, 2015. Overall, 99 patients (9.0%, 99/1,100) developed a complication following LA (47 [8.5%] in the L-C group and 52 [9.5%] in the H-C group (P = 0.598). No statistically significant differences were observed in intra-abdominal abscesses, stump leakage, superficial wound infections, post-operative abdominal pain, overall adverse events, or the duration of the operations and hospital stays between the groups (all p > 0.05). Adverse risk factors associated with the use of absorbable clips in LA included body mass index ≥ 27.5 kg/m2, diabetes, American Society of Anesthesiologists degree ≥ III, gangrenous appendicitis, severe inflammation of the appendix base, appendix perforation, and the absence of peritoneal drainage. The results indicate that the Lapro-Clip is a safe and effective device for closing the appendicular stump in LA in select patients with appendicitis.", "title": "" }, { "docid": "86d1b98d64037a2ce992cdbfa4b908b4", "text": "This letter studies the transmission characteristics of coplanar waveguides (CPWs) loaded with single-layer S-shaped split-ring resonators (S-SRRs) for the first time. Two structures are analyzed: 1) a CPW simply loaded with an S-SRR, and 2) a CPW loaded with an S-SRR and a series gap. The former exhibits a stopband functionality related to the resonance of the S-SRR excited by the contra-directional magnetic fluxes through the two connected resonator loops; the latter is useful for the implementation of compact bandpass filters. In both cases, a lumped-element equivalent circuit model is proposed with an unequivocal physical interpretation of the circuit elements. These circuits are then validated by comparing the circuit response with extracted parameters to full-wave electromagnetic simulations. The last part of the letter illustrates application of the S-SRR/gap-loaded CPW unit cell to the design of a bandpass filter. 
The resulting filter is very compact and exhibits competitive performance.", "title": "" }, { "docid": "c8e83a1eb803d9e091c2cb3418577aa7", "text": "We review the literature on pathological narcissism and narcissistic personality disorder (NPD) and describe a significant criterion problem related to four inconsistencies in phenotypic descriptions and taxonomic models across clinical theory, research, and practice; psychiatric diagnosis; and social/personality psychology. This impedes scientific synthesis, weakens narcissism's nomological net, and contributes to a discrepancy between low prevalence rates of NPD and higher rates of practitioner-diagnosed pathological narcissism, along with an enormous clinical literature on narcissistic disturbances. Criterion issues must be resolved, including clarification of the nature of normal and pathological narcissism, incorporation of the two broad phenotypic themes of narcissistic grandiosity and narcissistic vulnerability into revised diagnostic criteria and assessment instruments, elimination of references to overt and covert narcissism that reify these modes of expression as distinct narcissistic types, and determination of the appropriate structure for pathological narcissism. Implications for the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders and the science of personality disorders are presented.", "title": "" }, { "docid": "ab4abd9033f87e08656f4363499bc09c", "text": "It is well known that, for most datasets, the use of large-size minibatches for Stochastic Gradient Descent (SGD) typically leads to slow convergence and poor generalization. On the other hand, large minibatches are of great practical interest as they allow for a better exploitation of modern GPUs. Previous literature on the subject concentrated on how to adjust the main SGD parameters (in particular, the learning rate) when using large minibatches. In this work we introduce an additional feature, that we call minibatch persistency, that consists in reusing the same minibatch for K consecutive SGD iterations. The computational conjecture here is that a large minibatch contains a significant sample of the training set, so one can afford to slightly overfitting it without worsening generalization too much. The approach is intended to speedup SGD convergence, and also has the advantage of reducing the overhead related to data loading on the internal GPU memory. We present computational results on CIFAR-10 with an AlexNet architecture, showing that even small persistency values (K = 2 or 5) already lead to a significantly faster convergence and to a comparable (or even better) generalization than the standard “disposable minibatch” approach (K = 1), in particular when large minibatches are used. The lesson learned is that minibatch persistency can be a simple yet effective way to deal with large minibatches.", "title": "" }, { "docid": "b80df19e67d2bbaabf4da18d7b5af4e2", "text": "This paper presents a data-driven approach for automatically generating cartoon faces in different styles from a given portrait image. Our stylization pipeline consists of two steps: an offline analysis step to learn about how to select and compose facial components from the databases; a runtime synthesis step to generate the cartoon face by assembling parts from a database of stylized facial components. 
We propose an optimization framework that, for a given artistic style, simultaneously considers the desired image-cartoon relationships of the facial components and a proper adjustment of the image composition. We measure the similarity between facial components of the input image and our cartoon database via image feature matching, and introduce a probabilistic framework for modeling the relationships between cartoon facial components. We incorporate prior knowledge about image-cartoon relationships and the optimal composition of facial components extracted from a set of cartoon faces to maintain a natural, consistent, and attractive look of the results. We demonstrate generality and robustness of our approach by applying it to a variety of portrait images and compare our output with stylized results created by artists via a comprehensive user study.", "title": "" }, { "docid": "da979af4e34855b9be1ce906acdd16e9", "text": "Learning analytics is the analysis of electronic learning data which allows teachers, course designers and administrators of virtual learning environments to search for unobserved patterns and underlying information in learning processes. The main aim of learning analytics is to improve learning outcomes and the overall learning process in electronic learning virtual classrooms and computer-supported education. The most basic unit of learning data in virtual learning environments for learning analytics is the interaction, but there is no consensus yet on which interactions are relevant for effective learning. Drawing upon extant literature, this research defines three system-independent classifications of interactions and evaluates the relation of their components with academic performance across two different learning modalities: virtual learning environment (VLE) supported face-to-face (F2F) and online learning. In order to do so, we performed an empirical study with data from six online and two VLE-supported F2F courses. Data extraction and analysis required the development of an ad hoc tool based on the proposed interaction classification. The main finding from this research is that, for each classification, there is a relation between some type of interactions and academic performance in online courses, whereas this relation is non-significant in the case of VLE-supported F2F courses. Implications for theory and practice are discussed.", "title": "" }, { "docid": "564ec6a2d5748afc83592ac0371a3ead", "text": "Fine-grained vehicle classification is a challenging task due to the subtle differences between vehicle classes. Several successful approaches to fine-grained image classification rely on part-based models, where the image is classified according to discriminative object parts. Such approaches require however that parts in the training images be manually annotated, a labor-intensive process. We propose a convolutional architecture realizing a transform network capable of discovering the most discriminative parts of a vehicle at multiple scales. We experimentally show that our architecture outperforms a baseline reference if trained on class labels only, and performs closely to a reference based on a part-model if trained on loose vehicle localization bounding boxes.", "title": "" }, { "docid": "714843ca4a3c99bfc95e89e4ff82aeb1", "text": "The development of new technologies for mapping structural and functional brain connectivity has led to the creation of comprehensive network maps of neuronal circuits and systems. 
The architecture of these brain networks can be examined and analyzed with a large variety of graph theory tools. Methods for detecting modules, or network communities, are of particular interest because they uncover major building blocks or subnetworks that are particularly densely connected, often corresponding to specialized functional components. A large number of methods for community detection have become available and are now widely applied in network neuroscience. This article first surveys a number of these methods, with an emphasis on their advantages and shortcomings; then it summarizes major findings on the existence of modules in both structural and functional brain networks and briefly considers their potential functional roles in brain evolution, wiring minimization, and the emergence of functional specialization and complex dynamics.", "title": "" }, { "docid": "90897878038ac7cd3a51fdfa3397ce9f", "text": "A fundamental operation in many vision tasks, including motion understanding, stereopsis, visual odometry, or invariant recognition, is establishing correspondences between images or between images and data from other modalities. We present an analysis of the role that multiplicative interactions play in learning such correspondences, and we show how learning and inferring relationships between images can be viewed as detecting rotations in the eigenspaces shared among a set of orthogonal matrices. We review a variety of recent multiplicative sparse coding methods in light of this observation. We also review how the squaring operation performed by energy models and by models of complex cells can be thought of as a way to implement multiplicative interactions. This suggests that the main utility of including complex cells in computational models of vision may be that they can encode relations not invariances.", "title": "" }, { "docid": "34d8b9fa5159e161ee0050900be4fa62", "text": "Singular value decomposition (SVD), together with the expectation-maximization (EM) procedure, can be used to find a low-dimension model that maximizes the log-likelihood of observed ratings in recommendation systems. However, the computational cost of this approach is a major concern, since each iteration of the EM algorithm requires a new SVD computation. We present a novel algorithm that incorporates SVD approximation into the EM procedure to reduce the overall computational cost while maintaining accurate predictions. Furthermore, we propose a new framework for collaborating filtering in distributed recommendation systems that allows users to maintain their own rating profiles for privacy. A server periodically collects aggregate information from those users that are online to provide predictions for all users. Both theoretical analysis and experimental results show that this framework is effective and achieves almost the same prediction performance as that of centralized systems.", "title": "" }, { "docid": "d161ab557edb4268a0ebc606bb9dbcb6", "text": "Recommender systems play an important role in reducing the negative impact of information overload on those websites where users have the possibility of voting for their preferences on Ítems. The most normal technique for dealing with the recommendation mechanism is to use collaborative filtering, in which it is essential to discover the most similar users to whom you desire to make recommendations. 
The hypothesis of this paper is that the results obtained by applying traditional similarities measures can be improved by taking contextual information, drawn from the entire body of users, and using it to calculate the singularity which exists, for each item, in the votes cast by each pair of users that you wish to compare. As such, the greater the measure of singularity result between the votes cast by two given users, the greater the impact this will have on the similarity. The results, tested on the Movielens, Netflix and FilmAffinity databases, corroborate the excellent behaviour of the singularity measure proposed.", "title": "" }, { "docid": "83060ef5605b19c14d8b0f41cbd61de5", "text": "We are at the beginning of the multicore era. Computers will have increasingly many cores (processors), but there is still no good programming framework for these architectures, and thus no simple and unified way for machine learning to take advantage of the potential speed up. In this paper, we develop a broadly applicable parallel programming method, one that is easily applied to many different learning algorithms. Our work is in distinct contrast to the tradition in machine learning of designing (often ingenious) ways to speed up a single algorithm at a time. Specifically, we show that algorithms that fit the Statistical Query model [15] can be written in a certain \"summation form,\" which allows them to be easily parallelized on multicore computers. We adapt Google’s map-reduce [7] paradigm to demonstrate this parallel speed up technique on a variety of learning algorithms including locally weighted linear regression (LWLR), k-means, logistic regression (LR), naive Bayes (NB), SVM, ICA, PCA, gaussian discriminant analysis (GDA), EM, and backpropagation (NN). Our experimental results show basically linear speedup with an increasing number of processors.", "title": "" } ]
scidocsrr
155777b9568aa560cf4167a14c89cb13
Probabilistic Relations between Words: Evidence from Reduction in Lexical Production
[ { "docid": "187595fb12a5ca3bd665ffbbc9f47465", "text": "In order to acquire a lexicon, young children must segment speech into words, even though most words are unfamiliar to them. This is a non-trivial task because speech lacks any acoustic analog of the blank spaces between printed words. Two sources of information that might be useful for this task are distributional regularity and phonotactic constraints. Informally, distributional regularity refers to the intuition that sound sequences that occur frequently and in a variety of contexts are better candidates for the lexicon than those that occur rarely or in few contexts. We express that intuition formally by a class of functions called DR functions. We then put forth three hypotheses: First, that children segment using DR functions. Second, that they exploit phonotactic constraints on the possible pronunciations of words in their language. Specifically, they exploit both the requirement that every word must have a vowel and the constraints that languages impose on word-initial and word-final consonant clusters. Third, that children learn which word-boundary clusters are permitted in their language by assuming that all permissible word-boundary clusters will eventually occur at utterance boundaries. Using computational simulation, we investigate the effectiveness of these strategies for segmenting broad phonetic transcripts of child-directed English. The results show that DR functions and phonotactic constraints can be used to significantly improve segmentation. Further, the contributions of DR functions and phonotactic constraints are largely independent, so using both yields better segmentation than using either one alone. Finally, learning the permissible word-boundary clusters from utterance boundaries does not degrade segmentation performance.", "title": "" } ]
[ { "docid": "ca7269b97464c9b78aa0cb6727926e28", "text": "This paper argues that there has not been enough discussion in the field of applications of Gaussian Process for the fast moving consumer goods industry. Yet, this technique can be important as it e.g., can provide automatic feature relevance determination and the posterior mean can unlock insights on the data. Significant challenges are the large size and high dimensionality of commercial data at a point of sale. The study reviews approaches in the Gaussian Processes modeling for large data sets, evaluates their performance on commercial sales and shows value of this type of models as a decision-making tool for management.", "title": "" }, { "docid": "def621d47a8ead24754b1eebe590314a", "text": "Existing social-aware routing protocols for packet switched networks make use of the information about the social structure of the network deduced by state information of nodes (e.g., history of past encounters) to optimize routing. Although these approaches are shown to have superior performance to social-oblivious, stateless routing protocols (BinarySW, Epidemic), the improvement comes at the cost of considerable storage overhead required on the nodes. In this paper we present SANE, the first routing mechanism that combines the advantages of both social-aware and stateless approaches. SANE is based on the observation - that we validate on a real-world trace - that individuals with similar interests tend to meet more often. In SANE, individuals (network members) are characterized by their interest profile, a compact representation of their interests. By implementing a simple routing rule based on interest profile similarity, SANE is free of network state information, thus overcoming the storage capacity problem with existing social-aware approaches. Through thorough experiments, we show the superiority of SANE over existing approaches, both stateful, social-aware and stateless, social-oblivious. We discuss the statelessness of our approach in the supplementary file, which can be found on the Computer Society Digital Library at http://doi.ieeecomputersociety.org/10.1109/TPDS.2014.2307857, of this manuscript. Our interest-based approach easily enables innovative networking services, such as interest-casting. An interest-casting protocol is also introduced in this paper, and evaluated through experiments based on both real-world and synthetic mobility traces.", "title": "" }, { "docid": "ebaedd43e151f13d1d4d779284af389d", "text": "This paper presents the state of art techniques in recommender systems (RS). The various techniques are diagrammatically illustrated which on one hand helps a naïve researcher in this field to accommodate the on-going researches and establish a strong base, on the other hand it focuses on different categories of the recommender systems with deep technical discussions. The review studies on RS are highlighted which helps in understanding the previous review works and their directions. 8 different main categories of recommender techniques and 19 sub categories have been identified and stated. Further, soft computing approach for recommendation is emphasized which have not been well studied earlier. The major problems of the existing area is reviewed and presented from different perspectives. 
However, solutions to these issues are rarely discussed in the previous works, in this study future direction for possible solutions are also addressed.", "title": "" }, { "docid": "43ee3d818b528081aadf6abdc23650fa", "text": "Cloud computing has become an increasingly important research topic given the strong evolution and migration of many network services to such computational environment. The problem that arises is related with efficiency management and utilization of the large amounts of computing resources. This paper begins with a brief retrospect of traditional scheduling, followed by a detailed review of metaheuristic algorithms for solving the scheduling problems by placing them in a unified framework. Armed with these two technologies, this paper surveys the most recent literature about metaheuristic scheduling solutions for cloud. In addition to applications using metaheuristics, some important issues and open questions are presented for the reference of future researches on scheduling for cloud.", "title": "" }, { "docid": "8b1b0ee79538a1f445636b0798a0c7ca", "text": "Much of the current activity in the area of intelligent vehicle-highway systems (IVHS) focuses on one simple objective: to collect more data. Clearly, improvements in sensor technology and communication systems will allow transportation agencies to more closely monitor the condition of the surface transportation system. However, monitoring alone cannot improve the safety or efficiency of the system. It is imperative that surveillance data be used to manage the system in a proactive rather than a reactive manner. 'Proactive traffic management will require the ability to predict traffic conditions. Previous predictive modeling approaches can be grouped into three categories: (a) historical, data-based algorithms; (b) time-series models; and (c) simulations. A relatively new mathematical model, the neural network, offers an attractive alternative because neural networks can model undefined, complex nonlinear surfaces. In a comparison of a backpropagation neural network model with the more traditional approaches of an historical, data-based algorithm and a time-series model, the backpropagation model· was clearly superior, although all three models did an adequate job of predicting future traffic volumes. The backpropagation model was more responsive to dynamic conditions than the historical, data-based algorithm, and it did not experience the lag and overprediction characteristics of the time-series model. Given these advantages and the backpropagation model's ability to run in a parallel computing environment, it appears that such neural network prediction models hold considerable potential for use in real-time IVHS applications.", "title": "" }, { "docid": "4688caf6a80463579f293b2b762da5b5", "text": "To accelerate the implementation of network functions/middle boxes and reduce the deployment cost, recently, the concept of network function virtualization (NFV) has emerged and become a topic of much interest attracting the attention of researchers from both industry and academia. Unlike the traditional implementation of network functions, a software-oriented approach for virtual network functions (VNFs) creates more flexible and dynamic network services to meet a more diversified demand. 
Software-oriented network functions bring along a series of research challenges, such as VNF management and orchestration, service chaining, VNF scheduling for low latency and efficient virtual network resource allocation with NFV infrastructure, among others. In this paper, we study the VNF scheduling problem and the corresponding resource optimization solutions. Here, the VNF scheduling problem is defined as a series of scheduling decisions for network services on network functions and activating the various VNFs to process the arriving traffic. We consider VNF transmission and processing delays and formulate the joint problem of VNF scheduling and traffic steering as a mixed integer linear program. Our objective is to minimize the makespan/latency of the overall VNFs' schedule. Reducing the scheduling latency enables cloud operators to service (and admit) more customers, and cater to services with stringent delay requirements, thereby increasing operators' revenues. Owing to the complexity of the problem, we develop a genetic algorithm-based method for solving the problem efficiently. Finally, the effectiveness of our heuristic algorithm is verified through numerical evaluation. We show that dynamically adjusting the bandwidths on virtual links connecting virtual machines, hosting the network functions, reduces the schedule makespan by 15%-20% in the simulated scenarios.", "title": "" }, { "docid": "6bc2837d4d1da3344f901a6d7d8502b5", "text": "Many researchers and professionals have reported nonsubstance addiction to online entertainments in adolescents. However, very few scales have been designed to assess problem Internet use in this population, in spite of their high exposure and obvious vulnerability. The aim of this study was to review the currently available scales for assessing problematic Internet use and to validate a new scale of this kind for use, specifically in this age group, the Problematic Internet Entertainment Use Scale for Adolescents. The research was carried out in Spain in a gender-balanced sample of 1131 high school students aged between 12 and 18 years. Psychometric analyses showed the scale to be unidimensional, with excellent internal consistency (Cronbach's alpha of 0.92), good construct validity, and positive associations with alternative measures of maladaptive Internet use. This self-administered scale can rapidly measure the presence of symptoms of behavioral addiction to online videogames and social networking sites, as well as their degree of severity. The results estimate the prevalence of this problematic behavior in Spanish adolescents to be around 5 percent.", "title": "" }, { "docid": "95afd1d83b5641a7dff782588348d2ec", "text": "Intensive repetitive therapy improves function and quality of life for stroke patients. Intense therapies to overcome upper extremity impairment are beneficial, however, they are expensive because, in part, they rely on individualized interaction between the patient and rehabilitation specialist. The development of a pneumatic muscle driven hand therapy device, the Mentor/spl trade/, reinforces the need for volitional activation of joint movement while concurrently offering knowledge of results about range of motion, muscle activity or resistance to movement. 
The device is well tolerated and has received favorable comments from stroke survivors, their caregivers, and therapists.", "title": "" }, { "docid": "f767e0a9711522b06b8d023453f42f3a", "text": "A novel low-cost method for generating circular polarization in a dielectric resonator antenna is proposed. The antenna comprises four rectangular dielectric layers, each one being rotated by an angle of 30 ° relative to its adjacent layers. Utilizing such an approach has provided a circular polarization over a bandwidth of 6% from 9.55 to 10.15 GHz. This has been achieved in conjunction with a 21% impedance-matching bandwidth over the same frequency range. Also, the radiation efficiency of the proposed circularly polarized dielectric resonator antenna is 93% in this frequency band of operation", "title": "" }, { "docid": "54a06cb39007b18833f191aeb7c600d7", "text": "Mobile ad-hoc networks (MANETs) and wireless sensor networks (WSNs) have gained remarkable appreciation and technological development over the last few years. Despite ease of deployment, tremendous applications and significant advantages, security has always been a challenging issue due to the nature of environments in which nodes operate. Nodes’ physical capture, malicious or selfish behavior cannot be detected by traditional security schemes. Trust and reputation based approaches have gained global recognition in providing additional means of security for decision making in sensor and ad-hoc networks. This paper provides an extensive literature review of trust and reputation based models both in sensor and ad-hoc networks. Based on the mechanism of trust establishment, we categorize the state-of-the-art into two groups namely node-centric trust models and system-centric trust models. Based on trust evidence, initialization, computation, propagation and weight assignments, we evaluate the efficacy of the existing schemes. Finally, we conclude our discussion with identification of some unresolved issues in pursuit of trust and reputation management.", "title": "" }, { "docid": "81919bc432dd70ed3e48a0122d91b9e4", "text": "Artemisinin resistance in Plasmodium falciparum has emerged as a major threat for malaria control and elimination worldwide. Mutations in the Kelch propeller domain of PfK13 are the only known molecular markers for artemisinin resistance in this parasite. Over 100 non-synonymous mutations have been identified in PfK13 from various malaria endemic regions. This study aimed to investigate the genetic diversity of PvK12, the Plasmodium vivax ortholog of PfK13, in parasite populations from Southeast Asia, where artemisinin resistance in P. falciparum has emerged. The PvK12 sequences in 120 P. vivax isolates collected from Thailand (22), Myanmar (32) and China (66) between 2004 and 2008 were obtained and 353 PvK12 sequences from worldwide populations were retrieved for further analysis. These PvK12 sequences revealed a very low level of genetic diversity (π = 0.00003) with only three single nucleotide polymorphisms (SNPs). Of these three SNPs, only G581R is nonsynonymous. The synonymous mutation S88S is present in 3% (1/32) of the Myanmar samples, while G704G and G581R are present in 1.5% (1/66) and 3% (2/66) of the samples from China, respectively. None of the mutations observed in the P. vivax samples were associated with artemisinin resistance in P. falciparum. Furthermore, analysis of 473 PvK12 sequences from twelve worldwide P. 
vivax populations confirmed the very limited polymorphism in this gene and detected only five distinct haplotypes. The PvK12 sequences from global P. vivax populations displayed very limited genetic diversity indicating low levels of baseline polymorphisms of PvK12 in these areas.", "title": "" }, { "docid": "8582c4a040e4dec8fd141b00eaa45898", "text": "Emerging airborne networks require domainspecific routing protocols to cope with the challenges faced by the highly-dynamic aeronautical environment. We present an ns-3 based performance comparison of the AeroRP protocol with conventional MANET routing protocols. To simulate a highly-dynamic airborne network, accurate mobility models are needed for the physical movement of nodes. The fundamental problem with many synthetic mobility models is their random, memoryless behavior. Airborne ad hoc networks require a flexible memory-based 3-dimensional mobility model. Therefore, we have implemented a 3-dimensional Gauss-Markov mobility model in ns-3 that appears to be more realistic than memoryless models such as random waypoint and random walk. Using this model, we are able to simulate the airborne networking environment with greater realism than was previously possible and show that AeroRP has several advantages over other MANET routing protocols.", "title": "" }, { "docid": "cc58f5adcf4cb0aa1feac0ef96c452b5", "text": "Machine-learning algorithms have shown outstanding image recognition/classification performance for computer vision applications. However, the compute and energy requirement for implementing such classifier models for large-scale problems is quite high. In this paper, we propose feature driven selective classification (FALCON) inspired by the biological visual attention mechanism in the brain to optimize the energy-efficiency of machine-learning classifiers. We use the consensus in the characteristic features (color/texture) across images in a dataset to decompose the original classification problem and construct a tree of classifiers (nodes) with a generic-to-specific transition in the classification hierarchy. The initial nodes of the tree separate the instances based on feature information and selectively enable the latter nodes to perform object specific classification. The proposed methodology allows selective activation of only those branches and nodes of the classification tree that are relevant to the input while keeping the remaining nodes idle. Additionally, we propose a programmable and scalable neuromorphic engine (NeuE) that utilizes arrays of specialized neural computational elements to execute the FALCON-based classifier models for diverse datasets. The structure of FALCON facilitates the reuse of nodes while scaling up from small classification problems to larger ones thus allowing us to construct classifier implementations that are significantly more efficient. We evaluate our approach for a 12-object classification task on the Caltech101 dataset and ten-object task on CIFAR-10 dataset by constructing FALCON models on the NeuE platform in 45-nm technology. Our results demonstrate up to $3.66\\boldsymbol \\times $ improvement in energy-efficiency for no loss in output quality, and even higher improvements of up to $5.91\\boldsymbol \\times $ with 3.9% accuracy loss compared to an optimized baseline network. 
In addition, FALCON shows an improvement in training time of up to $1.96\\boldsymbol \\times $ as compared to the traditional classification approach.", "title": "" }, { "docid": "78f2e1fc79a9c774e92452631d6bce7a", "text": "Adders are basic integral part of arithmetic circuits. The adders have been realized with two styles: fixed stage size and variable stage size. In this paper, fixed stage and variable stage carry skip adder configurations have been analyzed and then a new 16-bit high speed variable stage carry skip adder is proposed by modifying the existing structure. The proposed adder has seven stages where first and last stage are of 1 bit each, it keeps increasing steadily till the middle stage which is the bulkiest and hence is the nucleus stage. The delay and power consumption in the proposed adder is reduced by 61.75% and 8% respectively. The proposed adder is implemented and simulated using 90 nm CMOS technology in Cadence Virtuoso. It is pertinent to mention that the delay improvement in the proposed adder has been achieved without increase in any power consumption and circuit complexity. The adder proposed in this work is suitable for high speed and low power VLSI based arithmetic circuits.", "title": "" }, { "docid": "b8fa0ff5dc0b700c1f7dd334639572ec", "text": "This paper discusses about an ongoing project that serves the needs of people with physical disabilities at home. It uses the Bluetooth technology to establish communication between user's Smartphone and controller board. The prototype support manual controlling and microcontroller controlling to lock and unlock home door. By connecting the circuit with a relay board and connection to the Arduino controller board it can be controlled by a Bluetooth available to provide remote access from tablet or smartphone. This paper addresses the development and the functionality of the Android-based application (Android app) to assist disabled people gain control of their living area.", "title": "" }, { "docid": "5bece01bed7c5a9a2433d95379882a37", "text": "n The polarization of electromagnetic signals is an important feature in the design of modern radar and telecommunications. Standard electromagnetic theory readily shows that a linearly polarized plane wave propagating in free space consists of two equal but counter-rotating components of circular polarization. In magnetized media, these circular modes can be arranged to produce the nonreciprocal propagation effects that are the basic properties of isolator and circulator devices. Independent phase control of right-hand (+) and left-hand (–) circular waves is accomplished by splitting their propagation velocities through differences in the e ± m ± parameter. A phenomenological analysis of the permeability m and permittivity e in dispersive media serves to introduce the corresponding magneticand electric-dipole mechanisms of interaction length with the propagating signal. As an example of permeability dispersion, a Lincoln Laboratory quasi-optical Faradayrotation isolator circulator at 35 GHz (l ~ 1 cm) with a garnet-ferrite rotator element is described. At infrared wavelengths (l = 1.55 mm), where fiber-optic laser sources also require protection by passive isolation of the Faraday-rotation principle, e rather than m provides the dispersion, and the frequency is limited to the quantum energies of the electric-dipole atomic transitions peculiar to the molecular structure of the magnetic garnet. 
For optimum performance, bismuth additions to the garnet chemical formula are usually necessary. Spectroscopic and molecular theory models developed at Lincoln Laboratory to explain the bismuth effects are reviewed. In a concluding section, proposed advances in present technology are discussed in the context of future radar and telecommunications challenges.", "title": "" }, { "docid": "1ff5526e4a18c1e59b63a3de17101b11", "text": "Plug-in electric vehicles (PEVs) are equipped with onboard level-1 or level-2 chargers for home overnight or office daytime charging. In addition, off-board chargers can provide fast charging for traveling long distances. However, off-board high-power chargers are bulky, expensive, and require comprehensive evolution of charging infrastructures. An integrated onboard charger capable of fast charging of PEVs will combine the benefits of both the conventional onboard and off-board chargers, without additional weight, volume, and cost. In this paper, an innovative single-phase integrated charger, using the PEV propulsion machine and its traction converter, is introduced. The charger topology is capable of power factor correction and battery voltage/current regulation without any bulky add-on components. Ac machine windings are utilized as mutually coupled inductors, to construct a two-channel interleaved boost converter. The circuit analyses of the proposed technology, based on a permanent magnet synchronous machine (PMSM), are discussed in details. Experimental results of a 3-kW proof-of-concept prototype are carried out using a ${\\textrm{220-V}}_{{\\rm{rms}}}$, 3-phase, 8-pole PMSM. A nearly unity power factor and 3.96% total harmonic distortion of input ac current are acquired with a maximum efficiency of 93.1%.", "title": "" }, { "docid": "47db0fdd482014068538a00f7dc826a9", "text": "Importance\nThe use of palliative care programs and the number of trials assessing their effectiveness have increased.\n\n\nObjective\nTo determine the association of palliative care with quality of life (QOL), symptom burden, survival, and other outcomes for people with life-limiting illness and for their caregivers.\n\n\nData Sources\nMEDLINE, EMBASE, CINAHL, and Cochrane CENTRAL to July 2016.\n\n\nStudy Selection\nRandomized clinical trials of palliative care interventions in adults with life-limiting illness.\n\n\nData Extraction and Synthesis\nTwo reviewers independently extracted data. Narrative synthesis was conducted for all trials. Quality of life, symptom burden, and survival were analyzed using random-effects meta-analysis, with estimates of QOL translated to units of the Functional Assessment of Chronic Illness Therapy-palliative care scale (FACIT-Pal) instrument (range, 0-184 [worst-best]; minimal clinically important difference [MCID], 9 points); and symptom burden translated to the Edmonton Symptom Assessment Scale (ESAS) (range, 0-90 [best-worst]; MCID, 5.7 points).\n\n\nMain Outcomes and Measures\nQuality of life, symptom burden, survival, mood, advance care planning, site of death, health care satisfaction, resource utilization, and health care expenditures.\n\n\nResults\nForty-three RCTs provided data on 12 731 patients (mean age, 67 years) and 2479 caregivers. Thirty-five trials used usual care as the control, and 14 took place in the ambulatory setting. 
In the meta-analysis, palliative care was associated with statistically and clinically significant improvements in patient QOL at the 1- to 3-month follow-up (standardized mean difference, 0.46; 95% CI, 0.08 to 0.83; FACIT-Pal mean difference, 11.36] and symptom burden at the 1- to 3-month follow-up (standardized mean difference, -0.66; 95% CI, -1.25 to -0.07; ESAS mean difference, -10.30). When analyses were limited to trials at low risk of bias (n = 5), the association between palliative care and QOL was attenuated but remained statistically significant (standardized mean difference, 0.20; 95% CI, 0.06 to 0.34; FACIT-Pal mean difference, 4.94), whereas the association with symptom burden was not statistically significant (standardized mean difference, -0.21; 95% CI, -0.42 to 0.00; ESAS mean difference, -3.28). There was no association between palliative care and survival (hazard ratio, 0.90; 95% CI, 0.69 to 1.17). Palliative care was associated consistently with improvements in advance care planning, patient and caregiver satisfaction, and lower health care utilization. Evidence of associations with other outcomes was mixed.\n\n\nConclusions and Relevance\nIn this meta-analysis, palliative care interventions were associated with improvements in patient QOL and symptom burden. Findings for caregiver outcomes were inconsistent. However, many associations were no longer significant when limited to trials at low risk of bias, and there was no significant association between palliative care and survival.", "title": "" }, { "docid": "3509f90848c45ad34ebbd30b9d357c29", "text": "Explaining underlying causes or effects about events is a challenging but valuable task. We define a novel problem of generating explanations of a time series event by (1) searching cause and effect relationships of the time series with textual data and (2) constructing a connecting chain between them to generate an explanation. To detect causal features from text, we propose a novel method based on the Granger causality of time series between features extracted from text such as N-grams, topics, sentiments, and their composition. The generation of the sequence of causal entities requires a commonsense causative knowledge base with efficient reasoning. To ensure good interpretability and appropriate lexical usage we combine symbolic and neural representations, using a neural reasoning algorithm trained on commonsense causal tuples to predict the next cause step. Our quantitative and human analysis show empirical evidence that our method successfully extracts meaningful causality relationships between time series with textual features and generates appropriate explanation between them.", "title": "" }, { "docid": "93d40aa40a32edab611b6e8c4a652dbb", "text": "In this paper, we present a detailed design of dynamic video segmentation network (DVSNet) for fast and efficient semantic video segmentation. DVSNet consists of two convolutional neural networks: a segmentation network and a flow network. The former generates highly accurate semantic segmentations, but is deeper and slower. The latter is much faster than the former, but its output requires further processing to generate less accurate semantic segmentations. We explore the use of a decision network to adaptively assign different frame regions to different networks based on a metric called expected confidence score. Frame regions with a higher expected confidence score traverse the flow network. 
Frame regions with a lower expected confidence score have to pass through the segmentation network. We have extensively performed experiments on various configurations of DVSNet, and investigated a number of variants for the proposed decision network. The experimental results show that our DVSNet is able to achieve up to 70.4% mIoU at 19.8 fps on the Cityscape dataset. A high speed version of DVSNet is able to deliver an fps of 30.4 with 63.2% mIoU on the same dataset. DVSNet is also able to reduce up to 95% of the computational workloads.", "title": "" } ]
scidocsrr
b472806a09f6771505be8e7f72361802
Polynomial texture maps
[ { "docid": "5f89fb0df61770e83ca451900b947d43", "text": "We consider the rendering of diffuse objects under distant illumination, as specified by an environment map. Using an analytic expression for the irradiance in terms of spherical harmonic coefficients of the lighting, we show that one needs to compute and use only 9 coefficients, corresponding to the lowest-frequency modes of the illumination, in order to achieve average errors of only 1%. In other words, the irradiance is insensitive to high frequencies in the lighting, and is well approximated using only 9 parameters. In fact, we show that the irradiance can be procedurally represented simply as a quadratic polynomial in the cartesian components of the surface normal, and give explicit formulae. These observations lead to a simple and efficient procedural rendering algorithm amenable to hardware implementation, a prefiltering method up to three orders of magnitude faster than previous techniques, and new representations for lighting design and image-based rendering.", "title": "" } ]
[ { "docid": "8ce3fa727ff12f742727d5b80d8611b9", "text": "Algorithmic approaches endow deep learning systems with implicit bias that helps them generalize even in over-parametrized settings. In this paper, we focus on understanding such a bias induced in learning through dropout, a popular technique to avoid overfitting in deep learning. For single hidden-layer linear neural networks, we show that dropout tends to make the norm of incoming/outgoing weight vectors of all the hidden nodes equal. In addition, we provide a complete characterization of the optimization landscape induced by dropout.", "title": "" }, { "docid": "4d3ed5dd5d4f08c9ddd6c9b8032a77fd", "text": "The purpose of this study was to clarify the efficacy of stress radiography (stress X-P), ultrasonography (US), and magnetic resonance (MR) imaging in the detection of the anterior talofibular ligament (ATFL) injury. Thirty-four patients with ankle sprain were involved. In all patients, Stress X-P, US, MR imaging, and arthroscopy were performed. The arthroscopic results were considered to be the gold standard. The imaging results were compared with the arthroscopic results, and the accuracy calculated. Arthroscopic findings showed ATFL injury in 30 out of 34 cases. The diagnosis of ATFL injury with stress X-P, US, MR imaging were made with an accuracy of 67, 91 and 97%. US and MR imaging demonstrated the same location of the injury as arthroscopy in 63 and 93%. We have clarified the diagnostic value of stress X-P, US, and MR imaging in diagnosis of ATFL injury. We obtained satisfactory results with US and MR imaging.", "title": "" }, { "docid": "86dc000d7e78092a03d03ccd8cb670a0", "text": "Weighted deduction with aggregation is a powerful theoretical formalism that encompasses many NLP algorithms. This paper proposes a declarative specification language, Dyna; gives generalagenda-basedalgorithms for computing weights and gradients; briefly discusses Dyna-to-Dyna program transformations; and shows that a first implementation of a Dyna-to-C++ compiler produces code that is efficient enough for real NLP research, though still several times slower than hand-crafted code.", "title": "" }, { "docid": "3fce18c6e1f909b91f95667a563aa194", "text": "In this paper, we describe an approach to content-based retrieval of medical images from a database, and provide a preliminary demonstration of our approach as applied to retrieval of digital mammograms. Content-based image retrieval (CBIR) refers to the retrieval of images from a database using information derived from the images themselves, rather than solely from accompanying text indices. In the medical-imaging context, the ultimate aim of CBIR is to provide radiologists with a diagnostic aid in the form of a display of relevant past cases, along with proven pathology and other suitable information. CBIR may also be useful as a training tool for medical students and residents. The goal of information retrieval is to recall from a database information that is relevant to the user's query. The most challenging aspect of CBIR is the definition of relevance (similarity), which is used to guide the retrieval machine. In this paper, we pursue a new approach, in which similarity is learned from training examples provided by human observers. Specifically, we explore the use of neural networks and support vector machines to predict the user's notion of similarity. 
Within this framework we propose using a hierarchical learning approach, which consists of a cascade of a binary classifier and a regression module to optimize retrieval effectiveness and efficiency. We also explore how to incorporate online human interaction to achieve relevance feedback in this learning framework. Our experiments are based on a database consisting of 76 mammograms, all of which contain clustered microcalcifications (MCs). Our goal is to retrieve mammogram images containing MC clusters similar to those in a query. The performance of the retrieval system is evaluated using precision-recall curves computed using a cross-validation procedure. Our experimental results demonstrate that: 1) the learning framework can accurately predict the perceptual similarity reported by human observers, thereby serving as a basis for CBIR; 2) the learning-based framework can significantly outperform a simple distance-based similarity metric; 3) the use of the hierarchical two-stage network can improve retrieval performance; 4) relevance feedback can be effectively incorporated into this learning framework to achieve improvement in retrieval precision based on online interaction with users; and 5) the images retrieved by the network can have predictive value for the disease condition of the query.", "title": ""}, { "docid": "327d071f71bf39bcd171f85746047a02", "text": "Advances in information and communication technologies have led to the emergence of the Internet of Things (IoT). In the healthcare environment, the use of IoT technologies brings convenience to physicians and patients as they can be applied to various medical areas (such as constant real-time monitoring, patient information management, medical emergency management, blood information management, and health management). The radio-frequency identification (RFID) technology is one of the core technologies of IoT deployments in the healthcare environment. To satisfy the various security requirements of RFID technology in IoT, many RFID authentication schemes have been proposed in the past decade. Recently, elliptic curve cryptography (ECC)-based RFID authentication schemes have attracted a lot of attention and have been used in the healthcare environment. In this paper, we discuss the security requirements of RFID authentication schemes, and in particular, we present a review of ECC-based RFID authentication schemes in terms of performance and security. Although most of them cannot satisfy all security requirements while offering satisfactory performance, we found that there are three recently proposed ECC-based authentication schemes suitable for the healthcare environment in terms of their performance and security.", "title": ""}, { "docid": "cb0803dfd3763199519ff3f4427e1298", "text": "Motion deblurring is a long-standing problem in computer vision and image processing. In most previous approaches, the blurred image is modeled as the convolution of a latent intensity image with a blur kernel. However, for images captured by a real camera, the blur convolution should be applied to scene irradiance instead of image intensity, and the blurred results need to be mapped back to image intensity via the camera's response function (CRF). In this paper, we present a comprehensive study to analyze the effects of CRFs on motion deblurring. We prove that the intensity-based model closely approximates the irradiance model at low frequency regions.
However, at high frequency regions such as edges, the intensity-based approximation introduces large errors, and directly applying deconvolution on the intensity image will produce strong ringing artifacts even if the blur kernel is invertible. Based on the approximation error analysis, we further develop a dual-image-based solution that captures a pair of sharp/blurred images for both CRF estimation and motion deblurring. Experiments on synthetic and real images validate our theories and demonstrate the robustness and accuracy of our approach.", "title": ""}, { "docid": "3731d6d00291c02913fa102292bf3cad", "text": "Real-world applications of text categorization often require a system to deal with tens of thousands of categories defined over a large taxonomy. This paper addresses the problem with respect to a set of popular algorithms in text categorization, including Support Vector Machines, k-nearest neighbor, ridge regression, linear least square fit and logistic regression. By providing a formal analysis of the computational complexity of each classification method, followed by an investigation on the usage of different classifiers in a hierarchical setting of categorization, we show how the scalability of a method depends on the topology of the hierarchy and the category distributions. In addition, we are able to obtain tight bounds for the complexities by using the power law to approximate category distributions over a hierarchy. Experiments with kNN and SVM classifiers on the OHSUMED corpus are reported as concrete examples.", "title": ""}, { "docid": "dbd06c81892bc0535e2648ee21cb00b4", "text": "This paper examines the causes of conflict in Burundi and discusses strategies for building peace. The analysis of the complex relationships between distribution and group dynamics reveals that these relationships are reciprocal, implying that distribution and group dynamics are endogenous. The nature of endogenously generated group dynamics determines the type of preferences (altruistic or exclusionist), which in turn determines the type of allocative institutions and policies that prevail in the political and economic system. While unequal distribution of resources may be socially inefficient, it nonetheless can be rational from the perspective of the ruling elite, especially because inequality perpetuates dominance. However, as the unequal distribution of resources generates conflict, maintaining a system based on inequality is difficult because it requires ever-increasing investments in repression. It is therefore clear that if the new Burundian leadership is serious about building peace, it must engineer institutions that uproot the legacy of discrimination and promote equal opportunity for social mobility for all members of ethnic groups and regions.", "title": ""}, { "docid": "d7a9465ac031cf7be6f3e74276805f0f", "text": "Half of American workers have a level of education that does not match the level of education required for their job. Of these, a majority are overeducated, i.e. have more schooling than necessary to perform their job (see, e.g., Leuven & Oosterbeek, 2011). In this paper, we use data from the National Longitudinal Survey of Youth 1979 (NLSY79) combined with the pooled 1989-1991 waves of the CPS to provide some of the first evidence regarding the dynamics of overeducation over the life cycle.
Shedding light on this question is key to disentangling the role played by labor market frictions versus other factors such as selection on unobservables, compensating differentials or career mobility prospects. Overall, our results suggest that overeducation is a fairly persistent phenomenon, with 79% of workers remaining overeducated after one year. Initial overeducation also has an impact on wages much later in the career, which points to the existence of scarring effects. Finally, we find some evidence of duration dependence, with a 6.5 point decrease in the exit rate from overeducation after having spent five years overeducated. JEL Classification: J24; I21.", "title": ""}, { "docid": "de59e5e248b5df0f92d7fed8c699d68a", "text": "Most modern devices and cryptoalgorithms are vulnerable to a new class of attacks called side-channel attacks, which analyse physical parameters of the system in order to recover the secret key. The most widespread techniques are simple and differential power analysis attacks in combination with statistical tools. Few studies cover the use of machine learning methods for pre-processing and key classification tasks. In this paper, we investigate the applicability of machine learning methods and their characteristics. Following the theoretical results, we examine power traces of AES encryption with the Support Vector Machines algorithm and decision trees, and provide a roadmap for further research.", "title": ""}, { "docid": "c1d8848317256214b76be3adb87a7d49", "text": "We are interested in estimating the average effect of a binary treatment on a scalar outcome. If assignment to the treatment is exogenous or unconfounded, that is, independent of the potential outcomes given covariates, biases associated with simple treatment-control average comparisons can be removed by adjusting for differences in the covariates. Rosenbaum and Rubin (1983) show that adjusting solely for differences between treated and control units in the propensity score removes all biases associated with differences in covariates. Although adjusting for differences in the propensity score removes all the bias, this can come at the expense of efficiency, as shown by Hahn (1998), Heckman, Ichimura and Todd (1998), and Robins, Mark and Newey (1992). We show that weighting by the inverse of a nonparametric estimate of the propensity score, rather than the true propensity score, leads to an efficient estimate of the average treatment effect. We provide intuition for this result by showing that this estimator can be interpreted as an empirical likelihood estimator that efficiently incorporates the information about the propensity score.", "title": ""}, { "docid": "b395aa3ae750ddfd508877c30bae3a38", "text": "This paper presents a technology review of voltage-source-converter topologies for industrial medium-voltage drives. In this highly active area, different converter topologies and circuits have found their application in the market. This paper covers the high-power voltage-source inverter and the most used multilevel-inverter topologies, including the neutral-point-clamped, cascaded H-bridge, and flying-capacitor converters. This paper presents the operating principle of each topology and a review of the most relevant modulation methods, focused mainly on those used by industry. In addition, the latest advances and future trends of the technology are discussed.
It is concluded that the topology and modulation-method selection are closely related to each particular application, leaving a space on the market for all the different solutions, depending on their unique features and limitations like power or voltage level, dynamic performance, reliability, costs, and other technical specifications.", "title": ""}, { "docid": "48432393e1c320c051b59427db0620b5", "text": "The design of removable partial dentures (RPDs) is an important factor for good prognostication. The purpose of this study was to clarify the effectiveness of denture designs and to identify the components that had high rates of failure and complications. A total of 91 RPDs, worn by 65 patients for 2-10 years, were assessed. Removable partial dentures were classified into four groups: telescopic dentures (TDs), ordinary clasp dentures (ODs), modified clasp dentures (MDs) and combination dentures (CDs). The failure rates of abutment teeth were the highest and those of retainers were the second highest. The failure rates of connectors were generally low, but they increased suddenly after 6 years. Complication and failure rates of denture bases and artificial teeth were generally low. Complication and failure rates of TDs were high at abutment teeth and low at retainers. Complication and failure rates of ODs were high at retainers.", "title": ""}, { "docid": "660e8d6847d06970e37455b60198c6b6", "text": "Usually, if researchers want to understand the research status of a field, they need to browse a great number of related academic publications. Fortunately, automatic document summarization can be applied to help them work more efficiently by taking a glance at specific scientific topics. In this paper, we focus on summary generation from citation content. An automatic tool named CitationAS is built, whose three core components are clustering algorithms, label generation and important-sentence extraction methods. In experiments, we use bisecting K-means, Lingo and STC to cluster retrieved citation content. Then Word2Vec, WordNet and their combination are applied to generate cluster labels. Next, we employ two methods, TF-IDF and MMR, to extract important sentences, which are used to generate summaries. Finally, we adopt a gold standard to evaluate the summaries obtained from CitationAS. According to the evaluations, we find the best label generation method for each clustering algorithm. We also discover that the combination of Word2Vec and WordNet does not perform well compared with using them separately on the three clustering algorithms. The combination of the Lingo algorithm, the Word2Vec label generation method and the TF-IDF sentence extraction approach achieves the highest summary quality.", "title": ""}, { "docid": "1c117c63455c2b674798af0e25e3947c", "text": "We are studying the manufacturing performance of semiconductor wafer fabrication plants in the US, Asia, and Europe. There are great similarities in production equipment, manufacturing processes, and products produced at semiconductor fabs around the world. However, detailed comparisons over multi-year intervals show that important quantitative indicators of productivity, including defect density (yield), major equipment production rates, wafer throughput time, and effective new process introduction to manufacturing, vary by factors of 3 to as much as 5 across an international sample of 28 fabs.
We conduct on-site observations and interviews with manufacturing personnel at all levels, from operator to general manager, to better understand reasons for the observed wide variations in performance. We have identified important factors in the areas of information systems, organizational practices, process and technology improvements, and production control that correlate strongly with high productivity. The optimum manufacturing strategy is different for commodity products, high-value proprietary products, and foundry business.", "title": ""}, { "docid": "570e03101ae116e2f20ab6337061ec3f", "text": "This study explored the potential for using seed cake from hemp (Cannabis sativa L.) as a protein feed for dairy cows. The aim was to evaluate the effects of increasing the proportion of hempseed cake (HC) in the diet on milk production and milk composition. Forty Swedish Red dairy cows were involved in a 5-week dose-response feeding trial. The cows were allocated randomly to one of four experimental diets containing on average 494 g/kg of grass silage and 506 g/kg of concentrate on a dry matter (DM) basis. Diets containing 0 g (HC0), 143 g (HC14), 233 g (HC23) or 318 g (HC32) HC/kg DM were achieved by replacing an increasing proportion of compound pellets with cold-pressed HC. Increasing the proportion of HC resulted in dietary crude protein (CP) concentrations ranging from 126 for HC0 to 195 g CP/kg DM for HC32. Further effects on the composition of the diet with increasing proportions of HC were higher fat and NDF and lower starch concentrations. There were no linear or quadratic effects on DM intake, but increasing the proportion of HC in the diet resulted in linear increases in fat and NDF intake, as well as CP intake (P < 0.001), and a linear decrease in starch intake (P < 0.001). The proportion of HC had significant quadratic effects on the yields of milk, energy-corrected milk (ECM) and milk protein, fat and lactose. The curvilinear response of all yield parameters indicated maximum production from cows fed diet HC14. Increasing the proportion of HC resulted in linear decreases in both milk protein and milk fat concentration (P = 0.005 and P = 0.017, respectively), a linear increase in milk urea (P < 0.001), and a linear decrease in CP efficiency (milk protein/CP intake; P < 0.001). In conclusion, the HC14 diet, corresponding to a dietary CP concentration of 157 g/kg DM, resulted in the maximum yields of milk and ECM by dairy cows in this study.", "title": ""}, { "docid": "b206a5f5459924381ef6c46f692c7052", "text": "The Konstanz Information Miner is a modular environment, which enables easy visual assembly and interactive execution of a data pipeline. It is designed as a teaching, research and collaboration platform, which enables simple integration of new algorithms and tools as well as data manipulation or visualization methods in the form of new modules or nodes. In this paper we describe some of the design aspects of the underlying architecture, briefly sketch how new nodes can be incorporated, and highlight some of the new features of version 2.0.", "title": ""}, { "docid": "f83ca1c2732011e9a661f8cf9a0516ac", "text": "We provide a characterization of pseudoentropy in terms of hardness of sampling: Let (X,B) be jointly distributed random variables such that B takes values in a polynomial-sized set.
We show that B is computationally indistinguishable from a random variable of higher Shannon entropy given X if and only if there is no probabilistic polynomial-time S such that (X,S(X)) has small KL divergence from (X,B). This can be viewed as an analogue of the Impagliazzo Hardcore Theorem (FOCS '95) for Shannon entropy (rather than min-entropy).\n Using this characterization, we show that if f is a one-way function, then (f(U_n),U_n) has \"next-bit pseudoentropy\" at least n+log n, establishing a conjecture of Haitner, Reingold, and Vadhan (STOC '10). Plugging this into the construction of Haitner et al., this yields a simpler construction of pseudorandom generators from one-way functions. In particular, the construction only performs hashing once, and only needs hash functions that are randomness extractors (e.g. universal hash functions) rather than needing them to support \"local list-decoding\" (as in the Goldreich--Levin hardcore predicate, STOC '89).\n With an additional idea, we also show how to improve the seed length of the pseudorandom generator to Õ(n^3), compared to O(n^4) in the construction of Haitner et al.", "title": ""}, { "docid": "3b06bc2d72e0ae7fa75873ed70e23fc3", "text": "Transaction trace analysis is a key utility for marketing, trend monitoring, and fraud detection purposes. However, such traces can also be used for the design and verification of contextual risk management systems for card-present transactions. In this paper, we presented a novel approach to collecting detailed transaction traces directly from the payment terminal. Thanks to that, it is possible to analyze each transaction step precisely, including its frequency and timing. We also demonstrated our approach to analyzing such data based on a real-life experiment. Finally, we concluded this paper with important findings for designers of such systems.", "title": ""}, { "docid": "d37f648a06d6418a0e816ce000056136", "text": "Recently it has been shown that policy-gradient methods for reinforcement learning can be utilized to train deep end-to-end systems directly on non-differentiable metrics for the task at hand. In this paper we consider the problem of optimizing image captioning systems using reinforcement learning, and show that by carefully optimizing our systems using the test metrics of the MSCOCO task, significant gains in performance can be realized. Our systems are built using a new optimization approach that we call self-critical sequence training (SCST). SCST is a form of the popular REINFORCE algorithm that, rather than estimating a baseline to normalize the rewards and reduce variance, utilizes the output of its own test-time inference algorithm to normalize the rewards it experiences. Using this approach, estimating the reward signal (as actor-critic methods must do) and estimating normalization (as REINFORCE algorithms typically do) is avoided, while at the same time harmonizing the model with respect to its test-time inference procedure. Empirically we find that directly optimizing the CIDEr metric with SCST and greedy decoding at test-time is highly effective. Our results on the MSCOCO evaluation server establish a new state-of-the-art on the task, improving the best result in terms of CIDEr from 104.9 to 114.7.", "title": "" } ]
scidocsrr
245988ae1d9ae4110048135ec0581fb2
Multimethod Longitudinal HIV Drug Resistance Analysis in Antiretroviral-Therapy-Naive Patients.
[ { "docid": "7fe1cea4990acabf7bc3c199d3c071ce", "text": "Comparative analysis of molecular sequence data is essential for reconstructing the evolutionary histories of species and inferring the nature and extent of selective forces shaping the evolution of genes and species. Here, we announce the release of Molecular Evolutionary Genetics Analysis version 5 (MEGA5), which is a user-friendly software for mining online databases, building sequence alignments and phylogenetic trees, and using methods of evolutionary bioinformatics in basic biology, biomedicine, and evolution. The newest addition in MEGA5 is a collection of maximum likelihood (ML) analyses for inferring evolutionary trees, selecting best-fit substitution models (nucleotide or amino acid), inferring ancestral states and sequences (along with probabilities), and estimating evolutionary rates site-by-site. In computer simulation analyses, ML tree inference algorithms in MEGA5 compared favorably with other software packages in terms of computational efficiency and the accuracy of the estimates of phylogenetic trees, substitution parameters, and rate variation among sites. The MEGA user interface has now been enhanced to be activity driven to make it easier for the use of both beginners and experienced scientists. This version of MEGA is intended for the Windows platform, and it has been configured for effective use on Mac OS X and Linux desktops. It is available free of charge from http://www.megasoftware.net.", "title": "" } ]
[ { "docid": "1390f0c41895ecabbb16c54684b88ca1", "text": "Deep learning has proven to be a powerful tool for computer vision and has seen widespread adoption for numerous tasks. However, deep learning algorithms are known to be vulnerable to adversarial examples. These adversarial inputs are created such that, when provided to a deep learning algorithm, they are very likely to be mislabeled. This can be problematic when deep learning is used to assist in safety critical decisions. Recent research has shown that classifiers can be attacked by physical adversarial examples under various physical conditions. Given the fact that stateof-the-art objection detection algorithms are harder to be fooled by the same set of adversarial examples, here we show that these detectors can also be attacked by physical adversarial examples. In this note, we briefly show both static and dynamic test results. We design an algorithm that produces physical adversarial inputs, which can fool the YOLO object detector and can also attack Faster-RCNN with relatively high success rate based on transferability. Furthermore, our algorithm can compress the size of the adversarial inputs to stickers that, when attached to the targeted object, result in the detector either mislabeling or not detecting the object a high percentage of the time. This note provides a small set of results. Our upcoming paper will contain a thorough evaluation on other object detectors, and will present the algorithm.", "title": "" }, { "docid": "d6dba7a89bc123bc9bb616df6faee2bc", "text": "Continuing interest in digital games indicated that it would be useful to update [Authors’, 2012] systematic literature review of empirical evidence about the positive impacts an d outcomes of games. Since a large number of papers was identified in th e period from 2009 to 2014, the current review focused on 143 papers that provided higher quality evidence about the positive outcomes of games. [Authors’] multidimensional analysis of games and t heir outcomes provided a useful framework for organising the varied research in this area. The mo st frequently occurring outcome reported for games for learning was knowledge acquisition, while entertain me t games addressed a broader range of affective, behaviour change, perceptual and cognitive and phys iological outcomes. Games for learning were found across varied topics with STEM subjects and health the most popular. Future research on digital games would benefit from a systematic programme of experi m ntal work, examining in detail which game features are most effective in promoting engagement and supporting learning.", "title": "" }, { "docid": "7064d73864a64e2b76827e3252390659", "text": "Abstmct-In his original paper on the subject, Shannon found upper and lower bounds for the entropy of printed English based on the number of trials required for a subject to guess subsequent symbols in a given text. The guessing approach precludes asymptotic consistency of either the upper or lower bounds except for degenerate ergodic processes. Shannon’s technique of guessing the next symbol is altered by having the subject place sequential bets on the next symbol of text. lf S,, denotes the subject’s capital after n bets at 27 for 1 odds, and lf it is assumed that the subject hnows the underlying prpbabillty distribution for the process X, then the entropy estimate ls H,(X) =(l -(l/n) log,, S,) log, 27 bits/symbol. If the subject does npt hnow the true probabllty distribution for the stochastic process, then Z&(X! 
ls an asymptotic upper bound for the true entropy. ff X is stationary, EH,,(X)+H(X), H(X) bell the true entropy of the process. Moreovzr, lf X is ergodic, then by the SLOW McMilhm-Brebnan theorem H,,(X)+H(X) with probability one. Preliminary indications are that English text has au entropy of approximately 1.3 bits/symbol, which agrees well with Shannon’s estimate.", "title": "" }, { "docid": "ac1f2a1a96ab424d9b69276efd4f1ed4", "text": "This paper describes various systems from the University of Minnesota, Duluth that participated in the CLPsych 2015 shared task. These systems learned decision lists based on lexical features found in training data. These systems typically had average precision in the range of .70 – .76, whereas a random baseline attained .47 – .49.", "title": "" }, { "docid": "cf131167592f02790a1b4e38ed3b5375", "text": "Monocular 3D facial shape reconstruction from a single 2D facial image has been an active research area due to its wide applications. Inspired by the success of deep neural networks (DNN), we propose a DNN-based approach for End-to-End 3D FAce Reconstruction (UH-E2FAR) from a single 2D image. Different from recent works that reconstruct and refine the 3D face in an iterative manner using both an RGB image and an initial 3D facial shape rendering, our DNN model is end-to-end, and thus the complicated 3D rendering process can be avoided. Moreover, we integrate in the DNN architecture two components, namely a multi-task loss function and a fusion convolutional neural network (CNN) to improve facial expression reconstruction. With the multi-task loss function, 3D face reconstruction is divided into neutral 3D facial shape reconstruction and expressive 3D facial shape reconstruction. The neutral 3D facial shape is class-specific. Therefore, higher layer features are useful. In comparison, the expressive 3D facial shape favors lower or intermediate layer features. With the fusion-CNN, features from different intermediate layers are fused and transformed for predicting the 3D expressive facial shape. Through extensive experiments, we demonstrate the superiority of our end-to-end framework in improving the accuracy of 3D face reconstruction.", "title": "" }, { "docid": "7cd091555dd870cc1a71a4318bb5ff8d", "text": "This paper presents the design and simulation of a wideband, medium gain, light weight, wide bandwidth pyramidal horn antenna feed for microwave applications. The horn was designed using approximation method to calculate the gain in mat lab and simulated using CST microwave studio. The proposed antenna operates within 1-2 GHz (L-band). The horn is supported by a rectangular wave guide. It is linearly polarized and shows wide bandwidth with a gain of 15.3dB. The horn is excited with the monopole which is loaded with various top hat loading such as rectangular disc, circular disc, annular disc, L-type, T-type, Cone shape, U-shaped plates etc. and checked their performances for return loss as well as bandwidth. The circular disc and annular ring gives the low return loss and wide bandwidth as well as low VSWR. The annular ring gave good VSWR and return loss compared to the circular disc. The far field radiation pattern is obtained as well as Efield & H-field analysis for L-band pyramidal horn has been observed, simulated and optimized using CST Microwave Studio. 
The simulation results show that the pyramidal horn structure exhibits low VSWR as well as a good radiation pattern over the L-band.", "title": ""}, { "docid": "55ada092fd628aead0fd64d20eff7b69", "text": "BER estimation from measured EVM values is shown experimentally for QPSK and 16QAM optical signals with 28 GBd. Various impairments, such as gain imbalance, quadrature error and timing skew, are introduced into the transmitted signal in order to evaluate the robustness of the method. The EVM was measured using two different real-time sampling systems and the EVM measurement accuracy is discussed.", "title": ""}, { "docid": "6d552edc0d60470ce942b9d57b6341e3", "text": "A rich element of cooperative games is mechanics that communicate. Unlike automated awareness cues and synchronous verbal communication, cooperative communication mechanics enable players to share information and direct action by engaging with game systems. These include both explicitly communicative mechanics, such as built-in pings that direct teammates' attention to specific locations, and emergent communicative mechanics, where players develop their own conventions about the meaning of in-game activities, like jumping to get attention. We use a grounded theory approach with 40 digital games to identify and classify the types of cooperative communication mechanics game designers might use to enable cooperative play. We provide details on the classification scheme and offer a discussion on the implications of cooperative communication mechanics.", "title": ""}, { "docid": "aa5daa83656a2265dc27ec6ee5e3c1cb", "text": "Firms traditionally rely on interviews and focus groups to identify customer needs for marketing strategy and product development. User-generated content (UGC) is a promising alternative source for identifying customer needs. However, established methods are neither efficient nor effective for large UGC corpora because much content is non-informative or repetitive. We propose a machine-learning approach to facilitate qualitative analysis by selecting content for efficient review. We use a convolutional neural network to filter out non-informative content and cluster dense sentence embeddings to avoid sampling repetitive content. We further address two key questions: Are UGC-based customer needs comparable to interview-based customer needs? Do the machine-learning methods improve customer-need identification? These comparisons are enabled by a custom dataset of customer needs for oral care products identified by professional analysts using industry-standard experiential interviews. The analysts also coded 12,000 UGC sentences to identify which previously identified customer needs and/or new customer needs were articulated in each sentence. We show that (1) UGC is at least as valuable as a source of customer needs for product development, and likely more valuable, than conventional methods, and (2) machine-learning methods improve the efficiency of identifying customer needs from UGC (unique customer needs per unit of professional services cost).", "title": ""}, { "docid": "4d5119db64e4e0a31064bd22b47e2534", "text": "The reliability and scalability of an application depend on how its application state is managed. Running applications at massive scale requires datastores that can scale to operate seamlessly across thousands of servers and can deal with various failure modes such as server failures, datacenter failures and network partitions.
The goal of Amazon DynamoDB is to eliminate this complexity and operational overhead for our customers by offering a seamlessly scalable database service. In this talk, I will discuss how developers can build applications on DynamoDB without having to deal with the complexity of operating a large-scale database.", "title": ""}, { "docid": "6d3410de121ffe037eafd5f30daa7252", "text": "One of the more important issues in the development of larger scale complex systems (product development period of two or more years) is accommodating changes to requirements. Requirements gathered for larger scale systems evolve during lengthy development periods due to changes in software and business environments, new user needs and technological advancements. Agile methods, which focus on accommodating change even late in the development lifecycle, can be adopted for the development of larger scale systems. However, as currently applied, these practices are not always suitable for the development of such systems. We propose a soft-structured framework combining the principles of agile and conventional software development that addresses the issue of rapidly changing requirements for larger scale systems. The framework consists of two parts: (1) a soft-structured requirements gathering approach that reflects the agile philosophy i.e., the Agile Requirements Generation Model and (2) a tailored development process that can be applied to either small or larger scale systems.", "title": ""}, { "docid": "ce99ce3fb3860e140164e7971291f0fa", "text": "We describe the development and psychometric characteristics of the Generalized Workplace Harassment Questionnaire (GWHQ), a 29-item instrument developed to assess harassing experiences at work in five conceptual domains: verbal aggression, disrespect, isolation/exclusion, threats/bribes, and physical aggression. Over 1700 current and former university employees completed the GWHQ at three time points. Factor analytic results at each wave of data suggested a five-factor solution that did not correspond to the original five conceptual factors. We suggest a revised scoring scheme for the GWHQ utilizing four of the empirically extracted factors: covert hostility, verbal hostility, manipulation, and physical hostility. Covert hostility was the most frequently experienced type of harassment, followed by verbal hostility, manipulation, and physical hostility. Verbal hostility, covert hostility, and manipulation were found to be significant predictors of psychological distress.", "title": ""}, { "docid": "e06646b7d2bd6ee83c4d557f4215e143", "text": "Deep generative models have been praised for their ability to learn smooth latent representation of images, text, and audio, which can then be used to generate new, plausible data. However, current generative models are unable to work with graphs due to their unique characteristics—their underlying structure is not Euclidean or grid-like, they remain isomorphic under permutation of the node labels, and they come with a different number of nodes and edges. In this paper, we propose NeVAE, a novel variational autoencoder for graphs, whose encoder and decoder are specially designed to account for the above properties by means of several technical innovations. In addition, by using masking, the decoder is able to guarantee a set of local structural and functional properties in the generated graphs.
Experiments reveal that our model is able to learn and mimic the generative process of several well-known random graph models and can be used to discover new molecules more effectively than several state of the art methods. Moreover, by utilizing Bayesian optimization over the continuous latent representation of molecules our model finds, we can also identify molecules that maximize certain desirable properties more effectively than alternatives.", "title": "" }, { "docid": "ddb0a3bc0a9367a592403d0fc0cec0a5", "text": "Fluorescence microscopy is a powerful quantitative tool for exploring regulatory networks in single cells. However, the number of molecular species that can be measured simultaneously is limited by the spectral overlap between fluorophores. Here we demonstrate a simple but general strategy to drastically increase the capacity for multiplex detection of molecules in single cells by using optical super-resolution microscopy (SRM) and combinatorial labeling. As a proof of principle, we labeled mRNAs with unique combinations of fluorophores using fluorescence in situ hybridization (FISH), and resolved the sequences and combinations of fluorophores with SRM. We measured mRNA levels of 32 genes simultaneously in single Saccharomyces cerevisiae cells. These experiments demonstrate that combinatorial labeling and super-resolution imaging of single cells is a natural approach to bring systems biology into single cells.", "title": "" }, { "docid": "7a67bccffa6222f8129a90933962e285", "text": "BACKGROUND\nPast research has found that playing a classic prosocial video game resulted in heightened prosocial behavior when compared to a control group, whereas playing a classic violent video game had no effect. Given purported links between violent video games and poor social behavior, this result is surprising. Here our aim was to assess whether this finding may be due to the specific games used. That is, modern games are experienced differently from classic games (more immersion in virtual environments, more connection with characters, etc.) and it may be that playing violent video games impacts prosocial behavior only when contemporary versions are used.\n\n\nMETHODS AND FINDINGS\nExperiments 1 and 2 explored the effects of playing contemporary violent, non-violent, and prosocial video games on prosocial behavior, as measured by the pen-drop task. We found that slight contextual changes in the delivery of the pen-drop task led to different rates of helping but that the type of game played had little effect. Experiment 3 explored this further by using classic games. Again, we found no effect.\n\n\nCONCLUSIONS\nWe failed to find evidence that playing video games affects prosocial behavior. Research on the effects of video game play is of significant public interest. It is therefore important that speculation be rigorously tested and findings replicated. Here we fail to substantiate conjecture that playing contemporary violent video games will lead to diminished prosocial behavior.", "title": "" }, { "docid": "25c41bdba8c710b663cb9ad634b7ae5d", "text": "Massive data streams are now fundamental to many data processing applications. For example, Internet routers produce large scale diagnostic data streams. Such streams are rarely stored in traditional databases, and instead must be processed “on the fly” as they are produced. Similarly, sensor networks produce multiple data streams of observations from their sensors. 
There is growing focus on manipulating data streams, and hence, there is a need to identify basic operations of interest in managing data streams, and to support them efficiently. We propose computation of the Hamming norm as a basic operation of interest. The Hamming norm formalises ideas that are used throughout data processing. When applied to a single stream, the Hamming norm gives the number of distinct items that are present in that data stream, which is a statistic of great interest in databases. When applied to a pair of streams, the Hamming norm gives an important measure of (dis)similarity: the number of unequal item counts in the two streams. Hamming norms have many uses in comparing data streams. We present a novel approximation technique for estimating the Hamming norm for massive data streams; this relies on what we call the “l0 sketch” and we prove its accuracy. We test our approximation method on a large quantity of synthetic and real stream data, and show that the estimation is accurate to within a few percentage points.", "title": ""}, { "docid": "fa63fbdfc0be5f2675c5f65ee0798f88", "text": "Twitter is a microblogging site where users tweet their opinions about a service provider's Twitter page in words, and it is useful to analyze the sentiments expressed there. Analysis here means determining whether the view of a user or customer is positive, negative, neutral, or in between (positive-neutral or negative-neutral), and representing it. In such a system or tool, tweets are fetched from Twitter regarding shopping websites or any other Twitter pages, such as businesses, mobile brands, clothing brands, or live events like sports matches and elections, to obtain their polarity. These results will help the service provider find out how customers view their products.", "title": ""}, { "docid": "db806183810547435075eb6edd28d630", "text": "Bilinear models provide an appealing framework for mixing and merging information in Visual Question Answering (VQA) tasks. They help to learn high level associations between question meaning and visual concepts in the image, but they suffer from huge dimensionality issues. We introduce MUTAN, a multimodal tensor-based Tucker decomposition to efficiently parametrize bilinear interactions between visual and textual representations. In addition to the Tucker framework, we design a low-rank matrix-based decomposition to explicitly constrain the interaction rank. With MUTAN, we control the complexity of the merging scheme while keeping nice interpretable fusion relations. We show how the Tucker decomposition framework generalizes some of the latest VQA architectures, providing state-of-the-art results.", "title": ""}, { "docid": "a2c7ee4e586bc456ad6bfcdf3b1cc84b", "text": "We present a taxonomy of the Artificial Intelligence (AI) methods currently applied for algorithmic music composition. The area known as algorithmic music composition concerns the research on processes of composing music pieces automatically by a computer system.
The use of AI for algorithmic music consists in the application of AI techniques as the main tools for composition generation. There are several models of AI used in music composition, such as heuristics in evolutionary algorithms, neural networks, stochastic methods, generative models, agents, decision trees, declarative programming and grammatical representation. In this survey we present the trends in techniques for automatic music composition. We summarize several research projects from the last seven years and highlight the directions of music composition based on AI techniques.", "title": ""}, { "docid": "7feea3bcba08a889ba779a23f79556d7", "text": "In this report, monodispersed ultra-small Gd2O3 nanoparticles capped with hydrophobic oleic acid (OA) were synthesized with an average particle size of 2.9 nm. Two methods were introduced to modify the surface coating to be hydrophilic for bio-applications. With a hydrophilic coating, the polyvinyl pyrrolidone (PVP) coated Gd2O3 nanoparticles (Gd2O3-PVP) showed a reduced longitudinal T1 relaxation time compared with OA and cetyltrimethylammonium bromide (CTAB) co-coated Gd2O3 (Gd2O3-OA-CTAB) in the relaxation study. The Gd2O3-PVP was thus chosen for further application study in MRI with an improved longitudinal relaxivity r1 of 12.1 mM(-1) s(-1) at 7 T, which is around 3 times that of the commercial contrast agent Magnevist(®). In vitro cell viability in HK-2 cells indicated negligible cytotoxicity of Gd2O3-PVP within the preclinical dosage. An in vivo MR imaging study of Gd2O3-PVP nanoparticles demonstrated considerable signal enhancement in the liver and kidney with a long blood circulation time. Notably, the OA capping agent was replaced by PVP through ligand exchange on the Gd2O3 nanoparticle surface. The hydrophilic PVP grants the Gd2O3 nanoparticles a polar surface for bio-application, and the obtained Gd2O3-PVP could be used as an in vivo indicator of reticuloendothelial activity.", "title": "" } ]
scidocsrr