| Column | Type | Min | Max |
|---|---|---|---|
| `query_id` | string length | 32 | 32 |
| `query` | string length | 6 | 5.38k |
| `positive_passages` | list length | 1 | 22 |
| `negative_passages` | list length | 9 | 100 |
| `subset` | categorical string (7 values) | | |
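A minimal loading sketch for rows with this schema, using the Hugging Face `datasets` library; the repository id below is a hypothetical placeholder (the dataset's actual Hub name is not given here), and the two sample rows that follow illustrate the fields being printed.

```python
# Minimal sketch, assuming the dataset is hosted on the Hugging Face Hub.
# "org/retrieval-dataset" is a hypothetical repository id; substitute the real one.
from datasets import load_dataset

ds = load_dataset("org/retrieval-dataset", split="train")
row = ds[0]

print(row["query_id"])                # 32-character identifier
print(row["query"])                   # free-text query (6 to ~5.38k characters)
print(len(row["positive_passages"]))  # 1 to 22 relevant passages
print(len(row["negative_passages"]))  # 9 to 100 non-relevant passages
print(row["subset"])                  # one of 7 subset names, e.g. "scidocsrr"
```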
query_id: 3d2a897a6ff59effdff33d1d3273fcce
query: A Model for Learning the Semantics of Pictures
[ { "docid": "f2603a583b63c1c8f350b3ddabe16642", "text": "We consider the problem of modeling annotated data---data with multiple types where the instance of one type (such as a caption) serves as a description of the other type (such as an image). We describe three hierarchical probabilistic mixture models which aim to describe such data, culminating in correspondence latent Dirichlet allocation, a latent variable model that is effective at modeling the joint distribution of both types and the conditional distribution of the annotation given the primary type. We conduct experiments on the Corel database of images and captions, assessing performance in terms of held-out likelihood, automatic annotation, and text-based image retrieval.", "title": "" }, { "docid": "7ebff2391401cef25b27d510675e9acd", "text": "We present a new approach for modeling multi-modal data sets, focusing on the specific case of segmented images with associated text. Learning the joint distribution of image regions and words has many applications. We consider in detail predicting words associated with whole images (auto-annotation) and corresponding to particular image regions (region naming). Auto-annotation might help organize and access large collections of images. Region naming is a model of object recognition as a process of translating image regions to words, much as one might translate from one language to another. Learning the relationships between image regions and semantic correlates (words) is an interesting example of multi-modal data mining, particularly because it is typically hard to apply data mining techniques to collections of images. We develop a number of models for the joint distribution of image regions and words, including several which explicitly learn the correspondence between regions and words. We study multi-modal and correspondence extensions to Hofmann’s hierarchical clustering/aspect model, a translation model adapted from statistical machine translation (Brown et al.), and a multi-modal extension to mixture of latent Dirichlet allocation (MoM-LDA). All models are assessed using a large collection of annotated images of real c ©2003 Kobus Barnard, Pinar Duygulu, David Forsyth, Nando de Freitas, David Blei and Michael Jordan. BARNARD, DUYGULU, FORSYTH, DE FREITAS, BLEI AND JORDAN scenes. We study in depth the difficult problem of measuring performance. For the annotation task, we look at prediction performance on held out data. We present three alternative measures, oriented toward different types of task. Measuring the performance of correspondence methods is harder, because one must determine whether a word has been placed on the right region of an image. We can use annotation performance as a proxy measure, but accurate measurement requires hand labeled data, and thus must occur on a smaller scale. We show results using both an annotation proxy, and manually labeled data.", "title": "" } ]
[ { "docid": "ddb01f456d904151238ecf695483a2f4", "text": "If there were only one truth, you couldn't paint a hundred canvases on the same theme.", "title": "" }, { "docid": "1423deed29f33cc6e81760b8306ffd15", "text": "In this paper, we describe RavenClaw, a plan-based, task-independent dialog management framework. RavenClaw isolates the domain-specific aspects of the dialog control logic from domain-independent conversational skills, and in the process facilitates rapid development of mixed-initiative systems operating in complex, task-oriented domains. System developers can focus exclusively on describing the dialog task control logic, while a large number of domain-independent conversational skills such as error handling, timing and turn-taking are transparently supported and enforced by the RavenClaw dialog engine. To date, RavenClaw has been used to construct and deploy a large number of systems, spanning different domains and interaction styles, such as information access, guidance through procedures, command-and-control, medical diagnosis, etc. The framework has easily adapted to all of these domains, indicating a high degree of versatility and scalability. 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "f6449c1e77e5310cd0cae5718ed9591f", "text": "Individuals with strong self-regulated learning (SRL) skills, characterized by the ability to plan, manage and control their learning process, can learn faster and outperform those with weaker SRL skills. SRL is critical in learning environments that provide low levels of support and guidance, as is commonly the case in Massive Open Online Courses (MOOCs). Learners can be trained to engage in SRL and actively supported with prompts and activities. However, effective implementation of learner support systems in MOOCs requires an understanding of which SRL strategies are most effective and how these strategies manifest in online behavior. Moreover, identifying learner characteristics that are predictive of weaker SRL skills can advance efforts to provide targeted support without obtrusive survey instruments. We investigated SRL in a sample of 4,831 learners across six MOOCs based on individual records of overall course achievement, interactions with course content, and survey responses. We found that goal setting and strategic planning predicted attainment of personal course goals, while help seeking was associated with lower goal attainment. Learners with stronger SRL skills were more likely to revisit previously studied course materials, especially course assessments. Several learner characteristics, including demographics and motivation, predicted learners’ SRL skills. We discuss implications for theory and the development of learning environments that provide", "title": "" }, { "docid": "e0079af0b45bf8d6fc194e59217e2a53", "text": "Acral peeling skin syndrome (APSS) is an autosomal recessive skin disorder characterized by acral blistering and peeling of the outermost layers of the epidermis. It is caused by mutations in the gene for transglutaminase 5, TGM5. Here, we report on clinical and molecular findings in 11 patients and extend the TGM5 mutation database by four, to our knowledge, previously unreported mutations: p.M1T, p.L41P, p.L214CfsX15, and p.S604IfsX9. The recurrent mutation p.G113C was found in 9 patients, but also in 3 of 100 control individuals in a heterozygous state, indicating that APSS might be more widespread than hitherto expected. 
Using quantitative real-time PCR, immunoblotting, and immunofluorescence analysis, we demonstrate that expression and distribution of several epidermal differentiation markers and corneodesmosin (CDSN) is altered in APSS keratinocytes and skin. Although the expression of transglutaminases 1 and 3 was not changed, we found an upregulation of keratin 1, keratin 10, involucrin, loricrin, and CDSN, probably as compensatory mechanisms for stabilization of the epidermal barrier. Our results give insights into the consequences of TGM5 mutations on terminal epidermal differentiation.", "title": "" }, { "docid": "5a82fe10b1c7e2f3d4838c91bba9e6a0", "text": "The ability to assess an area of interest in 3 dimensions might benefit both novice and experienced clinicians alike. High-resolution limited cone-beam volumetric tomography (CBVT) has been designed for dental applications. As opposed to sliced-image data of conventional computed tomography (CT) imaging, CBVT captures a cylindrical volume of data in one acquisition and thus offers distinct advantages over conventional medical CT. These advantages include increased accuracy, higher resolution, scan-time reduction, and dose reduction. Specific endodontic applications of CBVT are being identified as the technology becomes more prevalent. CBVT has great potential to become a valuable tool in the modern endodontic practice. The objectives of this article are to briefly review cone-beam technology and its advantages over medical CT and conventional radiography, to illustrate current and future clinical applications of cone-beam technology in endodontic practice, and to discuss medicolegal considerations pertaining to the acquisition and interpretation of 3-dimensional data.", "title": "" }, { "docid": "523983cad60a81e0e6694c8d90ab9c3d", "text": "Cognition and comportment are subserved by interconnected neural networks that allow high-level computational architectures including parallel distributed processing. Cognitive problems are not resolved by a sequential and hierarchical progression toward predetermined goals but instead by a simultaneous and interactive consideration of multiple possibilities and constraints until a satisfactory fit is achieved. The resultant texture of mental activity is characterized by almost infinite richness and flexibility. According to this model, complex behavior is mapped at the level of multifocal neural systems rather than specific anatomical sites, giving rise to brain-behavior relationships that are both localized and distributed. Each network contains anatomically addressed channels for transferring information content and chemically addressed pathways for modulating behavioral tone. This approach provides a blueprint for reexploring the neurological foundations of attention, language, memory, and frontal lobe function.", "title": "" }, { "docid": "ab45fd5e4aae81b5b6324651b035365b", "text": "The most popular way to use probabilistic models in vision is first to extract some descriptors of small image patches or object parts using well-engineered features, and then to use statistical learning tools to model the dependencies among these features and eventual labels. Learning probabilistic models directly on the raw pixel values has proved to be much more difficult and is typically only used for regularizing discriminative methods. In this work, we use one of the best, pixel-level, generative models of natural images–a gated MRF–as the lowest level of a deep belief network (DBN) that has several hidden layers. 
We show that the resulting DBN is very good at coping with occlusion when predicting expression categories from face images, and it can produce features that perform comparably to SIFT descriptors for discriminating different types of scene. The generative ability of the model also makes it easy to see what information is captured and what is lost at each level of representation.", "title": "" }, { "docid": "4621856b479672433f9f9dff86d4f4da", "text": "Reproducibility of computational studies is a hallmark of scientific methodology. It enables researchers to build with confidence on the methods and findings of others, reuse and extend computational pipelines, and thereby drive scientific progress. Since many experimental studies rely on computational analyses, biologists need guidance on how to set up and document reproducible data analyses or simulations. In this paper, we address several questions about reproducibility. For example, what are the technical and non-technical barriers to reproducible computational studies? What opportunities and challenges do computational notebooks offer to overcome some of these barriers? What tools are available and how can they be used effectively? We have developed a set of rules to serve as a guide to scientists with a specific focus on computational notebook systems, such as Jupyter Notebooks, which have become a tool of choice for many applications. Notebooks combine detailed workflows with narrative text and visualization of results. Combined with software repositories and open source licensing, notebooks are powerful tools for transparent, collaborative, reproducible, and reusable data analyses.", "title": "" }, { "docid": "37bcf5332a8dda4458b967442256e385", "text": "Unmanned aerial vehicles (UAVs) have shown promise in recent years for autonomous sensing. UAVs systems have been proposed for a wide range of applications such as mapping, surveillance, search, and tracking operations. The recent availability of low-cost UAVs suggests the use of teams of vehicles to perform sensing tasks. To leverage the capabilities of a team of vehicles, efficient methods of decentralized sensing and cooperative path planning are necessary. The goal of this work is to examine practical control strategies for a team of fixed-wing vehicles performing cooperative sensing. We seek to develop decentralized, autonomous control strategies that can account for a wide variety of sensing missions. Sensing goals are posed from an information theoretic standpoint to design strategies that explicitly minimize uncertainty. This work proposes a tightly coupled approach, in which sensor models and estimation objectives are used online for path planning.", "title": "" }, { "docid": "f369c4c547d98a721d64aabbd901f187", "text": "The failure and success of any software mainly depends on a technical document known as Software Requirement Specification (SRS) document, as it contains all requirements and features of the product. In the past, many developments had been done to improve the quality of the SRS, with respect to different attributes of the product, but the product success rate is not satisfactory and the room for improvement is still there. We have developed a different approach to resolve those issues. Our methodology consist of four processes i.e. Parsing Requirement (PR), Requirement Mapping using Matrix (RMM), Addition of Requirements in SRS template and Third Party Inspection. 
Requirement Engineering Process will provide the required inputs to PR after the implementation of its ontology rules completion of requirements will be achieved. RMM will be generated to minimize ambiguities and incorrectness with concerns of all stakeholders. Outputs of the previous processes will be added to IEEE standard format. A third party inspection will be conducted to check the requirements of the client and SRS. After inspecting SRS using inspection models and assigning Total Quality Score (TQS) third party will submit a detailed report to team of Requirement Engineers (RE). This practice will not only identify the problem but will solve the issue on its way.", "title": "" }, { "docid": "9326b7c1bd16e7db931131f77aaad687", "text": "We argue in this article that many common adverbial phrases generally taken to signal a discourse relation between syntactically connected units within discourse structure instead work anaphorically to contribute relational meaning, with only indirect dependence on discourse structure. This allows a simpler discourse structure to provide scaffolding for compositional semantics and reveals multiple ways in which the relational meaning conveyed by adverbial connectives can interact with that associated with discourse structure. We conclude by sketching out a lexicalized grammar for discourse that facilitates discourse interpretation as a product of compositional rules, anaphor resolution, and inference.", "title": "" }, { "docid": "31ffe37cba3976ef17ff2820f5a9431c", "text": "In this study, a variety of environmental monitoring, the Arduino sensing value retrieved and transmitted using the XBee wireless signal transmission to the monitoring client computer, information and data are displayed in the interface program Visual C # on, instant catch data sequentially stored in the database. In addition, when sensing value beyond the set threshold is the exception warning, Visual C # displays a warning and sends control signals transmitted to the wireless relay module control electrical appliances to make the switch, improve the environment. For example, when soil moisture deficits, Sprinklers open sprinkler function, to achieve our demands. The future will add another function, when the user is not in the scene, by the application using the Eclipse smartphone over the wireless network, the client computer to connect to the monitor interface program Visual C #, see real-time data, user need not be present also can watch, real data can be viewed through a smartphone, more convenient. This study has several experiments by a number of experimental analysis, combined with Arduino and XBee applied to a variety of situations: Plant care environment monitoring system, can be constructed of a variety of different wireless sensing applications allow users to do by the results of This research further WSN technological development and innovation.", "title": "" }, { "docid": "c47f251cc62b405be1eb1b105f443466", "text": "The conceptualization of gender variant populations within studies have consisted of imposed labels and a diversity of individual identities that preclude any attempt at examining the variations found among gender variant populations, while at the same time creating artificial distinctions between groups that may not actually exist. Data were collected from 90 transgender/transsexual people using confidential, self-administered questionnaires. 
Factors like age of transition, being out to others, and participant's race and class were associated with experiences of transphobic life events. Discrimination can have profound impact on transgender/transsexual people's lives, but different factors can influence one's experience of transphobia. Further studies are needed to examine how transphobia manifests, and how gender characteristics impact people's lives.", "title": "" }, { "docid": "265421a07efc8ab26a6766f90bf53245", "text": "Recently, there has been much excitement in the research community over using social networks to mitigate multiple identity, or Sybil, attacks. A number of schemes have been proposed, but they differ greatly in the algorithms they use and in the networks upon which they are evaluated. As a result, the research community lacks a clear understanding of how these schemes compare against each other, how well they would work on real-world social networks with different structural properties, or whether there exist other (potentially better) ways of Sybil defense.\n In this paper, we show that, despite their considerable differences, existing Sybil defense schemes work by detecting local communities (i.e., clusters of nodes more tightly knit than the rest of the graph) around a trusted node. Our finding has important implications for both existing and future designs of Sybil defense schemes. First, we show that there is an opportunity to leverage the substantial amount of prior work on general community detection algorithms in order to defend against Sybils. Second, our analysis reveals the fundamental limits of current social network-based Sybil defenses: We demonstrate that networks with well-defined community structure are inherently more vulnerable to Sybil attacks, and that, in such networks, Sybils can carefully target their links in order make their attacks more effective.", "title": "" }, { "docid": "2e42ab12b43022d22b9459cfaea6f436", "text": "Treemaps provide an interesting solution for representing hierarchical data. However, most studies have mainly focused on layout algorithms and paid limited attention to the interaction with treemaps. This makes it difficult to explore large data sets and to get access to details, especially to those related to the leaves of the trees. We propose the notion of zoomable treemaps (ZTMs), an hybridization between treemaps and zoomable user interfaces that facilitates the navigation in large hierarchical data sets. By providing a consistent set of interaction techniques, ZTMs make it possible for users to browse through very large data sets (e.g., 700,000 nodes dispatched amongst 13 levels). These techniques use the structure of the displayed data to guide the interaction and provide a way to improve interactive navigation in treemaps.", "title": "" }, { "docid": "7eed5e11e47807a3ff0af21461e88385", "text": "We propose Attentive Regularization (AR), a method to constrain the activation maps of kernels in Convolutional Neural Networks (CNNs) to specific regions of interest (ROIs). Each kernel learns a location of specialization along with its weights through standard backpropagation. A differentiable attention mechanism requiring no additional supervision is used to optimize the ROIs. Traditional CNNs of different types and structures can be modified with this idea into equivalent Targeted Kernel Networks (TKNs), while keeping the network size nearly identical. 
By restricting kernel ROIs, we reduce the number of sliding convolutional operations performed throughout the network in its forward pass, speeding up both training and inference. We evaluate our proposed architecture on both synthetic and natural tasks across multiple domains. TKNs obtain significant improvements over baselines, requiring less computation (around an order of magnitude) while achieving superior performance.", "title": "" }, { "docid": "984a6d4b9364eecbf3a762362aace9da", "text": "Ponder means \"to weigh in the mind with thoroughness and care\" [31]. Pander means \"to cater to the weaknesses and base desires of others\" [31]. We report on a course we have designed and delivered over a six year period. The course was originally designed as a technical writing course for majors, but has evolved into a non-major's version whose enrollment ranks it as one of the three most highly-enrolled and thus arguably most popular courses for undergraduates at our university. We have worked diligently to ensure that students ponder the topics and problems that comprise the material for the course --- and the material is deeply technical at many levels. We have also pandered to student needs in meeting curriculum requirements, offering the course at a time convenient for athletes and others, and using popular media when possible. We started with the goal of engendering interest and passion for computer science and how it affects the world. We report on our efforts to attain this goal while keeping material appropriately technical. We claim our students are engaged in a different type of computational thinking than that espoused in [32, 5, 15]. For the purposes of this paper and discussion we call our approach pander-to-ponder. We provide examples and illustrations of the material we cover, relate it to similar courses at other institutions, and show how we use problems to motivate learning. In the work we report on here the learning is specific to understanding how contributions from computer science are changing the world.", "title": "" }, { "docid": "197f4782bc11e18b435f4bc568b9de79", "text": "Protected-module architectures (PMAs) have been proposed to provide strong isolation guarantees, even on top of a compromised system. Unfortunately, Intel SGX – the only publicly available highend PMA – has been shown to only provide limited isolation. An attacker controlling the untrusted page tables, can learn enclave secrets by observing its page access patterns. Fortifying existing protected-module architectures in a realworld setting against side-channel attacks is an extremely difficult task as system software (hypervisor, operating system, . . . ) needs to remain in full control over the underlying hardware. Most stateof-the-art solutions propose a reactive defense that monitors for signs of an attack. Such approaches unfortunately cannot detect the most novel attacks, suffer from false-positives, and place an extraordinary heavy burden on enclave-developers when an attack is detected. We present Heisenberg, a proactive defense that provides complete protection against page table based side channels. We guarantee that any attack will either be prevented or detected automatically before any sensitive information leaks. Consequently, Heisenberg can always securely resume enclave execution – even when the attacker is still present in the system. We present two implementations. Heisenberg-HW relies on very limited hardware features to defend against page-table-based attacks. 
We use the x86/SGX platform as an example, but the same approach can be applied when protected-module architectures are ported to different platforms as well. Heisenberg-SW avoids these hardware modifications and can readily be applied. Unfortunately, it’s reliance on Intel Transactional Synchronization Extensions (TSX) may lead to significant performance overhead under real-life conditions.", "title": "" }, { "docid": "bf05dca7c0ac521045794c90c91eba9d", "text": "The optimization and analysis of new waveguide polarizers have been carried out on the basis of rigorous full-wave model. These polarizers transform the dominant mode of input rectangular waveguide into an elliptically polarized wave of output square waveguide. The phase-shifting module is realized on the basis of one or two sections of a square waveguide having two diagonally placed square ridges. It has been found out that polarizers with single-section phase shifter can provide the bandwidth from 11% to 15% at the axial ratio level of r < 2 dB and the return loss level of LR > 20 dB, whereas the two-section ones have the bandwidths more 23% at r < 1 dB and LR > 23 dB", "title": "" }, { "docid": "5431514a65d66d40e55b87a5d326d3b5", "text": "The authors describe a theoretical framework for understanding when people interacting with a member of a stereotyped group activate that group's stereotype and apply it to that person. It is proposed that both stereotype activation and stereotype application during interaction depend on the strength of comprehension and self-enhancement goals that can be satisfied by stereotyping one's interaction partner and on the strength of one's motivation to avoid prejudice. The authors explain how these goals can promote and inhibit stereotype activation and application, and describe diverse chronic and situational factors that can influence the intensity of these goals during interaction and, thereby, influence stereotype activation and application. This approach permits integration of a broad range of findings on stereotype activation and application.", "title": "" } ]
subset: scidocsrr
query_id: ff1ac8eb6e6fe1a5c5b4060242cf1ccb
query: The Effect of the Agency and Anthropomorphism on Users' Sense of Telepresence, Copresence, and Social Presence in Virtual Environments
[ { "docid": "ffef173f4e0c757c6d780d0af5d9c00b", "text": "Minding the Body, the Primordial Communication Medium Embodiment: The Teleology of Interface Design Embodiment: Thinking through our Technologically Extended Bodies User Embodiment and Three Forms in Which the Body \"Feels\" Present in the Virtual Environment Presence: Emergence of a Design Goal and Theoretical Problem Being There: The Sense of Physical Presence in Cyberspace Being with another Body: Designing the Illusion of Social Presence Is This Body Really \"Me\"? Self Presence, Body Schema, Self-consciousness, and Identity The Cyborg's Dilemma Footnotes References About the Author The intrinsic relationship that arises between tools and organs, and one that is to be revealed and emphasized – although it is more one of unconscious discovery than of conscious invention – is that in the tool the human continually produces itself. Since the organ whose utility and power is to be increased is the controlling factor, the appropriate form of a tool can be derived only from that organ. Ernst Kapp, 1877, quoted in [Mitcham, 1994, p. 23] Abstract StudyW Academ Excellen Award Collab-U CMC Play E-Commerce Symposium Net Law InfoSpaces Usenet NetStudy VEs Page 1 of 29 THE CYBORG'S DILEMNA: PROGRESSIVE EMBODIMENT IN VIRTUAL ENVIR...StudyW Academ Excellen Award Collab-U CMC Play E-Commerce Symposium Net Law InfoSpaces Usenet NetStudy VEs Page 1 of 29 THE CYBORG'S DILEMNA: PROGRESSIVE EMBODIMENT IN VIRTUAL ENVIR... 9/11/2005 http://jcmc.indiana.edu/vol3/issue2/biocca2.html How does the changing representation of the body in virtual environments affect the mind? This article considers how virtual reality interfaces are evolving to embody the user progressively. The effect of embodiment on the sensation of physical presence, social presence, and self presence in virtual environments is discussed. The effect of avatar representation on body image and body schema distortion is also considered. The paper ends with the introduction of the cyborg's dilemma, a paradoxical situation in which the development of increasingly \"natural\" and embodied interfaces leads to \"unnatural\" adaptations or changes in the user. In the progressively tighter coupling of user to interface, the user evolves as a cyborg. Minding the Body, the Primordial Communication Medium In the twentieth century we have made a successful transition from the sooty iron surfaces of the industrial revolution to the liquid smooth surfaces of computer graphics. On our computer monitors we may be just beginning to see a reflective surface that looks increasingly like a mirror. In the virtual world that exists on the other side of the mirror's surface we can just barely make out the form of a body that looks like us, like another self. Like Narcissus looking into the pond, we are captured by the experience of this reflection of our bodies. But that reflected body looks increasingly like a cyborg. [2] This article explores an interesting pattern in media interface development that I will call progressive embodiment. Each progressive step in the development of sensor and display technology moves telecommunication technology towards a tighter coupling of the body to the interface. The body is becoming present in both physical space and cyberspace. The interface is adapting to the body; the body is adapting to the interface [(Biocca & Rolland, in press)]. Why is this occurring? 
One argument is that attempts to optimize the communication bandwidth of distributed, multi-user virtual environments such as social VRML worlds and collaborative virtual environments drives this steady augmentation of the body and the mind [(see Biocca, 1995)]. It has become a key to future stages of interface development. On the other hand, progressive embodiment may be part of a larger pattern, the cultural evolution of humans and communication artifacts towards a mutual integration and greater \"somatic flexibility\" [(Bateson, 1972)]. The pattern of progressive embodiment raises some fundamental and interesting questions. In this article we pause to consider these developments. New media like distributed immersive virtual environments sometimes force us to take a closer look at what is fundamental about communication. Inevitably, theorists interested in the fundamentals of communication return in some way or another to a discussion of the body and the mind. At the birth of new media, theories dwell on human factors in communication [(Biocca, 1995)] and are often more psychological than sociological. For example when radio and film appeared, [Arnheim (1957)] and [Munsterberg (1916)] used the perceptual theories of Gestalt psychology to try to make sense of how each medium affected the senses. In the 1960s McLuhan [(1966; McLuhan & McLuhan, 1988)] refocused our attention on media technology when he assembled a controversial psychological theory to examine electronic media and make pronouncements about the consequences of imbalances in the \"sensorium.\" Page 2 of 29 THE CYBORG'S DILEMNA: PROGRESSIVE EMBODIMENT IN VIRTUAL ENVIR... 9/11/2005 http://jcmc.indiana.edu/vol3/issue2/biocca2.html Before paper, wires, and silicon, the primordial communication medium is the body. At the center of all communication rests the body, the fleshy gateway to the mind. [Becker & Schoenbach (1989)] argue that \"a veritable 'new mass medium' for some experts, has to address new senses of new combinations of senses. It has to use new channels of information\" (p. 5). In other words, each new medium must somehow engage the body in a new way. But this leads us to ask, are all the media collectively addressing the body in some systematic way? Are media progressively embodying the user? 1.1 The senses as channels to the mind \"Each of us lives within ... the prison of his own brain. Projecting from it are millions of fragile sensory nerve fibers, in groups uniquely adapted to sample the energetic states of the world around us: heat, light, force, and chemical composition. That is all we ever know of it directly; all else is logical inference (1975, p. 131) [(see Sekuler & Blake, 1994 p. 2)]. The senses are the portals to the mind. Sekuler and Blake extend their observation to claim that the senses are \"communication channels to reality.\" Consider for a moment the body as an information acquisition system. As aliens from some distant planet we observe humans and see the body as an array of sensors propelled through space to scan, rub, and grab the environment. In some ways, that is how virtual reality designers see users [(Durlach & Mavor, 1994)]. Many immersive virtual reality designers tend to be implicitly or explicitly Gibsonian: they accept the perspective of the noted perceptual psychologist [J.J. Gibson (1966, 1979)]. Immersive virtual environments are places where vision and the other senses are meant to be active. 
Users make use of the affordances in the environments from which they perceive the structure of the virtual world in ways similar to the manner they construct the physical world. Through motion and collisions with objects the senses pick up invariances in energy fields flowing over the body's receptors. When we walk or reach for an object in the virtual or physical world, we guide the senses in this exploration of the space in same way that a blind man stretches out a white cane to explore the space while in motion. What we know about the world is embodied, it is constructed from patterns of energy detected by the body. The body is the surface on which all energy fields impinge, on which communication and telecommunication takes form. 1.2 The body as a display device for a mind The body is integrated with the mind as a representational system, or as the neuroscientist, Antonio Damasio, puts it, \"a most curious physiological arrangement ... has turned the brain into the body's captive audience\" [(Damasio, 1994, p. xv)]. In some ways, the body is a primordial display device, a kind of internal mental simulator. The body is a representational medium for the mind. Some would claim that thought is embodied or modeled by the body. Johnson and Lakoff [(Johnson, 1987; Lakoff & Johnson, 1980; Lakoff, 1987)] argue against a view of reasoning as manipulation of prepositional representations (the \"objectives position\"), a tabulation and manipulation of abstract symbols. They might suggest a kind of sensory-based \"image schemata\" that are critical to instantiating mental transformations associated with metaphor and analogy. In a way virtual environments are objectified metaphors and analogies delivered as sensory patterns instantiating \"image schemata.\" In his book, Decartes' Error, the neuroscientist Damasio explains how the body is used as a means of embodying thought: Page 3 of 29 THE CYBORG'S DILEMNA: PROGRESSIVE EMBODIMENT IN VIRTUAL ENVIR... 9/11/2005 http://jcmc.indiana.edu/vol3/issue2/biocca2.html \"...the body as represented in the brain, may constitute the indispensable frame of reference for the neural processes that we experience as the mind; that our very organism rather than some absolute experiential reality is used as the ground of reference for the constructions we make of the world around us and for the construction of the ever-present sense of subjectivity that is part and parcel of our experiences; that our most refined thoughts and best actions, our greatest joys and deepest sorrows, use the body as a yardstick\" [(Damasio, 1994, p. xvi)]. Damasio's title, Descartes' Error, warns against the misleading tendency to think of the body and mind, reason and emotion, as separate systems. Figure 1. Range of possible input (sensors) and output (effectors) devices for a virtual reality system. Illustrates the pattern of progressive embodiment in virtual reality systems. Source: Biocca & Delaney, 1995 1.3 The body as a communication device The body is also an expressive communication device [(Benthall & Polhemus, 1975)], a social semiotic vehicle for representing mental states (e.g., emotions, observations, plans, etc.)", "title": "" } ]
[ { "docid": "35463670bc80c009f811f97165db33e1", "text": "Framing is the process by which a communication source constructs and defines a social or political issue for its audience. While many observers of political communication and the mass media have discussed framing, few have explicitly described how framing affects public opinion. In this paper we offer a theory of framing effects, with a specific focus on the psychological mechanisms by which framing influences political attitudes. We discuss important conceptual differences between framing and traditional theories of persuasion that focus on belief change. We outline a set of hypotheses about the interaction between framing and audience sophistication, and test these in an experiment. The results support our argument that framing is not merely persuasion, as it is traditionally conceived. We close by reflecting on the various routes by which political communications can influence attitudes.", "title": "" }, { "docid": "4736ae77defc37f96b235b3c0c2e56ff", "text": "This review highlights progress over the past decade in research on the effects of mass trauma experiences on children and youth, focusing on natural disasters, war, and terrorism. Conceptual advances are reviewed in terms of prevailing risk and resilience frameworks that guide basic and translational research. Recent evidence on common components of these models is evaluated, including dose effects, mediators and moderators, and the individual or contextual differences that predict risk or resilience. New research horizons with profound implications for health and well-being are discussed, particularly in relation to plausible models for biological embedding of extreme stress. Strong consistencies are noted in this literature, suggesting guidelines for disaster preparedness and response. At the same time, there is a notable shortage of evidence on effective interventions for child and youth victims. Practical and theory-informative research on strategies to protect children and youth victims and promote their resilience is a global priority.", "title": "" }, { "docid": "f68a287156c2930f302c2ab7f5a2b2a5", "text": "Time series analysis and forecasting future values has been a major research focus since years ago. Time series analysis and forecasting in time series data finds it significance in many applications such as business, stock market and exchange, weather, electricity demand, cost and usage of products such as fuels, electricity, etc. and in any kind of place that has specific seasonal or trendy changes with time. The forecasting of time series data provides the organization with useful information that is necessary for making important decisions. In this paper, a detailed survey of the various techniques applied for forecasting different types of time series dataset is provided. This survey covers the overall forecasting models, the algorithms used within the model and other optimization techniques used for better performance and accuracy. The various performance evaluation parameters used for evaluating the forecasting models are also discussed in this paper. This study gives the reader an idea about the various researches that take place within forecasting using the time series data.", "title": "" }, { "docid": "9cd40ecccdadce54f46885466590303d", "text": "This paper considers the impact of uncertain wind forecasts on the value of stored energy (such as pumped hydro) in a future U.K. system, where wind supplies over 20% of the energy. 
Providing more of the increased requirement for reserves from standing reserve sources could increase system operation efficiency, enhance wind power absorption, achieve fuel cost savings, and reduce CO2 emissions. Generally, storage-based standing reserve's value is driven by the amount of installed wind and by generation system flexibility. Benefits are more significant in systems with low generation flexibility and with large installed wind capacity. Storage is uniquely able to stock up generated excesses during high-wind/low-demand periods, and subsequently discharge this energy as needed. When storage is combined with standing reserve provided from conventional generation (e.g., open-cycle gas turbines), it is valuable in servicing the highly frequent smaller imbalances", "title": "" }, { "docid": "53be2c41da023d9e2380e362bfbe7cce", "text": "A rich and  exible class of random probability measures, which we call stick-breaking priors, can be constructed using a sequence of independent beta random variables. Examples of random measures that have this characterization include the Dirichlet process, its two-parameter extension, the two-parameter Poisson–Dirichlet process, Ž nite dimensional Dirichlet priors, and beta two-parameter processes. The rich nature of stick-breaking priors offers Bayesians a useful class of priors for nonparametri c problems, while the similar construction used in each prior can be exploited to develop a general computational procedure for Ž tting them. In this article we present two general types of Gibbs samplers that can be used to Ž t posteriors of Bayesian hierarchical models based on stick-breaking priors. The Ž rst type of Gibbs sampler, referred to as a Pólya urn Gibbs sampler, is a generalized version of a widely used Gibbs sampling method currently employed for Dirichlet process computing. This method applies to stick-breaking priors with a known Pólya urn characterization, that is, priors with an explicit and simple prediction rule. Our second method, the blocked Gibbs sampler, is based on an entirely different approach that works by directly sampling values from the posterior of the random measure. The blocked Gibbs sampler can be viewed as a more general approach because it works without requiring an explicit prediction rule. We Ž nd that the blocked Gibbs avoids some of the limitations seen with the Pólya urn approach and should be simpler for nonexperts to use.", "title": "" }, { "docid": "a09248f7c017c532a3a0a580be14ba20", "text": "In the past ten years, the software aging phenomenon has been systematically researched, and recognized by both academic, and industry communities as an important obstacle to achieving dependable software systems. One of its main effects is the depletion of operating system resources, causing system performance degradation or crash/hang failures in running applications. When conducting experimental studies to evaluate the operational reliability of systems suffering from software aging, long periods of runtime are required to observe system failures. Focusing on this problem, we present a systematic approach to accelerate the software aging manifestation to reduce the experimentation time, and to estimate the lifetime distribution of the investigated system. First, we introduce the concept of ¿aging factor¿ that offers a fine control of the aging effects at the experimental level. The aging factors are estimated via sensitivity analyses based on the statistical design of experiments. 
Aging factors are then used together with the method of accelerated degradation test to estimate the lifetime distribution of the system under test at various stress levels. This approach requires us to estimate a relationship model between stress levels and aging degradation. Such models are called stress-accelerated aging relationships. Finally, the estimated relationship models enable us to estimate the lifetime distribution under use condition. The proposed approach is used in estimating the lifetime distribution of a web server with software aging symptoms. The main result is the reduction of the experimental time by a factor close to 685 in comparison with experiments executed without the use of our technique.", "title": "" }, { "docid": "19ab044ed5154b4051cae54387767c9b", "text": "An approach is presented for minimizing power consumption for digital systems implemented in CMOS which involves optimization at all levels of the design. This optimization includes the technology used to implement the digital circuits, the circuit style and topology, the architecture for implementing the circuits and at the highest level the algorithms that are being implemented. The most important technology consideration is the threshold voltage and its control which allows the reduction of supply voltage without signijcant impact on logic speed. Even further supply reductions can be made by the use of an architecture-based voltage scaling strategy, which uses parallelism and pipelining, to tradeoff silicon area and power reduction. Since energy is only consumed when capacitance is being switched, power can be reduced by minimizing this capacitance through operation reduction, choice of number representation, exploitation of signal correlations, resynchronization to minimize glitching, logic design, circuit design, and physical design. The low-power techniques that are presented have been applied to the design of a chipset for a portable multimedia terminal that supports pen input, speech I/O and fullmotion video. The entire chipset that perjorms protocol conversion, synchronization, error correction, packetization, buffering, video decompression and D/A conversion operates from a 1.1 V supply and consumes less than 5 mW.", "title": "" }, { "docid": "2af3d0d849d50e977864f4085062fdac", "text": "Personal space (PS), the flexible protective zone maintained around oneself, is a key element of everyday social interactions. It, e.g., affects people's interpersonal distance and is thus largely involved when navigating through social environments. However, the PS is regulated dynamically, its size depends on numerous social and personal characteristics and its violation evokes different levels of discomfort and physiological arousal. Thus, gaining more insight into this phenomenon is important. We contribute to the PS investigations by presenting the results of a controlled experiment in a CAVE, focusing on German males in the age of 18 to 30 years. The PS preferences of 27 participants have been sampled while they were approached by either a single embodied, computer-controlled virtual agent (VA) or by a group of three VAs. In order to investigate the influence of a VA's emotions, we altered their facial expression between angry and happy. Our results indicate that the emotion as well as the number of VAs approaching influence the PS: larger distances are chosen to angry VAs compared to happy ones; single VAs are allowed closer compared to the group. 
Thus, our study is a foundation for social and behavioral studies investigating PS preferences.", "title": "" }, { "docid": "4d5e72046bfd44b9dc06dfd02812f2d6", "text": "Recommender systems in the last decade opened new interactive channels between buyers and sellers leading to new concepts involved in the marketing strategies and remarkable positive gains in online sales. Businesses intensively aim to maintain customer loyalty, satisfaction and retention; such strategic longterm values need to be addressed by recommender systems in a more tangible and deeper manner. The reason behind the considerable growth of recommender systems is for tracking and analyzing the buyer behavior on the one to one basis to present items on the web that meet his preference, which is the core concept of personalization. Personalization is always related to the relationship between item and user leaving out the contextual information about this relationship. User's buying decision is not only affected by the presented item, but also influenced by its price and the context in which the item is presented, such as time or place. Recently, new system has been designed based on the concept of utilizing price personalization in the recommendation process. This system is newly coined as personalized pricing recommender system (PPRS). We propose personalized pricing recommender system with a novel approach of calculating consumer online real value to determine dynamically his personalized discount, which can be generically applied on the normal price of any recommend item through its predefined discount rules.", "title": "" }, { "docid": "784b59ad8529f62004d28ce2473368cb", "text": "In layer-based additive manufacturing (AM), supporting structures need to be inserted to support the overhanging regions. The adding of supporting structures slows down the speed of fabrication and introduces artifacts onto the finished surface. We present an orientation-driven shape optimizer to slim down the supporting structures used in single material-based AM. The optimizer can be employed as a tool to help designers to optimize the original model to achieve a more self-supported shape, which can be used as a reference for their further design. The model to be optimized is first enclosed in a volumetric mesh, which is employed as the domain of computation. The optimizer is driven by the operations of reorientation taken on tetrahedra with ‘facing-down’ surface facets. We formulate the demand on minimizing shape variation as global rigidity energy. The local optimization problem for determining a minimal rotation is analyzed on the Gauss sphere, which leads to a closed-form solution. Moreover, we also extend our approach to create the functions of controlling the deformation and searching for optimal printing directions.", "title": "" }, { "docid": "30ef95dffecc369aabdd0ea00b0ce299", "text": "The cloud seems to be an excellent companion of mobile systems, to alleviate battery consumption on smartphones and to backup user's data on-the-fly. Indeed, many recent works focus on frameworks that enable mobile computation offloading to software clones of smartphones on the cloud and on designing cloud-based backup systems for the data stored in our devices. Both mobile computation offloading and data backup involve communication between the real devices and the cloud. This communication does certainly not come for free. 
It costs in terms of bandwidth (the traffic overhead to communicate with the cloud) and in terms of energy (computation and use of network interfaces on the device). In this work we study the fmobile software/data backupseasibility of both mobile computation offloading and mobile software/data backups in real-life scenarios. In our study we assume an architecture where each real device is associated to a software clone on the cloud. We consider two types of clones: The off-clone, whose purpose is to support computation offloading, and the back-clone, which comes to use when a restore of user's data and apps is needed. We give a precise evaluation of the feasibility and costs of both off-clones and back-clones in terms of bandwidth and energy consumption on the real device. We achieve this through measurements done on a real testbed of 11 Android smartphones and an equal number of software clones running on the Amazon EC2 public cloud. The smartphones have been used as the primary mobile by the participants for the whole experiment duration.", "title": "" }, { "docid": "11a1c92620d58100194b735bfc18c695", "text": "Stabilization by static output feedback (SOF) is a long-standing open problem in control: given an n by n matrix A and rectangular matrices B and C, find a p by q matrix K such that A + BKC is stable. Low-order controller design is a practically important problem that can be cast in the same framework, with (p+k)(q+k) design parameters instead of pq, where k is the order of the controller, and k << n. Robust stabilization further demands stability in the presence of perturbation and satisfactory transient as well as asymptotic system response. We formulate two related nonsmooth, nonconvex optimization problems over K, respectively with the following objectives: minimization of the -pseudospectral abscissa of A+BKC, for a fixed ≥ 0, and maximization of the complex stability radius of A + BKC. Finding global optimizers of these functions is hard, so we use a recently developed gradient sampling method that approximates local optimizers. For modest-sized systems, local optimization can be carried out from a large number of starting points with no difficulty. The best local optimizers may then be investigated as candidate solutions to the static output feedback or low-order controller design problem. We show results for two problems published in the control literature. The first is a turbo-generator example that allows us to show how different choices of the optimization objective lead to stabilization with qualitatively different properties, conveniently visualized by pseudospectral plots. The second is a well known model of a Boeing 767 aircraft at a flutter condition. For this problem, we are not aware of any SOF stabilizing K published in the literature. Our method was not only able to find an SOF stabilizing K, but also to locally optimize the complex stability radius of A + BKC. We also found locally optimizing order–1 and order–2 controllers for this problem. All optimizers are visualized using pseudospectral plots.", "title": "" }, { "docid": "3bb48e5bf7cc87d635ab4958553ef153", "text": "This paper presents an in-depth study of young Swedish consumers and their impulsive online buying behaviour for clothing. The aim of the study is to develop the understanding of what factors affect impulse buying of clothing online and what feelings emerge when buying online. 
The study carried out was exploratory in nature, aiming to develop an understanding of impulse buying behaviour online before, under and after the actual purchase. The empirical data was collected through personal interviews. In the study, a pattern of the consumers recurrent feelings are identified through the impulse buying process; escapism, pleasure, reward, scarcity, security and anticipation. The escapism is particularly occurring since the study revealed that the consumers often carried out impulse purchases when they initially were bored, as opposed to previous studies. 1 University of Borås, Swedish Institute for Innovative Retailing, School of Business and IT, Allégatan 1, S-501 90 Borås, Sweden. Phone: +46732305934 Mail: malin.sundstrom@hb.se", "title": "" }, { "docid": "7d8b256565f44be75e5d23130573580c", "text": "Even the support vector machine (SVM) has been proposed to provide a good generalization performance, the classification result of the practically implemented SVM is often far from the theoretically expected level because their implementations are based on the approximated algorithms due to the high complexity of time and space. To improve the limited classification performance of the real SVM, we propose to use the SVM ensembles with bagging (bootstrap aggregating). Each individual SVM is trained independently using the randomly chosen training samples via a bootstrap technique. Then, they are aggregated into to make a collective decision in several ways such as the majority voting, the LSE(least squares estimation)-based weighting, and the double-layer hierarchical combining. Various simulation results for the IRIS data classification and the hand-written digit recognitionshow that the proposed SVM ensembles with bagging outperforms a single SVM in terms of classification accuracy greatly.", "title": "" }, { "docid": "06ac34a4909ab44872ee8dc4656b22e7", "text": "Moringa oleifera is an interesting plant for its use in bioactive compounds. In this manuscript, we review studies concerning the cultivation and production of moringa along with genetic diversity among different accessions and populations. Different methods of propagation, establishment and cultivation are discussed. Moringa oleifera shows diversity in many characters and extensive morphological variability, which may provide a resource for its improvement. Great genetic variability is present in the natural and cultivated accessions, but no collection of cultivated and wild accessions currently exists. A germplasm bank encompassing the genetic variability present in Moringa is needed to perform breeding programmes and develop elite varieties adapted to local conditions. Alimentary and medicinal uses of moringa are reviewed, alongside the production of biodiesel. Finally, being that the leaves are the most used part of the plant, their contents in terms of bioactive compounds and their pharmacological properties are discussed. Many studies conducted on cell lines and animals seem concordant in their support for these properties. However, there are still too few studies on humans to recommend Moringa leaves as medication in the prevention or treatment of diseases. Therefore, further studies on humans are recommended.", "title": "" }, { "docid": "021bc2449ca5e4d4e2d836f9872b5e46", "text": "We introduce an interactive user-driven method to reconstruct high-relief 3D geometry from a single photo. 
Particularly, we consider two novel but challenging reconstruction issues: i) common non-rigid objects whose shapes are organic rather than polyhedral/symmetric, and ii) double-sided structures, where front and back sides of some curvy object parts are revealed simultaneously on image. To address these issues, we develop a three-stage computational pipeline. First, we construct a 2.5D model from the input image by user-driven segmentation, automatic layering, and region completion, handling three common types of occlusion. Second, users can interactively mark-up slope and curvature cues on the image to guide our constrained optimization model to inflate and lift up the image layers. We provide real-time preview of the inflated geometry to allow interactive editing. Third, we stitch and optimize the inflated layers to produce a high-relief 3D model. Compared to previous work, we can generate high-relief geometry with large viewing angles, handle complex organic objects with multiple occluded regions and varying shape profiles, and reconstruct objects with double-sided structures. Lastly, we demonstrate the applicability of our method on a wide variety of input images with human, animals, flowers, etc.", "title": "" }, { "docid": "0f03c9bc5ff7e6f2a0fccff1f847aa51", "text": "OBJECTIVE\nWe sought to determine the long-term risk of type 2 diabetes following a pregnancy complicated by gestational diabetes mellitus (GDM) and assess what maternal antepartum, postpartum, and neonatal factors are predictive of later development of type 2 diabetes.\n\n\nRESEARCH DESIGN AND METHODS\nThis was a retrospective cohort study using survival analysis on 5,470 GDM patients and 783 control subjects who presented for postnatal follow-up at the Mercy Hospital for Women between 1971 and 2003.\n\n\nRESULTS\nRisk of developing diabetes increased with time of follow-up for both groups and was 9.6 times greater for patients with GDM. The cumulative risk of developing type 2 diabetes for the GDM patients was 25.8% at 15 years postdiagnosis. Predictive factors for the development of type 2 diabetes were use of insulin (hazard ratio 3.5), Asian origin compared with Caucasian (2.1), and 1-h blood glucose (1.3 for every 1 mmol increase above 10.1 mmol). BMI was associated with an increased risk of developing type 2 diabetes but did not meet the assumption of proportional hazards required for valid inference when using Cox proportional hazards.\n\n\nCONCLUSIONS\nWhile specific predictive factors for the later development of type 2 diabetes can be identified in the index pregnancy, women with a history of GDM, as a group, are worthy of long-term follow-up to ameliorate their excess cardiovascular risk.", "title": "" }, { "docid": "b83a061d5c4bbd7c38584f2fbf1060e0", "text": "Novelty detection in text streams is a challenging task that emerges in quite a few different scenarii, ranging from email threads to RSS news feeds on a cell phone. An efficient novelty detection algorithm can save the user a great deal of time when accessing interesting information. Most of the recent research for the detection of novel documents in text streams uses either geometric distances or distributional similarities with the former typically performing better but being slower as we need to compare an incoming document with all the previously seen ones. In this paper, we propose a new novelty detection algorithm based on the Inverse Document Frequency (IDF) scoring function. 
Computing novelty based on IDF enables us to avoid similarity comparisons with previous documents in the text stream, thus leading to faster execution times. At the same time, our proposed approach outperforms several commonly used baselines when applied on a real-world news articles dataset.", "title": "" }, { "docid": "98110985cd175f088204db452a152853", "text": "We propose an automatic method to infer high dynamic range illumination from a single, limited field-of-view, low dynamic range photograph of an indoor scene. In contrast to previous work that relies on specialized image capture, user input, and/or simple scene models, we train an end-to-end deep neural network that directly regresses a limited field-of-view photo to HDR illumination, without strong assumptions on scene geometry, material properties, or lighting. We show that this can be accomplished in a three step process: 1) we train a robust lighting classifier to automatically annotate the location of light sources in a large dataset of LDR environment maps, 2) we use these annotations to train a deep neural network that predicts the location of lights in a scene from a single limited field-of-view photo, and 3) we fine-tune this network using a small dataset of HDR environment maps to predict light intensities. This allows us to automatically recover high-quality HDR illumination estimates that significantly outperform previous state-of-the-art methods. Consequently, using our illumination estimates for applications like 3D object insertion, produces photo-realistic results that we validate via a perceptual user study.", "title": "" }, { "docid": "67e008db2a218b4e307003c919a32a8a", "text": "Relay deployment in Orthogonal Frequency Division Multipl e Access (OFDMA) based cellular networks helps in coverage extension and/or capacity improvement. To quantify capacity improvement, blocking probability of voice traffic is typically calculated using Erlang B formula. This calculation is based on the assumption that all users require same amount of resourc es to satisfy their rate requirement. However, in an OFDMA system, each user requires different number of su bcarriers to meet its rate requirement. This resource requirement depends on the Signal to Interference Ratio (SIR) experienced by a user. Therefore, the Erlang B formula can not be employed to compute blocking p robability in an OFDMA network.In this paper, we determine an analytical expression to comput e the blocking probability of relay based cellular OFDMA network. We determine an expression of the probability distribution of the user’s resource requirement based on its experienced SIR. Then, we classify the users into various classes depending upon their subcarrier requirement. We consider the system to be a multi-dimensional system with different classes and evaluate the blocking probabili ty of system using the multi-dimensional Erlang loss formulas. This model is useful in the performance evaluation, design, planning of resources and call admission control of relay based cellular OFDMA networks like LTE.", "title": "" } ]
scidocsrr
00857a2ae286ddce20b369aa0d29d72e
Incremental classification of invoice documents
[ { "docid": "b6de0b3fb29edff86afc4fadac687e9d", "text": "An incremental network model is introduced which is able to learn the important topological relations in a given set of input vectors by means of a simple Hebb-like learning rule. In contrast to previous approaches like the \"neural gas\" method of Martinetz and Schulten (1991, 1994), this model has no parameters which change over time and is able to continue learning, adding units and connections, until a performance criterion has been met. Applications of the model include vector quantization, clustering, and interpolation.", "title": "" }, { "docid": "fc5782aa3152ca914c6ca5cf1aef84eb", "text": "We introduce Learn++, an algorithm for incremental training of neural network (NN) pattern classifiers. The proposed algorithm enables supervised NN paradigms, such as the multilayer perceptron (MLP), to accommodate new data, including examples that correspond to previously unseen classes. Furthermore, the algorithm does not require access to previously used data during subsequent incremental learning sessions, yet at the same time, it does not forget previously acquired knowledge. Learn++ utilizes ensemble of classifiers by generating multiple hypotheses using training data sampled according to carefully tailored distributions. The outputs of the resulting classifiers are combined using a weighted majority voting procedure. We present simulation results on several benchmark datasets as well as a real-world classification task. Initial results indicate that the proposed algorithm works rather well in practice. A theoretical upper bound on the error of the classifiers constructed by Learn++ is also provided.", "title": "" } ]
[ { "docid": "7a6c15d1a8c6a1f00efa5b285aa8dca1", "text": "By relating musical sound to musical notation, these systems generate tireless, expressive musical accompaniment to follow and sometimes learn from a live human performance.", "title": "" }, { "docid": "f88bc4fda339a3ece32371e5a1a83848", "text": "This paper proposes an intent-aware multi-agent planning framework as well as a learning algorithm. Under this framework, an agent plans in the goal space to maximize the expected utility. The planning process takes the belief of other agents' intents into consideration. Instead of formulating the learning problem as a partially observable Markov decision process (POMDP), we propose a simple but effective linear function approximation of the utility function. It is based on the observation that for humans, other people's intents will pose an influence on our utility for a goal. The proposed framework has several major advantages: i) it is computationally feasible and guaranteed to converge. ii) It can easily integrate existing intent prediction and low-level planning algorithms. iii) It does not suffer from sparse feedbacks in the action space. We experiment our algorithm in a real-world problem that is non-episodic, and the number of agents and goals can vary over time. Our algorithm is trained in a scene in which aerial robots and humans interact, and tested in a novel scene with a different environment. Experimental results show that our algorithm achieves the best performance and human-like behaviors emerge during the dynamic process.", "title": "" }, { "docid": "841b9405bbe1be4b24dda383834e282a", "text": "Psychotropic drugs (antidepressants, antimanic drugs, antipsychotics, analgesic opioids, and others) are among the most frequently used medicines. Between these drugs and magnesium there are pharmacokinetic and pharmacodynamic interactions. Erythrocyte magnesium is decreased in patients with severe major depression (MD) vs normal subjects (44 +/- 2.7 mg/L in MD group vs 59.1 +/- 3.2 mg/L in control group, p < 0.01). Therapy with sertraline, 150 mg/day p.o. -21 days or with amitryptiline 3 x 25 mg/day p.o. 28 days increases significantly erythrocyte concentration of magnesium (56.9 +/- 5.22 mg/L after sertraline vs 44 +/- 2.7 mg/L before sertraline, p < 0.01). In patients with acute paranoid schizophrenia, erythrocyte magnesium concentration is decreased vs healthy subjects. Haloperidol, 8 mg/day, p.o. for 21 days or risperidone, 6 mg/day p.o. for 21 days have increased significantly erythrocyte magnesium concentration (46.21 +/- 3.1 mg/L before haloperidol and 54.6 +/- 2.7 mg/L after haloperidol, p < 0.05). Antimanic drugs (mood stabilizers) as carbamazepine, 600 mg/day, p.o., 4 weeks and sodium valproate, 900 mg/day p.o., 4 weeks, increased significantly magnesium in patients with bipolar disorder type I. Increased magnesium status positively correlated with enhancement of the clinical state. The existent data sustain the idea that an increase of erythrocyte magnesium is involved in the mechanism of action of some psychotropic drugs. Magnesium supply decreased the intensity of morphine-induced physical drug dependence. In heroin addicts, the plasma magnesium concentration is decreased.", "title": "" }, { "docid": "e542a8c62b09a6afcdf318f32c18b8d7", "text": "Autoimmune diseases are a range of diseases in which the immune response to self-antigens results in damage or dysfunction of tissues. Autoimmune diseases can be systemic or can affect specific organs or body systems. 
For most autoimmune diseases there is a clear sex difference in prevalence, whereby females are generally more frequently affected than males. In this review, we consider gender differences in systemic and organ-specific autoimmune diseases, and we summarize human data that outlines the prevalence of common autoimmune diseases specific to adult males and females in countries commonly surveyed. We discuss possible mechanisms for sex specific differences including gender differences in immune response and organ vulnerability, reproductive capacity including pregnancy, sex hormones, genetic predisposition, parental inheritance, and epigenetics. Evidence demonstrates that gender has a significant influence on the development of autoimmune disease. Thus, considerations of gender should be at the forefront of all studies that attempt to define mechanisms that underpin autoimmune disease.", "title": "" }, { "docid": "c86e4bf0577f49d6d4384379651c7d9a", "text": "The following paper discusses exploratory factor analysis and gives an overview of the statistical technique and how it is used in various research designs and applications. A basic outline of how the technique works and its criteria, including its main assumptions are discussed as well as when it should be used. Mathematical theories are explored to enlighten students on how exploratory factor analysis works, an example of how to run an exploratory factor analysis on SPSS is given, and finally a section on how to write up the results is provided. This will allow readers to develop a better understanding of when to employ factor analysis and how to interpret the tables and graphs in the output.", "title": "" }, { "docid": "da414d5fce36272332a1a558e35e4b9a", "text": "IoT service in home domain needs common and effective ways to manage various appliances and devices. So, the home environment needs a gateway that provides dynamical device registration and discovery. In this paper, we propose the IoT Home Gateway that supports abstracted device data to remove heterogeneity, device discovery by DPWS, Auto-configuration for constrained devices such as Arduino. Also, the IoT Home Gateway provides lightweight information delivery using MQTT protocol. In addition, we show implementation results that access and control the device according to the home energy saving scenario.", "title": "" }, { "docid": "a14edb268c5450ec22c6ede1486fa0fc", "text": "Two large problems faced by virtual environment designers are lack of haptic feedback and constraints imposed by limited tracker space. Passive haptic feedback has been used effectively to provide a sense of touch to users (Insko, et al., 2001). Redirected walking is a promising solution to the problem of limited tracker space (Razzaque, et al., 2001). However, these solutions to these two problems are typically mutually exclusive because their requirements conflict with one another. We introduce a method by which they can be combined to address both problems simultaneously.", "title": "" }, { "docid": "6be2ecf9323b04c5e93276c9a4ca4b96", "text": "A printed wide-slot antenna for wideband applications is proposed and experimentally investigated in this communication. A modified L-shaped microstrip line is used to excite the square slot. It consists of a horizontal line, a square patch, and a vertical line. For comparison, a simple L-shaped feed structure with the same line width is used as a reference geometry. 
The reference antenna exhibits dual resonance (lower resonant frequency <i>f</i><sub>1</sub>, upper resonant frequency <i>f</i><sub>2</sub>). When the square patch is embedded in the middle of the L-shaped line, <i>f</i><sub>1</sub> decreases, <i>f</i><sub>2</sub> remains unchanged, and a new resonance mode is formed between <i>f</i><sub>1</sub> and <i>f</i><sub>2</sub> . Moreover, if the size of the square patch is increased, an additional (fourth) resonance mode is formed above <i>f</i><sub>2</sub>. Thus, the bandwidth of a slot antenna is easily enhanced. The measured results indicate that this structure possesses a wide impedance bandwidth of 118.4%, which is nearly three times that of the reference antenna. Also, a stable radiation pattern is observed inside the operating bandwidth. The gain variation is found to be less than 1.7 dB.", "title": "" }, { "docid": "562df031fad2ed1583c1def457d74392", "text": "Social interaction is a cornerstone of human life, yet the neural mechanisms underlying social cognition are poorly understood. Recently, research that integrates approaches from neuroscience and social psychology has begun to shed light on these processes, and converging evidence from neuroimaging studies suggests a unique role for the medial frontal cortex. We review the emerging literature that relates social cognition to the medial frontal cortex and, on the basis of anatomical and functional characteristics of this brain region, propose a theoretical model of medial frontal cortical function relevant to different aspects of social cognitive processing.", "title": "" }, { "docid": "de83d02f5f120163ed86050ee6962f50", "text": "Researchers have recently questioned the benefits associated with having high self-esteem. The authors propose that the importance of self-esteem lies more in how people strive for it rather than whether it is high or low. They argue that in domains in which their self-worth is invested, people adopt the goal to validate their abilities and qualities, and hence their self-worth. When people have self-validation goals, they react to threats in these domains in ways that undermine learning; relatedness; autonomy and self-regulation; and over time, mental and physical health. The short-term emotional benefits of pursuing self-esteem are often outweighed by long-term costs. Previous research on self-esteem is reinterpreted in terms of self-esteem striving. Cultural roots of the pursuit of self-esteem are considered. Finally, the alternatives to pursuing self-esteem, and ways of avoiding its costs, are discussed.", "title": "" }, { "docid": "1aeeed59a3f10790e2a6d8d8e26ad964", "text": "Concurrency bugs are widespread in multithreaded programs. Fixing them is time-consuming and error-prone. We present CFix, a system that automates the repair of concurrency bugs. CFix works with a wide variety of concurrency-bug detectors. For each failure-inducing interleaving reported by a bug detector, CFix first determines a combination of mutual-exclusion and order relationships that, once enforced, can prevent the buggy interleaving. CFix then uses static analysis and testing to determine where to insert what synchronization operations to force the desired mutual-exclusion and order relationships, with a best effort to avoid deadlocks and excessive performance losses. CFix also simplifies its own patches by merging fixes for related bugs. 
Evaluation using four different types of bug detectors and thirteen real-world concurrency-bug cases shows that CFix can successfully patch these cases without causing deadlocks or excessive performance degradation. Patches automatically generated by CFix are of similar quality to those manually written by developers.", "title": "" }, { "docid": "11a2882124e64bd6b2def197d9dc811a", "text": "1 Abstract— Clustering is the most acceptable technique to analyze the raw data. Clustering can help detect intrusions when our training data is unlabeled, as well as for detecting new and unknown types of intrusions. In this paper we are trying to analyze the NSL-KDD dataset using Simple K-Means clustering algorithm. We tried to cluster the dataset into normal and four of the major attack categories i.e. DoS, Probe, R2L, U2R. Experiments are performed in WEKA environment. Results are verified and validated using test dataset. Our main objective is to provide the complete analysis of NSL-KDD intrusion detection dataset.", "title": "" }, { "docid": "1e2768be2148ff1fd102c6621e8da14d", "text": "Example-based learning for computer vision can be difficult when a large number of examples to represent each pattern or object class is not available. In such situations, learning from a small number of samples is of practical value. To study this issue, the task of face expression recognition with a small number of training images of each expression is considered. A new technique based on linear programming for both feature selection and classifier training is introduced. A pairwise framework for feature selection, instead of using all classes simultaneously, is presented. Experimental results compare the method with three others: a simplified Bayes classifier, support vector machine, and AdaBoost. Finally, each algorithm is analyzed and a new categorization of these algorithms is given, especially for learning from examples in the small sample case.", "title": "" }, { "docid": "c987a825c8f92030dcb932a3094c7a78", "text": "This paper is concerned with the problem of adaptive fuzzy tracking control for a class of multi-input and multi-output (MIMO) strict-feedback nonlinear systems with both unknown nonsymmetric dead-zone inputs and immeasurable states. In this research, fuzzy logic systems are utilized to evaluate the unknown nonlinear functions, and a fuzzy adaptive state observer is established to estimate the unmeasured states. Based on the information of the bounds of the dead-zone slopes as well as treating the time-varying inputs coefficients as a system uncertainty, a new adaptive fuzzy output feedback control approach is developed via the backstepping recursive design technique. It is shown that the proposed control approach can assure that all the signals of the resulting closed-loop system are semiglobally uniformly ultimately bounded. It is also shown that the observer and tracking errors converge to a small neighborhood of the origin by selecting appropriate design parameters. Simulation examples are also provided to illustrate the effectiveness of the proposed approach.", "title": "" }, { "docid": "493efb9d2edc9eeda696b9b4a3ff036b", "text": "BACKGROUND\nIn patients with previously untreated chronic lymphocytic leukemia (CLL) and comorbidities, treatment with the glycoengineered, type II anti-CD20 monoclonal antibody obinutuzumab (Gazyva®) (GA101) plus chlorambucil (Leukeran®) was associated with superior outcomes to rituximab (Rituxan®) plus chlorambucil, with a similar safety profile. 
However, a higher occurrence of infusion-related reactions (IRRs) was reported with obinutuzumab. These reactions typically require additional management.\n\n\nOBJECTIVES\nThe focus of this article is to provide oncology nurses and physicians with advice for obinutuzumab IRR management based on clinical trial data and nursing experience.\n\n\nMETHODS\nThe authors reviewed the published management strategies for IRRs with obinutuzumab that were identified during the phase III CLL11 trial and an expanded access phase IIb study (ML28979). Practical advice for obinutuzumab IRR management was developed based on available clinical trial information and nursing experience.\n\n\nFINDINGS\nIRRs with obinutuzumab are generally manageable. Most IRRs (all grades), and all grade 3-4 IRRs, occurred during the first infusion. Therefore, IRR management could be improved substantially with extra vigilance at this early stage.", "title": "" }, { "docid": "7f82ff12310f74b17ba01cac60762a8c", "text": "For worst case parameter mismatch, modest levels of unbalance are predicted through the use of minimum gate decoupling, dynamic load lines with high Q values, common source inductance or high yield screening. Each technique is evaluated in terms of current unbalance, transition energy, peak turn-off voltage and parasitic oscillations, as appropriate, for various pulse duty cycles and frequency ranges.", "title": "" }, { "docid": "3682143e9cfe7dd139138b3b533c8c25", "text": "In brushless excitation systems, the rotating diodes can experience open- or short-circuits. For a three-phase synchronous generator under no-load, we present theoretical development of effects of diode failures on machine output voltage. Thereby, we expect the spectral response faced with each fault condition, and we propose an original algorithm for state monitoring of rotating diodes. Moreover, given experimental observations of the spectral behavior of stray flux, we propose an alternative technique. Laboratory tests have proven the effectiveness of the proposed methods for detection of fault diodes, even when the generator has been fully loaded. However, their ability to distinguish between cases of diodes interrupted and short-circuited, has been limited to the no-load condition, and certain loads of specific natures.", "title": "" }, { "docid": "ae2ee04dda7ebe9284b44be0627a9a6b", "text": "Literacy is one of the great challenges in the developing world. But universal education is an unattainable dream for those children who lack access to quality educational resources such as well-prepared teachers and schools. Worse, many of them do not attend school regularly due to their need to work for the family in the agricultural fields or households. This work commitment puts formal education far out of their reach. On the other hand, educational games on cellphones hold the promise of making learning more accessible and enjoyable. In our projects 4th year, we reached a stage where we could implement a semester-long pilot on cellphone-based learning. The pilot study took the form of an after-school program in a village in India. This paper reports on this summative learning assessment. While we found learning benefits across the board, it seemed that more of the gains accrued to those children who were better equipped to take advantage of this opportunity. 
We conclude with future directions for designing educational games that target less well-prepared children in developing regions.", "title": "" }, { "docid": "db207eb0d5896c2aad1f8485bc597e45", "text": "One of the serious obstacles to the applications of speech emotion recognition systems in real-life settings is the lack of generalization of the emotion classifiers. Many recognition systems often present a dramatic drop in performance when tested on speech data obtained from different speakers, acoustic environments, linguistic content, and domain conditions. In this letter, we propose a novel unsupervised domain adaptation model, called Universum autoencoders, to improve the performance of the systems evaluated in mismatched training and test conditions. To address the mismatch, our proposed model not only learns discriminative information from labeled data, but also learns to incorporate the prior knowledge from unlabeled data into the learning. Experimental results on the labeled Geneva Whispered Emotion Corpus database plus other three unlabeled databases demonstrate the effectiveness of the proposed method when compared to other domain adaptation methods.", "title": "" } ]
scidocsrr
9751890a4be66d2e8c2b611f2fd46779
Modern robust data analysis methods: measures of central tendency.
[ { "docid": "5a362bb261748794a1cd0ce455f277ae", "text": "Hundreds of articles in statistical journals have pointed out that standard analysis of variance, Pearson productmoment correlations, and least squares regression can be highly misleading and can have relatively low power even under very small departures from normality. In practical terms, psychology journals are littered with nonsignificant results that would have been significant if a more modern method had been used. Modern robust techniques, developed during the past 30 years, provide very effective methods for dealing with nonnormality, and they compete very well with conventional procedures when standard assumptions are met. In addition, modern methods provide accurate confidence intervals for a much broader range of situations, they provide more effective methods for detecting and studying outliers, and they can be used to get a deeper understanding of how variables are related. This article outlines and illustrates these results.", "title": "" } ]
[ { "docid": "df764ac07ffb90296ddb7159a21bf898", "text": "The string-to-string correction problem asks for a sequence S of \"edit operations\" of minimal cost such that ~(A) = B, for given strings A and B. The edit operations previously investigated allow changing one symbol of a string into another single symbol, deleting one symbol from a string, or inserting a single symbol into a string. This paper extends the set of allowable edit operations to include the operation of interchanging the positions of two adjacent characters Under certain restrictions on edit-operation costs, it is shown that the extended problem can still be solved in time proportional to the product of the lengths of the given strings.", "title": "" }, { "docid": "c1a4921eb85dc51e690c10649a582bf1", "text": "System thinking skills are a prerequisite for acting successfully and responsibly in a complex world. However, traditional education largely fails to enhance system thinking skills whereas learner-centered educational methods seem more promising. Several such educational methods are compared with respect to their suitability for improving system thinking. It is proposed that integrated learning environments consisting of system dynamics models and additional didactical material have positive learning effects.This is exemplified by the illustration and validation of two learning sequences.", "title": "" }, { "docid": "8395401a437c7c3d0e474776d3603bc2", "text": "Refactoring is the process of applying behavior-preserving transformations (called \"refactorings\") in order to improve a program's design. Associated with a refactoring is a set of preconditions that must be satisfied to guarantee that program behavior is preserved, and a set of source code modifications. An important category of refactorings is concerned with generalization (e.g., Extract Interface for re-routing the access to a class via a newly created interface, and Pull Up Members for moving members into a superclass). For these refactorings, both the preconditions and the set of allowable source code modifications depend on interprocedural relationships between types of variables. We present an approach in which type constraints are used to verify the preconditions and to determine the allowable source code modifications for a number of generalization-related refactorings. This work is implemented in the standard distribution of Eclipse (see www.eclipse.org).", "title": "" }, { "docid": "00da663c6e2ff403d1526d28a2499aad", "text": "M.Suruthi Murugesan, R. Pavitha Devi, S. Deepthi, V.Sri Lavanya & Dr. Annie Princy Ph.D Department of information technology (B-tech), Panimalar Engineering College. Chennai, India . Abstract: The Forum is a huge virtual space where to express and share individual opinions, influencing any aspect of life, with implications for marketing and communication alike. Forums are influencing users preferences by shaping their attitudes and behaviours. Monitoring the suspicious activities is a good way to measure users loyalty, keeping a track on their sentiment towards their posts. The exponential advancement in information and communication technology has fostered the creation of new online forums for much online discussion and has also reduced distances between people. Unfortunately, malicious people use these online forums for illegal purposes. In online forums, the users produce several and various formats of suspicious posts (text, image, video...) 
and exchange them online with other people .The law enforcement agencies are looking for solutions to monitor these discussion forums for possible illegal activities and download suspected postings that are in text formats as evidence for investigation .The data in most online forums are stored in text format, so this system will focus only on text posts.", "title": "" }, { "docid": "15054343b43ae67e877e5bf0a9b93afd", "text": "We discuss Hinton's (1989) relative payoff procedure (RPP), a static reinforcement learning algorithm whose foundation is not stochastic gradient ascent. We show circumstances under which applying the RPP is guaranteed to increase the mean return, even though it can make large changes in the values of the parameters. The proof is based on a mapping between the RPP and a form of the expectation-maximization procedure of Dempster, Laird, and Rubin (1977).", "title": "" }, { "docid": "dfb3a6fea5c2b12e7865f8b6664246fb", "text": "We develop a new version of prospect theory that employs cumulative rather than separable decision weights and extends the theory in several respects. This version, called cumulative prospect theory, applies to uncertain as well as to risky prospects with any number of outcomes, and it allows different weighting functions for gains and for losses. Two principles, diminishing sensitivity and loss aversion, are invoked to explain the characteristic curvature of the value function and the weighting functions. A review of the experimental evidence and the results of a new experiment confirm a distinctive fourfold pattern of risk attitudes: risk aversion for gains and risk seeking for losses of high probability; risk seeking for gains and risk aversion for losses of low probability. Expected utility theory reigned for several decades as the dominant normative and descriptive model of decision making under uncertainty, but it has come under serious question in recent years. There is now general agreement that the theory does not provide an adequate description of individual choice: a substantial body of evidence shows that decision makers systematically violate its basic tenets. Many alternative models have been proposed in response to this empirical challenge (for reviews, see Camerer, 1989; Fishburn, 1988; Machina, 1987). Some time ago we presented a model of choice, called prospect theory, which explained the major violations of expected utility theory in choices between risky prospects with a small number of outcomes (Kahneman and Tversky, 1979; Tversky and Kahneman, 1986). The key elements of this theory are 1) a value function that is concave for gains, convex for losses, and steeper for losses than for gains, *An earlier version of this article was entitled \"Cumulative Prospect Theory: An Analysis of Decision under Uncertainty.\" This article has benefited from discussions with Colin Camerer, Chew Soo-Hong, David Freedman, and David H. Krantz. We are especially grateful to Peter P. Wakker for his invaluable input and contribution to the axiomatic analysis. We are indebted to Richard Gonzalez and Amy Hayes for running the experiment and analyzing the data. This work was supported by Grants 89-0064 and 88-0206 from the Air Force Office of Scientific Research, by Grant SES-9109535 from the National Science Foundation, and by the Sloan Foundation. 298 AMOS TVERSKY/DANIEL KAHNEMAN and 2) a nonlinear transformation of the probability scale, which overweights small probabilities and underweights moderate and high probabilities. 
In an important later development, several authors (Quiggin, 1982; Schmeidler, 1989; Yaari, 1987; Weymark, 1981) have advanced a new representation, called the rank-dependent or the cumulative functional, that transforms cumulative rather than individual probabilities. This article presents a new version of prospect theory that incorporates the cumulative functional and extends the theory to uncertain as well to risky prospects with any number of outcomes. The resulting model, called cumulative prospect theory, combines some of the attractive features of both developments (see also Luce and Fishburn, 1991). It gives rise to different evaluations of gains and losses, which are not distinguished in the standard cumulative model, and it provides a unified treatment of both risk and uncertainty. To set the stage for the present development, we first list five major phenomena of choice, which violate the standard model and set a minimal challenge that must be met by any adequate descriptive theory of choice. All these findings have been confirmed in a number of experiments, with both real and hypothetical payoffs. Framing effects. The rational theory of choice assumes description invariance: equivalent formulations of a choice problem should give rise to the same preference order (Arrow, 1982). Contrary to this assumption, there is much evidence that variations in the framing of options (e.g., in terms of gains or losses) yield systematically different preferences (Tversky and Kahneman, 1986). Nonlinear preferences. According to the expectation principle, the utility of a risky prospect is linear in outcome probabilities. Allais's (1953) famous example challenged this principle by showing that the difference between probabilities of .99 and 1.00 has more impact on preferences than the difference between 0.10 and 0.11. More recent studies observed nonlinear preferences in choices that do not involve sure things (Camerer and Ho, 1991). Source dependence. People's willingness to bet on an uncertain event depends not only on the degree of uncertainty but also on its source. Ellsberg (1961) observed that people prefer to bet on an urn containing equal numbers of red and green balls, rather than on an urn that contains red and green balls in unknown proportions. More recent evidence indicates that people often prefer a bet on an event in their area of competence over a bet on a matched chance event, although the former probability is vague and the latter is clear (Heath and Tversky, 1991). Risk seeking. Risk aversion is generally assumed in economic analyses of decision under uncertainty. However, risk-seeking choices are consistently observed in two classes of decision problems. First, people often prefer a small probability of winning a large prize over the expected value of that prospect. Second, risk seeking is prevalent when people must choose between a sure loss and a substantial probability of a larger loss. Loss' aversion. One of the basic phenomena of choice under both risk and uncertainty is that losses loom larger than gains (Kahneman and Tversky, 1984; Tversky and Kahneman, 1991). The observed asymmetry between gains and losses is far too extreme to be explained by income effects or by decreasing risk aversion. ADVANCES IN PROSPECT THEORY 299 The present development explains loss aversion, risk seeking, and nonlinear preferences in terms of the value and the weighting functions. It incorporates a framing process, and it can accommodate source preferences. 
Additional phenomena that lie beyond the scope of the theory--and of its alternatives--are discussed later. The present article is organized as follows. Section 1.1 introduces the (two-part) cumulative functional; section 1.2 discusses relations to previous work; and section 1.3 describes the qualitative properties of the value and the weighting functions. These properties are tested in an extensive study of individual choice, described in section 2, which also addresses the question of monetary incentives. Implications and limitations of the theory are discussed in section 3. An axiomatic analysis of cumulative prospect theory is presented in the appendix.", "title": "" }, { "docid": "2352771f157edd90a0f8d5ea390b000b", "text": "BACKGROUND\nCurrent evidence suggests that there is a prodromal stage in Parkinson disease characterized by a variety of nonmotor symptoms.\n\n\nMETHODS AND RESULTS\nA 69-year-old man presented to our sleep center with isolated rapid eye movement sleep behavior disorder. During a 10-year follow-up period, longitudinal clinical and laboratory assessments indicated the development of hyposmia, depression, mild cognitive impairment, and constipation. Parkinsonism was absent, but dopamine transporter imaging showed subclinical substantia nigra damage. Postmortem examination demonstrated neuronal loss and Lewy body pathology in the peripheral autonomic nervous system (eg, cardiac and myenteric plexus), olfactory bulb, medulla, pons, substantia nigra pars compacta (estimated cell loss, 20%-30%), nucleus basalis of Meynert, and amygdala, sparing the neocortex.\n\n\nCONCLUSIONS\nOur observations indicate that nonmotor symptoms plus widespread peripheral and central nervous system pathological changes occur before parkinsonism and dementia onset in diseases associated with Lewy pathology. The current diagnostic criteria for Parkinson's disease miss these patients, who present only with nonmotor symptoms.", "title": "" }, { "docid": "d2a94d4dc8d8d5d71fc5f838f692544f", "text": "This introductory chapter reviews the emergence, classification, and contemporary examples of cultural robots: social robots that are shaped by, producers of, or participants in culture. We review the emergence of social robotics as a field, and then track early references to the terminology and key lines of inquiry of Cultural Robotics. Four categories of the integration of culture with robotics are outlined; and the content of the contributing chapters following this introductory chapter are summarised within these categories.", "title": "" }, { "docid": "bdbbe079493bbfec7fb3cb577c926997", "text": "A large amount of information on the Web is contained in regularly structured objects, which we call data records. Such data records are important because they often present the essential information of their host pages, e.g., lists of products or services. It is useful to mine such data records in order to extract information from them to provide value-added services. Existing automatic techniques are not satisfactory because of their poor accuracies. In this paper, we propose a more effective technique to perform the task. The technique is based on two observations about data records on the Web and a string matching algorithm. The proposed technique is able to mine both contiguous and non-contiguous data records. 
Our experimental results show that the proposed technique outperforms existing techniques substantially.", "title": "" }, { "docid": "659f362b1f30c32cdaca90e3141596fb", "text": "Purpose – The paper aims to focus on so-called NoSQL databases in the context of cloud computing. Design/methodology/approach – Architectures and basic features of these databases are studied, particularly their horizontal scalability and concurrency model, that is mostly weaker than ACID transactions in relational SQL-like database systems. Findings – Some characteristics like a data model and querying capabilities of NoSQL databases are discussed in more detail. Originality/value – The paper shows vary different data models and query possibilities in a common terminology enabling comparison and categorization of NoSQL databases.", "title": "" }, { "docid": "eade87f676c023cd3024226b48131ffb", "text": "Finding the dense regions of a graph and relations among them is a fundamental task in network analysis. Nucleus decomposition is a principled framework of algorithms that generalizes the k-core and k-truss decompositions. It can leverage the higher-order structures to locate the dense subgraphs with hierarchical relations. Computation of the nucleus decomposition is performed in multiple steps, known as the peeling process, and it requires global information about the graph at any time. This prevents the scalable parallelization of the computation. Also, it is not possible to compute approximate and fast results by the peeling process, because it does not produce the densest regions until the algorithm is complete. In a previous work, Lu et al. proposed to iteratively compute the h-indices of vertex degrees to obtain the core numbers and prove that the convergence is obtained after a finite number of iterations. In this work, we generalize the iterative h-index computation for any nucleus decomposition and prove convergence bounds. We present a framework of local algorithms to obtain the exact and approximate nucleus decompositions. Our algorithms are pleasingly parallel and can provide approximations to explore time and quality trade-offs. Our shared-memory implementation verifies the efficiency, scalability, and effectiveness of our algorithms on real-world networks. In particular, using 24 threads, we obtain up to 4.04x and 7.98x speedups for k-truss and (3, 4) nucleus decompositions.", "title": "" }, { "docid": "5e8154a99b4b0cc544cab604b680ebd2", "text": "This work presents performance of robust wearable antennas intended to operate in Wireless Body Area Networks (W-BAN) in UHF, TETRAPOL communication band, 380-400 MHz. We propose a Planar Inverted F Antenna (PIFA) as reliable antenna type for UHF W-BAN applications. In order to satisfy the robustness requirements of the UHF band, both from communication and mechanical aspect, a new technology for building these antennas was proposed. The antennas are built out of flexible conductive sheets encapsulated inside a silicone based elastomer, Polydimethylsiloxane (PDMS). The proposed antennas are resistive to washing, bending and perforating. From the communication point of view, opting for a PIFA antenna type we solve the problem of coupling to the wearer and thus improve the overall communication performance of the antenna. Several different tests and comparisons were performed in order to check the stability of the proposed antennas when they are placed on the wearer or left in a common everyday environ- ment, on the ground, table etc. 
S11 deviations are observed and compared with the commercially available wearable antennas. As a final check, the antennas were tested in the frame of an existing UHF TETRAPOL communication system. All the measurements were performed in a real university campus scenario, showing reliable and good performance of the proposed PIFA antennas.", "title": "" }, { "docid": "2f00e99644bbf1ef1ff16246f4a0d556", "text": "A framework for automatic facial expression recognition combining Active Appearance Model (AAM) and Linear Discriminant Analysis (LDA) is proposed. Seven different expressions of several subjects, representing the neutral face and the facial emotions of happiness, sadness, surprise, anger, fear and disgust were analysed. The proposed solution starts by describing the human face by an AAM model, projecting the appearance results to a Fisherspace using LDA to emphasize the different expression categories. Finaly the performed classification is based on malahanobis distance.", "title": "" }, { "docid": "42d79800699b372489ad6c95ac91b21c", "text": "Being able to reason in an environment with a large number of discrete actions is essential to bringing reinforcement learning to a larger class of problems. Recommender systems, industrial plants and language models are only some of the many real-world tasks involving large numbers of discrete actions for which current methods can be difficult or even impossible to apply. An ability to generalize over the set of actions as well as sub-linear complexity relative to the size of the set are both necessary to handle such tasks. Current approaches are not able to provide both of these, which motivates the work in this paper. Our proposed approach leverages prior information about the actions to embed them in a continuous space upon which it can generalize. Additionally, approximate nearest-neighbor methods allow for logarithmic-time lookup complexity relative to the number of actions, which is necessary for time-wise tractable training. This combined approach allows reinforcement learning methods to be applied to large-scale learning problems previously intractable with current methods. We demonstrate our algorithm’s abilities on a series of tasks having up to one million actions.", "title": "" }, { "docid": "8bcf5693c512df2429b49521239f2d87", "text": "Reliable segmentation of cell nuclei from three dimensional (3D) microscopic images is an important task in many biological studies. We present a novel, fully automated method for the segmentation of cell nuclei from 3D microscopic images. It was designed specifically to segment nuclei in images where the nuclei are closely juxtaposed or touching each other. The segmentation approach has three stages: 1) a gradient diffusion procedure, 2) gradient flow tracking and grouping, and 3) local adaptive thresholding. Both qualitative and quantitative results on synthesized and original 3D images are provided to demonstrate the performance and generality of the proposed method. Both the over-segmentation and under-segmentation percentages of the proposed method are around 5%. The volume overlap, compared to expert manual segmentation, is consistently over 90%. The proposed algorithm is able to segment closely juxtaposed or touching cell nuclei obtained from 3D microscopy imaging with reasonable accuracy.", "title": "" }, { "docid": "727a53dad95300ee9749c13858796077", "text": "Device to device (D2D) communication underlaying LTE can be used to distribute traffic loads of eNBs. 
However, a conventional D2D link is controlled by an eNB, and it still remains burdens to the eNB. We propose a completely distributed power allocation method for D2D communication underlaying LTE using deep learning. In the proposed scheme, a D2D transmitter can decide the transmit power without any help from other nodes, such as an eNB or another D2D device. Also, the power set, which is delivered from each D2D node independently, can optimize the overall cell throughput. We suggest a distirbuted deep learning architecture in which the devices are trained as a group, but operate independently. The deep learning can optimize total cell throughput while keeping constraints such as interference to eNB. The proposed scheme, which is implemented model using Tensorflow, can provide same throughput with the conventional method even it operates completely on distributed manner.", "title": "" }, { "docid": "ff2322cee61da0ca6013037dce09bb27", "text": "In this paper, we propose to train a network with both binary weights and binary activations, designed specifically for mobile devices with limited computation capacity and power consumption. Previous works on quantizing CNNs uncritically assume the same architecture with fullprecision networks, which we term value approximation. Their objective is to preserve the floating-point information using a set of discrete values. However, we take a novel view—for best performance it is very likely that a different architecture may be better suited to deal with binary weights as well as binary activations. Thus we directly design such a highly accurate binary network structure, which is termed structure approximation. In particular, we propose a “network decomposition” strategy in which we divide the networks into groups and aggregate a set of homogeneous binary branches to implicitly reconstruct the full-precision intermediate feature maps. In addition, we also learn the connections between each group. We further provide a comprehensive comparison among all quantization categories. Experiments on ImageNet classification tasks demonstrate the superior performance of the proposed model, named Group-Net, over various popular architectures. In particular, we outperform the previous best binary neural network in terms of accuracy as well as saving huge computational complexity. Furthermore, the proposed Group-Net can effectively utilize task specific properties for strong generalization. In particular, we propose to extend Group-Net for lossless semantic segmentation. This is the first work proposed on solving dense pixels prediction based on BNNs in the literature. Actually, we claim that considering both value and structure approximation should be the future development direction of BNNs.", "title": "" }, { "docid": "c760e6db820733dc3f57306eef81e5c9", "text": "Recently, applying the novel data mining techniques for financial time-series forecasting has received much research attention. However, most researches are for the US and European markets, with only a few for Asian markets. This research applies Support-Vector Machines (SVMs) and Back Propagation (BP) neural networks for six Asian stock markets and our experimental results showed the superiority of both models, compared to the early researches.", "title": "" }, { "docid": "1d6b58df486d618341cea965724a7da9", "text": "The focus on human capital as a driver of economic growth for developing countries has led to undue attention on school attainment. 
Developing countries have made considerable progress in closing the gap with developed countries in terms of school attainment, but recent research has underscored the importance of cognitive skills for economic growth. This result shifts attention to issues of school quality, and there developing countries have been much less successful in closing the gaps with developed countries. Without improving school quality, developing countries will find it difficult to improve their long run economic performance. JEL Classification: I2, O4, H4 Highlights: ! ! Improvements in long run growth are closely related to the level of cognitive skills of the population. ! ! Development policy has inappropriately emphasized school attainment as opposed to educational achievement, or cognitive skills. ! ! Developing countries, while improving in school attainment, have not improved in quality terms. ! ! School policy in developing countries should consider enhancing both basic and advanced skills.", "title": "" }, { "docid": "3a798fac488b605c145d3ce171f4dcba", "text": "In the context of civil rights law, discrimination refers to unfair or unequal treatment of people based on membership to a category or a minority, without regard to individual merit. Discrimination in credit, mortgage, insurance, labor market, and education has been investigated by researchers in economics and human sciences. With the advent of automatic decision support systems, such as credit scoring systems, the ease of data collection opens several challenges to data analysts for the fight against discrimination. In this article, we introduce the problem of discovering discrimination through data mining in a dataset of historical decision records, taken by humans or by automatic systems. We formalize the processes of direct and indirect discrimination discovery by modelling protected-by-law groups and contexts where discrimination occurs in a classification rule based syntax. Basically, classification rules extracted from the dataset allow for unveiling contexts of unlawful discrimination, where the degree of burden over protected-by-law groups is formalized by an extension of the lift measure of a classification rule. In direct discrimination, the extracted rules can be directly mined in search of discriminatory contexts. In indirect discrimination, the mining process needs some background knowledge as a further input, for example, census data, that combined with the extracted rules might allow for unveiling contexts of discriminatory decisions. A strategy adopted for combining extracted classification rules with background knowledge is called an inference model. In this article, we propose two inference models and provide automatic procedures for their implementation. An empirical assessment of our results is provided on the German credit dataset and on the PKDD Discovery Challenge 1999 financial dataset.", "title": "" } ]
scidocsrr
977fc4b4d2da31bd60c7808406645c78
Barriers and benefits of quality management in the construction industry: An empirical study
[ { "docid": "5971934855f9d4dde2a7fc91e757606c", "text": "The use of total quality management (TQM), which creates a system of management procedures that focuses on customer satisfaction and transforms the corporate culture so as to guarantee continual improvement, is discussed. The team approach essential to its implementation is described. Two case studies of applying TQM at AT&T are presented.<<ETX>>", "title": "" } ]
[ { "docid": "8a3d5500299676e160f661d87c13d617", "text": "A novel method for visual place recognition is introduced and evaluated, demonstrating robustness to perceptual aliasing and observation noise. This is achieved by increasing discrimination through a more structured representation of visual observations. Estimation of observation likelihoods are based on graph kernel formulations, utilizing both the structural and visual information encoded in covisibility graphs. The proposed probabilistic model is able to circumvent the typically difficult and expensive posterior normalization procedure by exploiting the information available in visual observations. Furthermore, the place recognition complexity is independent of the size of the map. Results show improvements over the state-of-theart on a diverse set of both public datasets and novel experiments, highlighting the benefit of the approach.", "title": "" }, { "docid": "ad9cc87411f1f40ab2f5ee0e994479b8", "text": "Even though it has been over 20 years since Spence and Robbins (1992) first showed perfectionism and workaholism to be closely related, the relationship between perfectionism and workaholism is still under-researched. In particular, it has remained unclear why perfectionism is linked to workaholism. Using data from 131 employees, this study—examining self-oriented and socially prescribed perfectionism—investigated whether intrinsic–extrinsic work motivation could explain the positive relationship between perfectionism and workaholism. Whereas socially prescribed perfectionism was unrelated to workaholism, self-oriented perfectionism showed a positive correlation with workaholism. Furthermore autonomous (integrated and identified regulation) and controlled (introjected and external regulation) work motivation showed positive correlations. However, when all predictors were entered in a regression analysis, only self-oriented perfectionism, identified regulation, and introjected regulation positively predicted workaholism. In addition, a mediation analysis showed that identified and introjected regulation fully mediated the effect of self-oriented perfectionism on workaholism. The findings suggest that high levels of work motivation explain why many self-oriented perfectionists are workaholic. 2013 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "033fb4c857f79fc593bd9a7e12269b49", "text": "Within any Supply Chain Risk Management (SCRM) approach, the concept “Risk” occupies a central interest. Numerous frameworks which differ by the provided definitions and relationships between supply chain risk dimensions and metrics are available. This article provides an outline of the most common SCRM methodologies, in order to suggest an “integrated conceptual model”. The objective of such an integrated model is not to describe yet another conceptual model of Risk, but rather to offer a concrete structure incorporating the characteristics of the supply chain in the risk management process. The proposed alignment allows a better understanding of the dynamic of risk management strategies. Firstly, the model was analyzed through its positioning and its contributions compared to existing tools and models in the literature. This comparison highlights the critical points overlooked in the past. 
Secondly, the model was applied on case studies of major supply chain crisis.", "title": "" }, { "docid": "0850f46a4bcbe1898a6a2dca9f61ea61", "text": "Public opinion polarization is here conceived as a process of alignment along multiple lines of potential disagreement and measured as growing constraint in individuals' preferences. Using NES data from 1972 to 2004, the authors model trends in issue partisanship-the correlation of issue attitudes with party identification-and issue alignment-the correlation between pairs of issues-and find a substantive increase in issue partisanship, but little evidence of issue alignment. The findings suggest that opinion changes correspond more to a resorting of party labels among voters than to greater constraint on issue attitudes: since parties are more polarized, they are now better at sorting individuals along ideological lines. Levels of constraint vary across population subgroups: strong partisans and wealthier and politically sophisticated voters have grown more coherent in their beliefs. The authors discuss the consequences of partisan realignment and group sorting on the political process and potential deviations from the classic pluralistic account of American politics.", "title": "" }, { "docid": "0110e37c5525520a4db4b1a775dacddd", "text": "This paper presents a study of Linux API usage across all applications and libraries in the Ubuntu Linux 15.04 distribution. We propose metrics for reasoning about the importance of various system APIs, including system calls, pseudo-files, and libc functions. Our metrics are designed for evaluating the relative maturity of a prototype system or compatibility layer, and this paper focuses on compatibility with Linux applications. This study uses a combination of static analysis to understand API usage and survey data to weight the relative importance of applications to end users.\n This paper yields several insights for developers and researchers, which are useful for assessing the complexity and security of Linux APIs. For example, every Ubuntu installation requires 224 system calls, 208 ioctl, fcntl, and prctl codes and hundreds of pseudo files. For each API type, a significant number of APIs are rarely used, if ever. Moreover, several security-relevant API changes, such as replacing access with faccessat, have met with slow adoption. Finally, hundreds of libc interfaces are effectively unused, yielding opportunities to improve security and efficiency by restructuring libc.", "title": "" }, { "docid": "bf3d63ca8736d65c9b88976c18cd2e58", "text": "The increasing demand for traveler clearance at international border crossing points (BCPs) has motivated research for finding more efficient solutions. Automated border control (ABC) is emerging as a solution to enhance the convenience of travelers, the throughput of BCPs, and national security. This is the first comprehensive survey on the biometric techniques and systems that enable automatic identity verification in ABC. We survey the biometric literature relevant to identity verification and summarize the best practices and biometric techniques applicable to ABC, relying on real experience collected in the field. Furthermore, we select some of the major biometric issues raised and highlight the open research areas.", "title": "" }, { "docid": "9a836cca9eed9d933bae4df345d8e583", "text": "Customer churn prediction in Telecom industry is one of the most prominent research topics in recent years. 
It consists of detecting customers who are likely to cancel a subscription to a service. Recently, the mobile telecommunication market has changed from a rapidly growing market into a state of saturation and fierce competition. The focus of telecommunication companies has therefore shifted from building a large customer base into keeping customers in house. For that reason, it is valuable to know which customers are likely to switch to a competitor in the near future. The data extracted from telecom industry can help analyze the reasons of customer churn and use that information to retain the customers. We have proposed to build a model for churn prediction for telecommunication companies using data mining and machine learning techniques namely logistic regression and decision trees. A comparison is made based on efficiency of these algorithms on the available dataset.", "title": "" }, { "docid": "b4bca1a35fca1cca92b4f2e2f77152e1", "text": "This paper proposed design and development of a flexible UWB wearable antenna using flexible and elastic polymer substrate. Polydimethylsiloxane (PDMS) was chosen to be used as flexible substrate for the proposed antenna which is a kind of silicone elastic, it has attractive mechanical and electrical properties such as flexibility, softness, water resistance low permittivity and transparency. The proposed antenna consists of a rectangular patch with two steps notches in the upper side of the patch, resulting in a more compact and increase in the bandwidth. In addition, the proposed antenna has an elliptical slot for an enhancement of the bandwidth and gain. The bottom side edges of the patch have been truncated to provide an additional surface current path. The proposed UWB wearable antenna functions from 2.5 GHz to 12.4 GHz frequency range and it was successfully designed and the simulated result showed that the return loss was maintained less than -10 dB and VSWR kept less than 2 over the entire desired frequency range (2.5 GHz - 12.4 GHz). The gain of the proposed antenna varies with frequency and the maximum gain recorded is 4.56 dB at 6.5 GHz. Simultaneously, The radiation patterns of the proposed antenna are also presented. The performance of the antenna under bending condition is comparable with the normal condition's performance.", "title": "" }, { "docid": "e0f4670762f2df2b6e9af3d86ec62e2b", "text": "We address the task of pixel-level hand detection in the context of ego-centric cameras. Extracting hand regions in ego-centric videos is a critical step for understanding hand-object manipulation and analyzing hand-eye coordination. However, in contrast to traditional applications of hand detection, such as gesture interfaces or sign-language recognition, ego-centric videos present new challenges such as rapid changes in illuminations, significant camera motion and complex hand-object manipulations. To quantify the challenges and performance in this new domain, we present a fully labeled indoor/outdoor ego-centric hand detection benchmark dataset containing over 200 million labeled pixels, which contains hand images taken under various illumination conditions. Using both our dataset and a publicly available ego-centric indoors dataset, we give extensive analysis of detection performance using a wide range of local appearance features. Our analysis highlights the effectiveness of sparse features and the importance of modeling global illumination. 
We propose a modeling strategy based on our findings and show that our model outperforms several baseline approaches.", "title": "" }, { "docid": "50bd58b07a2cf7bf51ff291b17988a2c", "text": "A wideband linearly polarized antenna element with complementary sources is proposed and exploited for array antennas. The element covers a bandwidth of 38.7% from 50 to 74 GHz with an average gain of 8.7 dBi. The four-way broad wall coupler is applied for the 2 <inline-formula> <tex-math notation=\"LaTeX\">$\\times $ </tex-math></inline-formula> 2 subarray, which suppresses the cross-polarization of a single element. Based on the designed 2 <inline-formula> <tex-math notation=\"LaTeX\">$ \\times $ </tex-math></inline-formula> 2 subarray, two larger arrays have been designed and measured. The <inline-formula> <tex-math notation=\"LaTeX\">$4 \\times 4$ </tex-math></inline-formula> array exhibits 26.7% bandwidth, fully covering the 57–71 GHz unlicensed band. The <inline-formula> <tex-math notation=\"LaTeX\">$8 \\times 8$ </tex-math></inline-formula> array antenna covers a bandwidth of 14.5 GHz (22.9%) from 56.1 to 70.6 GHz with a peak gain of 26.7 dBi, and the radiation efficiency is around 80% within the matching band. It is demonstrated that the proposed antenna element and arrays can be used for future 5G applications to cover the 22% bandwidth of the unlicensed band with high gain and low loss.", "title": "" }, { "docid": "7f02090e896afacd6b70537c03956078", "text": "Although the literature on Asian Americans and racism has been emerging, few studies have examined how coping influences one's encounters with racism. To advance the literature, the present study focused on the psychological impact of Filipino Americans' experiences with racism and the role of coping as a mediator using a community-based sample of adults (N = 199). Two multiple mediation models were used to examine the mediating effects of active, avoidance, support-seeking, and forbearance coping on the relationship between perceived racism and psychological distress and self-esteem, respectively. Separate analyses were also conducted for men and women given differences in coping utilization. For men, a bootstrap procedure indicated that active, support-seeking, and avoidance coping were mediators of the relationship between perceived racism and psychological distress. Active coping was negatively associated with psychological distress, whereas both support seeking and avoidance were positively associated with psychological distress. A second bootstrap procedure for men indicated that active and avoidance coping mediated the relationship between perceived racism and self-esteem such that active coping was positively associated with self-esteem, and avoidance was negatively associated with self-esteem. For women, only avoidance coping had a significant mediating effect that was associated with elevations in psychological distress and decreases in self-esteem. The results highlight the importance of examining the efficacy of specific coping responses to racism and the need to differentiate between the experiences of men and women.", "title": "" }, { "docid": "e7e1fd16be5186474dc9e1690347716a", "text": "One-stage object detectors such as SSD or YOLO already have shown promising accuracy with small memory footprint and fast speed. However, it is widely recognized that one-stage detectors have difficulty in detecting small objects while they are competitive with two-stage methods on large objects. 
In this paper, we investigate how to alleviate this problem starting from the SSD framework. Due to their pyramidal design, the lower layer that is responsible for small objects lacks strong semantics(e.g contextual information). We address this problem by introducing a feature combining module that spreads out the strong semantics in a top-down manner. Our final model StairNet detector unifies the multi-scale representations and semantic distribution effectively. Experiments on PASCAL VOC 2007 and PASCAL VOC 2012 datasets demonstrate that Stair-Net significantly improves the weakness of SSD and outperforms the other state-of-the-art one-stage detectors.", "title": "" }, { "docid": "253b607fb5c22e1b8abdcc74ecc603ba", "text": "This paper surveys blockchain-based approaches for several security services. These services include authentication, confidentiality, privacy and access control list, data and resource provenance, and integrity assurance. All these services are critical for the current distributed applications, especially due to the large amount of data being processed over the networks and the use of cloud computing. Authentication ensures that the user is who he/she claims to be. Confidentiality guarantees that data cannot be read by unauthorized users. Privacy provides the users the ability to control who can access their data. Provenance allows an efficient tracking of the data and resources along with their ownership and utilization over the network. Integrity helps in verifying that the data has not been modified or altered. These services are currently managed by centralized controllers, for example, a certificate authority. Therefore, the services are prone to attacks on the centralized controller. On the other hand, blockchain is a secured and distributed ledger that can help resolve many of the problems with centralization. The objectives of this paper are to give insights on the use of security services for current applications, to highlight the state of the art techniques that are currently used to provide these services, to describe their challenges, and to discuss how the blockchain technology can resolve these challenges. Further, several blockchain-based approaches providing such security services are compared thoroughly. Challenges associated with using blockchain-based security services are also discussed to spur further research in this area.", "title": "" }, { "docid": "c84ba1b49ddc68c9d35897cf8572c9b0", "text": "The extraordinary growth of interconnected computer network & pervasive trends of using this network as new field for conducting Business process are stimulating the demand for new payment methods. These new methods attain high level of security, speed, privacy, decentralization & internationalization. This seminar surveys the state of art in payment technology & we proposed model for electronic payment gateway, On the basis of requirement of electronic payment gateway. E-payment is now one of most central research areas in E-commerce. E-payment system the automated process of exchanging monetary values among parties in business transactions. In this paper a brief overview of electronic payment gateway is provided. This, addresses the requirements for an electronic payment gateway from both the customers and the merchants' point of view. Most of the population doesn’t trust on the local existing online payment gateway because it is not very secure. Mostly people want to adopt electronic payment system as it has lots of advantages. 
They need such a gateway that fulfil their all requirements and provide security, privacy etc. On the basis of these requirements and the local infrastructure, we propose an electronic payment gateway for local environment.. Keywords— E-payment, Gateway", "title": "" }, { "docid": "a991cf65cd79abf578a935e1a28a9abb", "text": "Till now, neural abstractive summarization methods have achieved great success for single document summarization (SDS). However, due to the lack of large scale multi-document summaries, such methods can be hardly applied to multi-document summarization (MDS). In this paper, we investigate neural abstractive methods for MDS by adapting a state-of-the-art neural abstractive summarization model for SDS. We propose an approach to extend the neural abstractive model trained on large scale SDS data to the MDS task. Our approach only makes use of a small number of multi-document summaries for fine tuning. Experimental results on two benchmark DUC datasets demonstrate that our approach can outperform a variety of base-", "title": "" }, { "docid": "c07f7baed3648b190eca0f4753027b57", "text": "Objective: An autoencoder-based framework that simultaneously reconstruct and classify biomedical signals is proposed. Previous work has treated reconstruction and classification as separate problems. This is the first study that proposes a combined framework to address the issue in a holistic fashion. Methods: For telemonitoring purposes, reconstruction techniques of biomedical signals are largely based on compressed sensing (CS); these are “designed” techniques where the reconstruction formulation is based on some “assumption” regarding the signal. In this study, we propose a new paradigm for reconstruction—the reconstruction is “learned,” using an autoencoder; it does not require any assumption regarding the signal as long as there is sufficiently large training data. But since the final goal is to analyze/classify the signal, the system can also learn a linear classification map that is added inside the autoencoder. The ensuing optimization problem is solved using the Split Bregman technique. Results: Experiments were carried out on reconstructing and classifying electrocardiogram (ECG) (arrhythmia classification) and EEG (seizure classification) signals. Conclusion: Our proposed tool is capable of operating in a semi-supervised fashion. We show that our proposed method is better in reconstruction and more than an order magnitude faster than CS based methods; it is capable of real-time operation. Our method also yields better results than recently proposed classification methods. Significance: This is the first study offering an alternative to CS-based reconstruction. It also shows that the representation learning approach can yield better results than traditional methods that use hand-crafted features for signal analysis.", "title": "" }, { "docid": "e786d22cd1c30014d1a1dcdc655a56fb", "text": "Chemical fingerprints are used to represent chemical molecules by recording the presence or absence, or by counting the number of occurrences, of particular features or substructures, such as labeled paths in the 2D graph of bonds, of the corresponding molecule. These fingerprint vectors are used to search large databases of small molecules, currently containing millions of entries, using various similarity measures, such as the Tanimoto or Tversky's measures and their variants. 
Here, we derive simple bounds on these similarity measures and show how these bounds can be used to considerably reduce the subset of molecules that need to be searched. We consider both the case of single-molecule and multiple-molecule queries, as well as queries based on fixed similarity thresholds or aimed at retrieving the top K hits. We study the speedup as a function of query size and distribution, fingerprint length, similarity threshold, and database size |D| and derive analytical formulas that are in excellent agreement with empirical values. The theoretical considerations and experiments show that this approach can provide linear speedups of one or more orders of magnitude in the case of searches with a fixed threshold, and achieve sublinear speedups in the range of O(|D|0.6) for the top K hits in current large databases. This pruning approach yields subsecond search times across the 5 million compounds in the ChemDB database, without any loss of accuracy.", "title": "" }, { "docid": "f5e45faf2fb30e73a09a6d34fc642997", "text": "In this paper, we analyze the Common Platform Enumeration (CPE) dictionary and the Common Vulnerabilities and Exposures (CVE) feeds. These repositories are widely used in Vulnerability Management Systems (VMSs) to check for known vulnerabilities in software products. The analysis shows, among other issues, a lack of synchronization between both datasets that can lead to incorrect results output by VMSs relying on those datasets. To deal with these problems, we developed a method that recommends to a user a prioritized list of CPE identifiers for a given software product. The user can then assign (and, if necessary, adapt) the most suitable CPE identifier to the software so that regular (e.g., daily) checks can find known vulnerabilities for this software in the CVE feeds. Our evaluation of this method shows that this interaction is indeed necessary because a fully automated CPE assignment is prone to errors due to the CPE and CVE shortcomings. We implemented an open-source VMS that employs the proposed method and published it on GitHub.", "title": "" }, { "docid": "f1dae479a7ccfc4484d1a88f843e5815", "text": "When robots cooperate with humans it is necessary for robots to move safely on sudden impact. Joint torque sensing is vital for robots to realize safe behavior and enhance physical performance. Firstly, this paper describes a new torque sensor with linear encoders which demonstrates electro magnetic noise immunity and is unaffected temperature changes. Secondly, we propose a friction compensation method using a disturbance observer to improve the positioning accuracy. In addition, we describe a torque feedback control method which scales down the motor inertia and enhances the joint flexibility. Experimental results of the proposed controller are presented.", "title": "" }, { "docid": "a38e20a392e7f03509e29839196628d5", "text": "We investigate the hypothesis that the combination of three related innovations—1) information technology (IT), 2) complementary workplace reorganization, and 3) new products and services—constitute a significant skill-biased technical change affecting labor demand in the United States. Using detailed firm-level data, we find evidence of complementarities among all three of these innovations in factor demand and productivity regressions. In addition, firms that adopt these innovations tend to use more skilled labor. 
The effects of IT on labor demand are greater when IT is combined with the particular organizational investments we identify, highlighting the importance of IT-enabled organizational change.", "title": "" } ]
scidocsrr
0969241fe974828318168f75a54390ce
A statistical information extraction system for Turkish
[ { "docid": "5f606838b7158075a4b13871c5b6ec89", "text": "The sentence is a standard textual unit in natural language processing applications. In many languages the punctuation mark that indicates the end-of-sentence boundary is ambiguous; thus the tokenizers of most NLP systems must be equipped with special sentence boundary recognition rules for every new text collection. As an alternative, this article presents an efficient, trainable system for sentence boundary disambiguation. The system, called Satz, makes simple estimates of the parts of speech of the tokens immediately preceding and following each punctuation mark, and uses these estimates as input to a machine learning algorithm that then classifies the punctuation mark. Satz is very fast both in training and sentence analysis, and its combined robustness and accuracy surpass existing techniques. The system needs only a small lexicon and training corpus, and has been shown to transfer quickly and easily from English to other languages, as demonstrated on French and German.", "title": "" } ]
[ { "docid": "06e2fec87a501d234e494238cdff6eda", "text": "Dopamine (DA) is required for hippocampal-dependent memory and long-term potentiation (LTP) at CA1 Schaffer collateral (SC) synapses. It is therefore surprising that exogenously applied DA has little effect on SC synapses, but suppresses CA1 perforant path (PP) inputs. To examine DA actions under more physiological conditions, we used optogenetics to release DA from ventral tegmental area inputs to hippocampus. Unlike exogenous DA application, optogenetic release of DA caused a bidirectional, activity-dependent modulation of SC synapses, with no effect on PP inputs. Low levels of DA release, simulating tonic DA neuron firing, depressed the SC response through a D4 receptor–dependent enhancement of feedforward inhibition mediated by parvalbumin-expressing interneurons. Higher levels of DA release, simulating phasic firing, increased SC responses through a D1 receptor–dependent enhancement of excitatory transmission. Thus, tonic-phasic transitions in DA neuron firing in response to motivational demands may cause a modulatory switch from inhibition to enhancement of hippocampal information flow.", "title": "" }, { "docid": "d65ccb1890bdc597c19d11abad6ae7af", "text": "The traditional view of agent modelling is to infer the explicit parameters of another agent’s strategy (i.e., their probability of taking each action in each situation). Unfortunately, in complex domains with high dimensional strategy spaces, modelling every parameter often requires a prohibitive number of observations. Furthermore, given a model of such a strategy, computing a response strategy that is robust to modelling error may be impractical to compute online. Instead, we propose an implicit modelling framework where agents aim to estimate the utility of a fixed portfolio of pre-computed strategies. Using the domain of heads-up limit Texas hold’em poker, this work describes an end-to-end approach for building an implicit modelling agent. We compute robust response strategies, show how to select strategies for the portfolio, and apply existing variance reduction and online learning techniques to dynamically adapt the agent’s strategy to its opponent. We validate the approach by showing that our implicit modelling agent would have won the heads-up limit opponent exploitation event in the 2011 Annual Computer Poker Competition.", "title": "" }, { "docid": "7b7289900ac45f4ee5357084f16a4c0d", "text": "We present a simple and accurate span-based model for semantic role labeling (SRL). Our model directly takes into account all possible argument spans and scores them for each label. At decoding time, we greedily select higher scoring labeled spans. One advantage of our model is to allow us to design and use spanlevel features, that are difficult to use in tokenbased BIO tagging approaches. Experimental results demonstrate that our ensemble model achieves the state-of-the-art results, 87.4 F1 and 87.0 F1 on the CoNLL-2005 and 2012 datasets, respectively.", "title": "" }, { "docid": "0dd9fc4317dc99a2ca55a822cfc5c36e", "text": "Recently, research has shown that it is possible to spoof a variety of fingerprint scanners using some simple techniques with molds made from plastic, clay, Play-Doh, silicone or gelatin materials. To protect against spoofing, methods of liveness detection measure physiological signs of life from fingerprints ensuring only live fingers are captured for enrollment or authentication. 
In this paper, a new liveness detection method is proposed which is based on noise analysis along the valleys in the ridge-valley structure of fingerprint images. Unlike live fingers which have a clear ridge-valley structure, artificial fingers have a distinct noise distribution due to the material’s properties when placed on a fingerprint scanner. Statistical features are extracted in multiresolution scales using wavelet decomposition technique. Based on these features, liveness separation (live/non-live) is performed using classification trees and neural networks. We test this method on the dataset which contains about 58 live, 80 spoof (50 made from Play-Doh and 30 made from gelatin), and 25 cadaver subjects for 3 different scanners. Also, we test this method on a second dataset which contains 28 live and 28 spoof (made from silicone) subjects. Results show that we can get approximately 90.9-100% classification of spoof and live fingerprints. The proposed liveness detection method is purely software based and application of this method can provide anti-spoofing protection for fingerprint scanners.", "title": "" }, { "docid": "f6d08e76bfad9c4988253b643163671a", "text": "This paper proposes a technique for unwanted lane departure detection. Initially, lane boundaries are detected using a combination of the edge distribution function and a modified Hough transform. In the tracking stage, a linear-parabolic lane model is used: in the near vision field, a linear model is used to obtain robust information about lane orientation; in the far field, a quadratic function is used, so that curved parts of the road can be efficiently tracked. For lane departure detection, orientations of both lane boundaries are used to compute a lane departure measure at each frame, and an alarm is triggered when such measure exceeds a threshold. Experimental results indicate that the proposed system can fit lane boundaries in the presence of several image artifacts, such as sparse shadows, lighting changes and bad conditions of road painting, being able to detect in advance involuntary lane crossings.", "title": "" }, { "docid": "00801556f47ccd22804a81babd53dca7", "text": "BACKGROUND\nFood product reformulation is seen as one among several tools to promote healthier eating. Reformulating the recipe for a processed food, e.g. reducing the fat, sugar or salt content of the foods, or increasing the content of whole-grains, can help the consumers to pursue a healthier life style. In this study, we evaluate the effects on calorie sales of a 'silent' reformulation strategy, where a retail chain's private-label brands are reformulated to a lower energy density without making specific claims on the product.\n\n\nMETHODS\nUsing an ecological study design, we analyse 52 weeks' sales data - enriched with data on products' energy density - from a Danish retail chain. Sales of eight product categories were studied. Within each of these categories, specific products had been reformulated during the 52 weeks data period. Using econometric methods, we decompose the changes in calorie turnover and sales value into direct and indirect effects of product reformulation.\n\n\nRESULTS\nFor all considered products, the direct effect of product reformulation was a reduction in the sale of calories from the respective product categories - between 0.5 and 8.2%. 
In several cases, the reformulation led to indirect substitution effects that were counterproductive with regard to reducing calorie turnover. However, except in two insignificant cases, these indirect substitution effects were dominated by the direct effect of the reformulation, leading to net reductions in calorie sales between -3.1 and 7.5%. For all considered product reformulations, the reformulation had either positive, zero or very moderate negative effects on the sales value of the product category to which the reformulated product belonged.\n\n\nCONCLUSIONS\nBased on these findings, 'silent' reformulation of retailer's private brands towards lower energy density seems to contribute to lowering the calorie intake in the population (although to a moderate extent) with moderate losses in retailer's sales revenues.", "title": "" }, { "docid": "9a438856b2cce32bf4e9bcbdc93795a2", "text": "By balancing the spacing effect against the effects of recency and frequency, this paper explains how practice may be scheduled to maximize learning and retention. In an experiment, an optimized condition using an algorithm determined with this method was compared with other conditions. The optimized condition showed significant benefits with large effect sizes for both improved recall and recall latency. The optimization method achieved these benefits by using a modeling approach to develop a quantitative algorithm, which dynamically maximizes learning by determining for each item when the balance between increasing temporal spacing (that causes better long-term recall) and decreasing temporal spacing (that reduces the failure related time cost of each practice) means that the item is at the spacing interval where long-term gain per unit of practice time is maximal. As practice repetitions accumulate for each item, items become stable in memory and this optimal interval increases.", "title": "" }, { "docid": "5d5c4225b67ad8ca31f2d4f005dfa6ce", "text": "Nurse residency programs have been developed with the goal of helping newly licensed nurses successfully transition to independent practice. The authors propose that all newly licensed nurses hired in acute care hospitals be required to complete an accredited residency program. An evidence table examines the state of the science related to transition-to-practice programs and provides the basis for recommendations.", "title": "" }, { "docid": "5554bea693ba285e74f72b8a7b13230a", "text": "Multitasking is the result of time allocation decisions made by individuals faced with multiple tasks. Multitasking research is important in order to improve the design of systems and applications. Since people typically use computers to perform multiple tasks at the same time, insights into this type of behavior can help develop better systems and ideal types of computer environments for modern multitasking users. In this paper, we define multitasking based on the principles of task independence and performance concurrency and develop a set of metrics for computer-based multitasking. The theoretical foundation of this metric development effort stems from an application of key principles of Activity Theory and a systematic analysis of computer usage from the perspective of the user, the task and the technology. The proposed metrics, which range from a lean dichotomous variable to a richer measure based on switches, were validated with data from a sample of users who self-reported their activities during a computer usage session. 
This set of metrics can be used to establish a conceptual and methodological foundation for future multitasking studies.", "title": "" }, { "docid": "338af8ad05468f3205c0078d56f5bd74", "text": "Once a color image is converted to grayscale, it is a common belief that the original color cannot be fully restored, even with the state-of-the-art colorization methods. In this paper, we propose an innovative method to synthesize invertible grayscale. It is a grayscale image that can fully restore its original color. The key idea here is to encode the original color information into the synthesized grayscale, in a way that users cannot recognize any anomalies. We propose to learn and embed the color-encoding scheme via a convolutional neural network (CNN). It consists of an encoding network to convert a color image to grayscale, and a decoding network to invert the grayscale to color. We then design a loss function to ensure the trained network possesses three required properties: (a) color invertibility, (b) grayscale conformity, and (c) resistance to quantization error. We have conducted intensive quantitative experiments and user studies over a large amount of color images to validate the proposed method. Regardless of the genre and content of the color input, convincing results are obtained in all cases.", "title": "" }, { "docid": "6f0ebd6314cd5c012f791d0e5c448045", "text": "This paper presents a framework of discriminative least squares regression (LSR) for multiclass classification and feature selection. The core idea is to enlarge the distance between different classes under the conceptual framework of LSR. First, a technique called ε-dragging is introduced to force the regression targets of different classes moving along opposite directions such that the distances between classes can be enlarged. Then, the ε-draggings are integrated into the LSR model for multiclass classification. Our learning framework, referred to as discriminative LSR, has a compact model form, where there is no need to train two-class machines that are independent of each other. With its compact form, this model can be naturally extended for feature selection. This goal is achieved in terms of L2,1 norm of matrix, generating a sparse learning model for feature selection. The model for multiclass classification and its extension for feature selection are finally solved elegantly and efficiently. Experimental evaluation over a range of benchmark datasets indicates the validity of our method.", "title": "" }, { "docid": "c1c961753459a914c04fc71988c48586", "text": "We have begun work on a framework for abstractive summarization and decided to focus on a module for text generation. For TAC 2010, we thus move away from sentence extraction. Each sentence in the summary we generate is based on a document sentence but it usually contains a smaller amount of information and uses fewer words. The system uses the output of a syntactic parser for a sentence and then regenerates part of the sentence using a Natural Language Generation engine. The sentences of the summary are selected among regenerated sentences based on the document frequency of contained words, while avoiding redundancy. Date and location were handled and generated especially for cluster categories 1 and 2. 
Even though our initial scores were not outstanding, we intend to continue work on this approach in the coming years.", "title": "" }, { "docid": "62425652b113c72c668cf9c73b7c8480", "text": "Knowledge graph (KG) completion aims to fill the missing facts in a KG, where a fact is represented as a triple in the form of (subject, relation, object). Current KG completion models compel two-thirds of a triple provided (e.g., subject and relation) to predict the remaining one. In this paper, we propose a new model, which uses a KG-specific multi-layer recurrent neural network (RNN) to model triples in a KG as sequences. It outperformed several state-of-the-art KG completion models on the conventional entity prediction task for many evaluation metrics, based on two benchmark datasets and a more difficult dataset. Furthermore, our model is enabled by the sequential characteristic and thus capable of predicting the whole triples only given one entity. Our experiments demonstrated that our model achieved promising performance on this new triple prediction task.", "title": "" }, { "docid": "d8979a16e0f04fbc0f1d8dbdc618c4f0", "text": "Bexarotene is an oral retinoid shown to be active against the cutaneous manifestations of cutaneous T-cell lymphoma (CTCL). Literature on the efficacy, dosing and side-effects of bexarotene is sparse. We present here data on 37 Finnish patients with CTCL treated with bexarotene during the last 10 years. Bexarotene was equally effective as monotherapy or when combined with other treatment modalities, resulting in overall responses of approximately 75%. Early-stage CTCL responded better than advanced-stage CTCL (83% vs. 33%). The mean time to observable response was 3 months and the mean duration of the response was 21 months. The dose of bexarotene was generally lower than recommended due to side-effects. Abrupt elevation of liver transaminases, resulting in cessation of treatment, was observed in 4 (11%) patients. We conclude that the dose of bexarotene should be titrated individually to achieve optimal results. Maintenance therapy with low-dose bexarotene is a feasible alternative.", "title": "" }, { "docid": "555edc1ec905a1e6dfd2bfb9c6acdd84", "text": "Half a decade after Bitcoin became the first widely used cryptocurrency, blockchains are receiving considerable interest from industry and the research community. Modern blockchains feature services such as name registration and smart contracts. Some employ new forms of consensus, such as proof-of-stake instead of proof-of-work. However, these blockchains are so far relatively poorly investigated, despite the fact that they move considerable assets. In this paper, we explore three representative, modern blockchains—Ethereum, Namecoin, and Peercoin. Our focus is on the features that set them apart from the pure currency use case of Bitcoin. We investigate the blockchains’ activity in terms of transactions and usage patterns, identifying some curiosities in the process. For Ethereum, we are mostly interested in the smart contract functionality it offers. We also carry out a brief analysis of issues that are introduced by negligent design of smart contracts. In the case of Namecoin, our focus is how the name registration is used and has developed over time. For Peercoin, we are interested in the use of proof-of-stake, as this consensus algorithm is poorly understood yet used to move considerable value. Finally, we relate the above to the fundamental characteristics of the underlying peer-to-peer networks. 
We present a crawler for Ethereum and give statistics on the network size. For Peercoin and Namecoin, we identify the relatively small size of the networks and the weak bootstrapping process.", "title": "" }, { "docid": "2746acb7d620802e949bef7fb855bfa7", "text": "Our research approach is to design and develop reliable, efficient, flexible, economical, real-time and realistic wellness sensor networks for smart home systems. The heterogeneous sensor and actuator nodes based on wireless networking technologies are deployed into the home environment. These nodes generate real-time data related to the object usage and movement inside the home, to forecast the wellness of an individual. Here, wellness stands for how efficiently someone stays fit in the home environment and performs his or her daily routine in order to live a long and healthy life. We initiate the research with the development of the smart home approach and implement it in different home conditions (different houses) to monitor the activity of an inhabitant for wellness detection. Additionally, our research extends the smart home system to smart buildings and models the design issues related to the smart building environment; these design issues are linked with system performance and reliability. This research paper also discusses and illustrates the possible mitigation to handle the ISM band interference and attenuation losses without compromising optimum system performance.", "title": "" }, { "docid": "d1e263fe19496590e1c6e5e64de1ac20", "text": "Deformable image registration is an important tool in medical image analysis. In the case of lung computed tomography (CT) registration there are three major challenges: large motion of small features, sliding motions between organs, and changing image contrast due to compression. Recently, Markov random field (MRF)-based discrete optimization strategies have been proposed to overcome problems involved with continuous optimization for registration, in particular its susceptibility to local minima. However, to date the simplifications made to obtain tractable computational complexity reduced the registration accuracy. We address these challenges and preserve the potentially higher quality of discrete approaches with three novel contributions. First, we use an image-derived minimum spanning tree as a simplified graph structure, which copes well with the complex sliding motion and allows us to find the global optimum very efficiently. Second, a stochastic sampling approach for the similarity cost between images is introduced within a symmetric, diffeomorphic B-spline transformation model with diffusion regularization. The complexity is reduced by orders of magnitude and enables the minimization of much larger label spaces. In addition to the geometric transform labels, hyper-labels are introduced, which represent local intensity variations in this task, and allow for the direct estimation of lung ventilation. We validate the improvements in accuracy and performance on exhale-inhale CT volume pairs using a large number of expert landmarks.", "title": "" }, { "docid": "0c529c9a9f552f89e0c0ad3e000cbd37", "text": "In this article, I introduce an emotion paradox: People believe that they know an emotion when they see it, and as a consequence assume that emotions are discrete events that can be recognized with some degree of accuracy, but scientists have yet to produce a set of clear and consistent criteria for indicating when an emotion is present and when it is not. 
I propose one solution to this paradox: People experience an emotion when they conceptualize an instance of affective feeling. In this view, the experience of emotion is an act of categorization, guided by embodied knowledge about emotion. The result is a model of emotion experience that has much in common with the social psychological literature on person perception and with literature on embodied conceptual knowledge as it has recently been applied to social psychology.", "title": "" }, { "docid": "00e8fe5b7a7ad5cd7a743533a444f2e5", "text": "In this paper we present a novel instance segmentation algorithm that extends a fully convolutional network to learn to label objects separately without prediction of regions of interest. We trained the new algorithm on a challenging CCTV recording of beef cattle, as well as benchmark MS COCO and Pascal VOC datasets. Extensive experimentation showed that our approach outperforms the state-of-the-art solutions by up to 8% on our data.", "title": "" } ]
scidocsrr
2cd2b097db4c9c03aefda20a73bb637e
Feature selection using genetic algorithms for premature ventricular contraction classification
[ { "docid": "45be193fe04064886615367dd9225c92", "text": "Automatic electrocardiogram (ECG) beat classification is essential to timely diagnosis of dangerous heart conditions. Specifically, accurate detection of premature ventricular contractions (PVCs) is imperative to prepare for the possible onset of life-threatening arrhythmias. Although many groups have developed highly accurate algorithms for detecting PVC beats, results have generally been limited to relatively small data sets. Additionally, many of the highest classification accuracies (>90%) have been achieved in experiments where training and testing sets overlapped significantly. Expanding the overall data set greatly reduces overall accuracy due to significant variation in ECG morphology among different patients. As a result, we believe that morphological information must be coupled with timing information, which is more constant among patients, in order to achieve high classification accuracy for larger data sets. With this approach, we combined wavelet-transformed ECG waves with timing information as our feature set for classification. We used select waveforms of 18 files of the MIT/BIH arrhythmia database, which provides an annotated collection of normal and arrhythmic beats, for training our neural-network classifier. We then tested the classifier on these 18 training files as well as 22 other files from the database. The accuracy was 95.16% over 93,281 beats from all 40 files, and 96.82% over the 22 files outside the training set in differentiating normal, PVC, and other beats", "title": "" }, { "docid": "b53c46bc41237333f68cf96208d0128c", "text": "Practical pattern classi cation and knowledge discovery problems require selection of a subset of attributes or features (from a much larger set) to represent the patterns to be classi ed. This paper presents an approach to the multi-criteria optimization problem of feature subset selection using a genetic algorithm. Our experiments demonstrate the feasibility of this approach for feature subset selection in the automated design of neural networks for pattern classi cation and knowledge discovery.", "title": "" } ]
[ { "docid": "fb2ff96dbfe584f450dd19f8d3cea980", "text": "[1] Nondestructive imaging methods such as X-ray computed tomography (CT) yield high-resolution, three-dimensional representations of pore space and fluid distribution within porous materials. Steadily increasing computational capabilities and easier access to X-ray CT facilities have contributed to a recent surge in microporous media research with objectives ranging from theoretical aspects of fluid and interfacial dynamics at the pore scale to practical applications such as dense nonaqueous phase liquid transport and dissolution. In recent years, significant efforts and resources have been devoted to improve CT technology, microscale analysis, and fluid dynamics simulations. However, the development of adequate image segmentation methods for conversion of gray scale CT volumes into a discrete form that permits quantitative characterization of pore space features and subsequent modeling of liquid distribution and flow processes seems to lag. In this paper we investigated the applicability of various thresholding and locally adaptive segmentation techniques for industrial and synchrotron X-ray CT images of natural and artificial porous media. A comparison between directly measured and image-derived porosities clearly demonstrates that the application of different segmentation methods as well as associated operator biases yield vastly differing results. This illustrates the importance of the segmentation step for quantitative pore space analysis and fluid dynamics modeling. Only a few of the tested methods showed promise for both industrial and synchrotron tomography. Utilization of local image information such as spatial correlation as well as the application of locally adaptive techniques yielded significantly better results.", "title": "" }, { "docid": "bc0294e230abff5c47d5db0d81172bbc", "text": "Pulse radiolysis experiments were used to characterize the intermediates formed from ibuprofen during electron beam irradiation in a solution of 0.1mmoldm(-3). For end product characterization (60)Co γ-irradiation was used and the samples were evaluated either by taking their UV-vis spectra or by HPLC with UV or MS detection. The reactions of OH resulted in hydroxycyclohexadienyl type radical intermediates. The intermediates produced in further reactions hydroxylated the derivatives of ibuprofen as final products. The hydrated electron attacked the carboxyl group. Ibuprofen degradation is more efficient under oxidative conditions than under reductive conditions. The ecotoxicity of the solution was monitored by Daphnia magna standard microbiotest and Vibrio fischeri luminescent bacteria test. The toxic effect of the aerated ibuprofen solution first increased upon irradiation indicating a higher toxicity of the first degradation products, then decreased with increasing absorbed dose.", "title": "" }, { "docid": "3d739c3679ef22679ceddce6b3912e83", "text": "In this paper, we propose a method for semantic parsing the 3D point cloud of an entire building using a hierarchical approach: first, the raw data is parsed into semantically meaningful spaces (e.g. rooms, etc) that are aligned into a canonical reference coordinate system. Second, the spaces are parsed into their structural and building elements (e.g. walls, columns, etc). Performing these with a strong notation of global 3D space is the backbone of our method. 
The alignment in the first step injects strong 3D priors from the canonical coordinate system into the second step for discovering elements. This allows diverse challenging scenarios as man-made indoor spaces often show recurrent geometric patterns while the appearance features can change drastically. We also argue that identification of structural elements in indoor spaces is essentially a detection problem, rather than segmentation which is commonly used. We evaluated our method on a new dataset of several buildings with a covered area of over 6, 000m2 and over 215 million points, demonstrating robust results readily useful for practical applications.", "title": "" }, { "docid": "b4743e08bf9b20e3d82a77229cced73d", "text": "Spatial memory tasks, performance of which is known to be sensitive to hippocampal lesions in the rat, or to medial temporal lesions in the human, were administered in order to investigate the effects of selective damage to medial temporal lobe structures of the human brain. The patients had undergone thermo-coagulation with a single electrode along the amygdalo-hippocampal axis in an attempt to alleviate their epilepsy. With this surgical technique, lesions to single medial temporal lobe structures can be carried out. The locations of the lesions were assessed by means of digital high-resolution magnetic resonance imaging and software allowing a 3-D reconstruction of the brain. A break in the collateral sulcus, dividing it into the anterior collateral sulcus and the posterior collateral sulcus is reported. This division may correspond to the end of the entorhinal/perirhinal cortex and the start of the parahippocampal cortex. The results confirmed the role of the right hippocampus in visuo-spatial memory tasks (object location, Rey-Osterrieth Figure with and without delay) and the left for verbal memory tasks (Rey Auditory Verbal Learning Task with delay). However, patients with lesions either to the right or to the left hippocampus were unimpaired on several memory tasks, including a spatial one, with a 30 min delay, designed to be analogous to the Morris water maze. Patients with lesions to the right parahippocampal cortex were impaired on this task with a 30 min delay, suggesting that the parahippocampal cortex itself may play an important role in spatial memory.", "title": "" }, { "docid": "3b06ce783d353cff3cdbd9a60037162e", "text": "The ability to abstract principles or rules from direct experience allows behaviour to extend beyond specific circumstances to general situations. For example, we learn the ‘rules’ for restaurant dining from specific experiences and can then apply them in new restaurants. The use of such rules is thought to depend on the prefrontal cortex (PFC) because its damage often results in difficulty in following rules. Here we explore its neural basis by recording from single neurons in the PFC of monkeys trained to use two abstract rules. They were required to indicate whether two successively presented pictures were the same or different depending on which rule was currently in effect. The monkeys performed this task with new pictures, thus showing that they had learned two general principles that could be applied to stimuli that they had not yet experienced. 
The most prevalent neuronal activity observed in the PFC reflected the coding of these abstract rules.", "title": "" }, { "docid": "fe95e139aab1453750224bd856059fcf", "text": "IMPORTANCE\nChronic sinusitis is a common inflammatory condition defined by persistent symptomatic inflammation of the sinonasal cavities lasting longer than 3 months. It accounts for 1% to 2% of total physician encounters and is associated with large health care expenditures. Appropriate use of medical therapies for chronic sinusitis is necessary to optimize patient quality of life (QOL) and daily functioning and minimize the risk of acute inflammatory exacerbations.\n\n\nOBJECTIVE\nTo summarize the highest-quality evidence on medical therapies for adult chronic sinusitis and provide an evidence-based approach to assist in optimizing patient care.\n\n\nEVIDENCE REVIEW\nA systematic review searched Ovid MEDLINE (1947-January 30, 2015), EMBASE, and Cochrane Databases. The search was limited to randomized clinical trials (RCTs), systematic reviews, and meta-analyses. Evidence was categorized into maintenance and intermittent or rescue therapies and reported based on the presence or absence of nasal polyps.\n\n\nFINDINGS\nTwenty-nine studies met inclusion criteria: 12 meta-analyses (>60 RCTs), 13 systematic reviews, and 4 RCTs that were not included in any of the meta-analyses. Saline irrigation improved symptom scores compared with no treatment (standardized mean difference [SMD], 1.42 [95% CI, 1.01 to 1.84]; a positive SMD indicates improvement). Topical corticosteroid therapy improved overall symptom scores (SMD, -0.46 [95% CI, -0.65 to -0.27]; a negative SMD indicates improvement), improved polyp scores (SMD, -0.73 [95% CI, -1.0 to -0.46]; a negative SMD indicates improvement), and reduced polyp recurrence after surgery (relative risk, 0.59 [95% CI, 0.45 to 0.79]). Systemic corticosteroids and oral doxycycline (both for 3 weeks) reduced polyp size compared with placebo for 3 months after treatment (P < .001). Leukotriene antagonists improved nasal symptoms compared with placebo in patients with nasal polyps (P < .01). Macrolide antibiotic for 3 months was associated with improved QOL at a single time point (24 weeks after therapy) compared with placebo for patients without polyps (SMD, -0.43 [95% CI, -0.82 to -0.05]).\n\n\nCONCLUSIONS AND RELEVANCE\nEvidence supports daily high-volume saline irrigation with topical corticosteroid therapy as a first-line therapy for chronic sinusitis. A short course of systemic corticosteroids (1-3 weeks), short course of doxycycline (3 weeks), or a leukotriene antagonist may be considered in patients with nasal polyps. A prolonged course (3 months) of macrolide antibiotic may be considered for patients without polyps.", "title": "" }, { "docid": "0beb3bcdaf54ba896ba68e1733b3b3c8", "text": "systems use digital models to provide interactive viewing. We present a 3D digital video system that attempts to provide the same capabilities for actual performances such as dancing. Recreating the original dynamic scene in 3D, the system allows photorealistic interactive playback from arbitrary viewpoints using video streams of a given scene from multiple perspectives. V irtual reality can become a medium of great usefulness in entertainment and education if it can incorporate recordings of actual events. While much work has gone into creating synthetic environments that correspond to counterparts in the real world, few have attempted to incorporate real people and events into such environments. 
To achieve this, an event of interest must be truthfully captured by real-time sensors such as video. From this recording, faithful digital replications must then be created such that the original performances can be presented using standard computer graphics methods and viewed from arbitrary perspectives. We present a system for generating and replaying photorealistic 3D digital video sequences of real events and performances. 3D video embodies the truthfulness of video recordings and the inter-activity of 3D graphics (see Figure 1). This system employs Multiple Perspective Interactive Video (MPI-Video), 1 an infrastructure for the analysis and management of, and interactive access to, multiple video cameras monitoring a dynamically evolving scene such as a football game. The problem of virtual view creation, or view synthesis or interpolation of real scenes, has received increasing attention in recent years. Current approaches divide into two classes: image-based and model-based. Image-domain methods employ warping or morphing techniques to interpolate intermediate views from real images. Model-based methods first recover the geometry of the real scene; the resulting 3D model can then be rendered from desired viewpoints. Our method belongs to the latter class. Image-based methods The best-known image-domain method is Apple's QuickTime VR. 2 By capturing the 360-degree views (cylindrical panoramic images) of an environment from a fixed position, you can interactively adjust view orientation by rendering the corresponding portion of the panorama. Other approaches use image warping. Chen and Williams 3 determined camera transformations with pixel correspondences, then used morphing to generate intermediate views. Skerjanc and Liu 4 used known camera positions to obtain depth information and generate virtual views. Chang and Zakhor 5 obtained depth information by using an uncalibrated camera that \" scans \" a stationary scene and transforms points on camera image planes onto the plane of the …", "title": "" }, { "docid": "8f0801de787ccea72bb0c61aefbd0ec8", "text": "Recent fMRI studies demonstrated that functional connectivity is altered following cognitive tasks (e.g., learning) or due to various neurological disorders. We tested whether real-time fMRI-based neurofeedback can be a tool to voluntarily reconfigure brain network interactions. To disentangle learning-related from regulation-related effects, we first trained participants to voluntarily regulate activity in the auditory cortex (training phase) and subsequently asked participants to exert learned voluntary self-regulation in the absence of feedback (transfer phase without learning). Using independent component analysis (ICA), we found network reconfigurations (increases in functional network connectivity) during the neurofeedback training phase between the auditory target region and (1) the auditory pathway; (2) visual regions related to visual feedback processing; (3) insula related to introspection and self-regulation and (4) working memory and high-level visual attention areas related to cognitive effort. Interestingly, the auditory target region was identified as the hub of the reconfigured functional networks without a-priori assumptions. During the transfer phase, we again found specific functional connectivity reconfiguration between auditory and attention network confirming the specific effect of self-regulation on functional connectivity. 
Functional connectivity to working memory related networks was no longer altered consistent with the absent demand on working memory. We demonstrate that neurofeedback learning is mediated by widespread changes in functional connectivity. In contrast, applying learned self-regulation involves more limited and specific network changes in an auditory setup intended as a model for tinnitus. Hence, neurofeedback training might be used to promote recovery from neurological disorders that are linked to abnormal patterns of brain connectivity.", "title": "" }, { "docid": "7ff0020129c7887bfa33c1477f4e3069", "text": "This paper investigates learning a ranking function using pairwise constraints in the context of human-machine interaction. As the performance of a learnt ranking model is predominantly determined by the quality and quantity of training data, in this work we explore an active learning to rank approach. Furthermore, since humans may not be able to confidently provide an order for a pair of similar instances we explore two types of pairwise supervision: (i) a set of “strongly” ordered pairs which contains confidently ranked instances, and (ii) a set of “weakly” ordered pairs which consists of similar or closely ranked instances. Our active knowledge injection is performed by querying domain experts on pairwise orderings, where informative pairs are located by considering both local and global uncertainties. Under this active scheme, querying of pairs which are uninformative or outliers instances would not occur. We evaluate the proposed approach on three real world datasets and compare with representative methods. The promising experimental results demonstrate the superior performance of our approach, and validate the effectiveness of actively using pairwise orderings to improve ranking performance.", "title": "" }, { "docid": "81c3ad88730ee76769aef8a4fd24fb87", "text": "On-chip decoupling capacitors (decaps) in the form of MOS transistors are widely used to reduce power supply noise. This paper provides guidelines for standard cell layouts of decaps for use within Intellectual Property (IP) blocks in application-specific integrated circuit (ASIC) designs. At 90-nm CMOS technology and below, a tradeoff exists between high-frequency effects and electrostatic discharge (ESD) reliability when designing the layout of such decaps. In this paper, the high-frequency effects are modeled using simple equations. A metric is developed to determine the optimal number of fingers based on the frequency response. Then, a cross-coupled design is described that has been recently introduced by cell library developers to handle ESD problems. Unfortunately, it suffers from poor response times due the large resistance inherent in its design. Improved cross-coupled designs are presented that properly balance issues of frequency response with ESD performance, while greatly reducing thin-oxide gate leakage.", "title": "" }, { "docid": "12d6aab2ecf0802fd59b77ed8a209e99", "text": "This paper reviews the econometric issues in efforts to estimate the impact of the death penalty on murder, focusing on six recent studies published since 2003. We highlight the large number of choices that must be made when specifying the various panel data models that have been used to address this question. 
There is little clarity about the knowledge potential murderers have concerning the risk of execution: are they influenced by the passage of a death penalty statute, the number of executions in a state, the proportion of murders in a state that leads to an execution, and details about the limited types of murders that are potentially susceptible to a sentence of death? If an execution rate is a viable proxy, should it be calculated using the ratio of last year’s executions to last year’s murders, last year’s executions to the murders a number of years earlier, or some other values? We illustrate how sensitive various estimates are to these choices. Importantly, the most up-to-date OLS panel data studies generate no evidence of a deterrent effect, while three 2SLS studies purport to find such evidence. The 2SLS studies, none of which shows results that are robust to clustering their standard errors, are unconvincing because they all use a problematic structure based on poorly measured and theoretically inappropriate pseudo-probabilities that are", "title": "" }, { "docid": "fb522eaaaed2f1b1f7fcddf958d2f617", "text": "Image Processing is a technique to enhance raw images received from cameras/sensors placed on satellites, space probes and aircraft, or pictures taken in normal day-to-day life, for various applications. Various techniques have been developed in image processing during the last four to five decades. Most of the techniques were developed for enhancing images obtained from unmanned spacecraft, space probes and military reconnaissance flights. Image processing systems are becoming popular due to the easy availability of powerful personal computers, large memory devices, graphics software, etc. Medical image segmentation & classification play an important role in the medical research field. The patient CT lung images are classified into normal and abnormal categories. Then, the abnormal images are subjected to segmentation to view the tumor portion. Classification depends on the features extracted from the images. We mainly concentrate on the feature extraction stage to yield better classification performance. Texture-based features such as GLCM (Gray Level Co-occurrence Matrix) features play an important role in medical image analysis. In total, 12 different statistical features were extracted. To select the discriminative features among them, we use a sequential forward selection algorithm. Afterwards, we adopt a multinomial multivariate Bayesian classifier for the classification stage. Classifier performance is then analysed further. The effectiveness of the modified weighted FCM algorithm in terms of computational rate is improved by modifying the cluster center and membership value updating criterion. The objective of this paper is to achieve a perfect classification by the multivariate multinomial Bayesian classifier.", "title": "" }, { "docid": "af5f7910be8cbc67ac3aa0e81c8c2bd3", "text": "Manlio De Domenico, Albert Solé-Ribalta, Emanuele Cozzo, Mikko Kivelä, Yamir Moreno, Mason A.
Porter, Sergio Gómez, and Alex Arenas Departament d’Enginyeria Informàtica i Matemàtiques, Universitat Rovira i Virgili, 43007 Tarragona, Spain Institute for Biocomputation and Physics of Complex Systems (BIFI), University of Zaragoza, Zaragoza 50018, Spain Oxford Centre for Industrial and Applied Mathematics, Mathematical Institute, University of Oxford, Oxford OX1 3LB, United Kingdom Department of Theoretical Physics, University of Zaragoza, Zaragoza 50009, Spain Complex Networks and Systems Lagrange Lab, Institute for Scientific Interchange, Turin 10126, Italy Oxford Centre for Industrial and Applied Mathematics, Mathematical Institute and CABDyN Complexity Centre, University of Oxford, Oxford OX1 3LB, United Kingdom (Received 23 July 2013; published 4 December 2013)", "title": "" }, { "docid": "0f4750f3998766e8f2a506a2d432f3bf", "text": "At present, the sustainability of fashion is a major issue worldwide, and much of the current concern is in favor of fashion's sustainability around the world. Many organizations and fashion-conscious personalities have come forward to extend the campaign for a better environment for tomorrow. At the same time, the morality and ethics of fashion are key concepts from the humanity and sustainability points of view. The main objective of this study is to examine the sustainability concerns of fashion companies and their policies. In this paper, the brands concerned are examined on the basis of their present fashion-related activities, from manufacturing to marketing. In most cases, celebrities are at the forefront of upholding fashion sustainability. For the conservation of the environment, the sustainability of fashion is an urgent need in today's fast-growing world, and fashion is now considered a vital issue from an ecological and moral perspective. The research is based on a rigorous study of the available reading materials. The data have been gathered from various sources, mainly academic literature, research articles, conference articles, PhD theses, and undergraduate and postgraduate dissertations, and a qualitative research method approach has been adopted for this research. For the convenience of the reader and future researchers, the analysis and findings are presented together.", "title": "" }, { "docid": "2a244146b1cf3433b2e506bdf966e134", "text": "The rate of detection of thyroid nodules and carcinomas has increased with the widespread use of ultrasonography (US), which is the mainstay for the detection and risk stratification of thyroid nodules as well as for providing guidance for their biopsy and nonsurgical treatment. The Korean Society of Thyroid Radiology (KSThR) published their first recommendations for the US-based diagnosis and management of thyroid nodules in 2011. These recommendations have been used as the standard guidelines for the past several years in Korea. Lately, the application of US has been further emphasized for the personalized management of patients with thyroid nodules. The Task Force on Thyroid Nodules of the KSThR has revised the recommendations for the ultrasound diagnosis and imaging-based management of thyroid nodules.
The review and recommendations in this report have been based on a comprehensive analysis of the current literature and the consensus of experts.", "title": "" }, { "docid": "fbe58cc0d6a3a93bbc64e60661346099", "text": "Most automatic expression analysis systems attempt to recognize a small set of prototypic expressions (e.g. happiness and anger). Such prototypic expressions, however, occur infrequently. Human emotions and intentions are communicated more often by changes in one or two discrete facial features. In this paper, we develop an automatic system to analyze subtle changes in facial expressions based on both permanent (e.g. mouth, eye, and brow) and transient (e.g. furrows and wrinkles) facial features in a nearly frontal image sequence. Multi-state facial component models are proposed for tracking and modeling different facial features. Based on these multi-state models, and without artificial enhancement, we detect and track the facial features, including mouth, eyes, brow, cheeks, and their related wrinkles and facial furrows. Moreover we recover detailed parametric descriptions of the facial features. With these features as the inputs, 11 individual action units or action unit combinations are recognized by a neural network algorithm. A recognition rate of 96.7% is obtained. The recognition results indicate that our system can identify action units regardless of whether they occurred singly or in combinations.", "title": "" }, { "docid": "9ba6656cb67dcb72d4ebadcaf9450f40", "text": "OBJECTIVE\nThe Japan Ankylosing Spondylitis Society conducted a nationwide questionnaire survey of spondyloarthropathies (SpA) in 1990 and 1997, (1) to estimate the prevalence and incidence, and (2) to validate the criteria of Amor and the European Spondylarthropathy Study Group (ESSG) in Japan.\n\n\nMETHODS\nJapan was divided into 9 districts, to each of which a survey supervisor was assigned. According to unified criteria, each supervisor selected all the clinics and hospitals with potential for SpA patients in the district. The study population consisted of all patients with SpA seen at these institutes during a 5 year period (1985-89) for the 1st survey and a 7 year period (1990-96) for the 2nd survey.\n\n\nRESULTS\nThe 1st survey recruited 426 and the 2nd survey 638 cases, 74 of which were registered in both studies. The total number of patients with SpA identified 1985-96 was 990 (760 men, 227 women). They consisted of patients with ankylosing spondylitis (68.3%), psoriatic arthritis (12.7%), reactive arthritis (4.0%), undifferentiated SpA (5.4%), inflammatory bowel disease (2.2%), pustulosis palmaris et plantaris (4.7%), and others (polyenthesitis, etc.) (0.8%). The maximum onset number per year was 49. With the assumption that at least one-tenth of the Japanese population with SpA was recruited, incidence and prevalence were estimated not to exceed 0.48/100,000 and 9.5/100,000 person-years, respectively. The sensitivity was 84.0% for Amor criteria and 84.6 for ESSG criteria.\n\n\nCONCLUSION\nThe incidence and prevalence of SpA in Japanese were estimated to be less than 1/10 and 1/200, respectively, of those among Caucasians. The adaptability of the Amor and ESSG criteria was validated for the Japanese population.", "title": "" }, { "docid": "4432b8022f49b8cffed8fb6800a98a48", "text": "6 Recommendation systems play an extremely important role in e-commerce; 7 by recommending products that suit the taste of the consumers, e-commerce 8 companies can generate large profits. 
The most commonly used recommender systems typically produce a list of recommendations through collaborative or content-based filtering; neither of those approaches takes into account the content of the written reviews, which contain rich information about the user's taste. In this paper, we evaluate the performance of ten different recurrent neural network (RNN) structures on the task of generating recommendations using written reviews. The RNN structures we study include well-known implementations such as Multi-stacked bi-directional Gated Recurrent Unit (GRU) and Long Short-Term Memory (LSTM) as well as a novel implementation of an attention-based RNN structure. The attention-based structures are not only among the best models in terms of prediction accuracy, they also assign an attention weight to each word in the review; by plotting the attention weight of each word we gain additional insight into the underlying mechanisms involved in the prediction process. We develop and test the recommendation systems using the data provided by the Yelp Data Challenge.", "title": "" }, { "docid": "91ef2853e45d9b82f92689e0b01e6d63", "text": "BACKGROUND\nThis study sought to evaluate the efficacy of nonoperative compression in correcting pectus carinatum in children.\n\n\nMATERIALS AND METHODS\nChildren presenting with pectus carinatum between August 1999 and January 2004 were prospectively enrolled in this study. The management protocol included custom compressive bracing, strengthening exercises, and frequent clinical follow-up.\n\n\nRESULTS\nThere were 30 children seen for evaluation. Their mean age was 13 years (range, 3-16 years) and there were 26 boys and 4 girls. Of the 30 original patients, 6 never returned to obtain the brace, leaving 24 patients in the study. Another 4 subjects were lost to follow-up. For the remaining 20 patients who have either completed treatment or continue in the study, the mean duration of bracing was 16 months, involving an average of 3 follow-up visits and 2 brace adjustments. Five of these patients had little or no improvement due to either too short a follow-up or noncompliance with the bracing. The other 15 patients (75%) had a significant to complete correction. There were no complications encountered during the study period.\n\n\nCONCLUSION\nCompressive orthotic bracing is a safe and effective alternative to both invasive surgical correction and no treatment for pectus carinatum in children. Compliance is critical to the success of this management strategy.", "title": "" } ]
scidocsrr
152d3288dd9bee2aa260e9e06dcf9215
Extended Bandwidth Piezoelectric Lorentz Force Magnetometer Based on a Mechanically Coupled Beam Resonator Array
[ { "docid": "0ad00a5bed02bf2deff12ad9c3dfd2c6", "text": "This letter presents a micromachined silicon Lorentz force magnetometer, which consists of a flexural beam resonator coupled to current-carrying silicon beams via a microleverage mechanism. The flexural beam resonator is a force sensor, which measures the magnetic field through resonant frequency shift induced by the Lorentz force, which acts as an axial load. Previous frequency-modulated Lorentz force magnetometers suffer from low sensitivity, limited by both fabrication restrictions and lack of a force amplification mechanism. In this letter, the microleverage mechanism amplifies the Lorentz force, thereby enhancing the sensitivity of the magnetometer by a factor of 42. The device has a measured sensitivity of 6687 ppm/(mA · T), which is two orders of magnitude larger than the prior state-of-the-art. The measured results agree with an analytical model and finite-element analysis. The frequency stability of the sensor is limited by the quality factor (Q) of 540, which can be increased through improved vacuum packaging.", "title": "" } ]
[ { "docid": "054337a29922a1b56d46d1d3f10bc414", "text": "The ability to automatically learn task specific feature representations has led to a huge success of deep learning methods. When large training data is scarce, such as in medical imaging problems, transfer learning has been very effective. In this paper, we systematically investigate the process of transferring a Convolutional Neural Network, trained on ImageNet images to perform image classification, to kidney detection problem in ultrasound images. We study how the detection performance depends on the extent of transfer. We show that a transferred and tuned CNN can outperform a state-of-the-art feature engineered pipeline and a hybridization of these two techniques achieves 20% higher performance. We also investigate how the evolution of intermediate response images from our network. Finally, we compare these responses to state-of-the-art image processing filters in order to gain greater insight into how transfer learning is able to effectively manage widely varying imaging regimes.", "title": "" }, { "docid": "0f7ac1ddba7acff683ad491bc3b6e8aa", "text": "In Bitcoin, transaction malleability describes the fact that the signatures that prove the ownership of bitcoins being transferred in a transaction do not provide any integrity guarantee for the signatures themselves. This allows an attacker to mount a malleability attack in which it intercepts, modifies, and rebroadcasts a transaction, causing the transaction issuer to believe that the original transaction was not confirmed. In February 2014 MtGox, once the largest Bitcoin exchange, closed and filed for bankruptcy claiming that attackers used malleability attacks to drain its accounts. In this work we use traces of the Bitcoin network for over a year preceding the filing to show that, while the problem is real, there was no widespread use of malleability attacks before the closure of MtGox.", "title": "" }, { "docid": "24957794ed251c2e970d787df6d87064", "text": "Glyph as a powerful multivariate visualization technique is used to visualize data through its visual channels. To visualize 3D volumetric dataset, glyphs are usually placed on 2D surface, such as the slicing plane or the feature surface, to avoid occluding each other. However, the 3D spatial structure of some features may be missing. On the other hand, placing large number of glyphs over the entire 3D space results in occlusion and visual clutter that make the visualization ineffective. To avoid the occlusion, we propose a view-dependent interactive 3D lens that removes the occluding glyphs by pulling the glyphs aside through the animation. We provide two space deformation models and two lens shape models to displace the glyphs based on their spatial distributions. After the displacement, the glyphs around the user-interested region are still visible as the context information, and their spatial structures are preserved. Besides, we attenuate the brightness of the glyphs inside the lens based on their depths to provide more depth cue. Furthermore, we developed an interactive glyph visualization system to explore different glyph-based visualization applications. In the system, we provide a few lens utilities that allows users to pick a glyph or a feature and look at it from different view directions. 
We compare different display/interaction techniques to visualize/manipulate our lens and glyphs.", "title": "" }, { "docid": "6ad8da8198b1f61dfe0dc337781322d9", "text": "A model of human speech quality perception has been developed to provide an objective measure for predicting subjective quality assessments. The Virtual Speech Quality Objective Listener (ViSQOL) model is a signal based full reference metric that uses a spectro-temporal measure of similarity between a reference and a test speech signal. This paper describes the algorithm and compares the results with PESQ for common problems in VoIP: clock drift, associated time warping and jitter. The results indicate that ViSQOL is less prone to underestimation of speech quality in both scenarios than the ITU standard.", "title": "" }, { "docid": "e50ce59ede6ad5c7a89309aed6aa06aa", "text": "In this paper, we discuss our ongoing efforts to construct a scientific paper browsing system that helps users to read and understand advanced technical content distributed in PDF. Since PDF is a format specifically designed for printing, layout and logical structures of documents are indistinguishably embedded in the file. It requires much effort to extract natural language text from PDF files, and reversely, display semantic annotations produced by NLP tools on the original page layout. In our browsing system, we tackle these issues caused by the gap between printable document and plain text. Our system provides ways to extract natural language sentences from PDF files together with their logical structures, and also to map arbitrary textual spans to their corresponding regions on page images. We setup a demonstration system using papers published in ACL anthology and demonstrate the enhanced search and refined recommendation functions which we plan to make widely available to NLP researchers.", "title": "" }, { "docid": "5969b69858c7f7e7836db2f9d1276b87", "text": "Intelligent tutoring systems (ITSs) acquire rich data about students behavior during learning; data mining techniques can help to describe, interpret and predict student behavior, and to evaluate progress in relation to learning outcomes. This paper surveys a variety of data mining techniques for analyzing how students interact with ITSs, including methods for handling hidden state variables, and for testing hypotheses. To illustrate these methods we draw on data from two ITSs for math instruction. Educational datasets provide new challenges to the data mining community, including inducing action patterns, designing distance metrics, and inferring unobservable states associated with learning.", "title": "" }, { "docid": "8bdc4b79e71f8bb9f001c99ec3b5e039", "text": "The \"tragedy of the commons\" metaphor helps explain why people overuse shared resources. However, the recent proliferation of intellectual property rights in biomedical research suggests a different tragedy, an \"anticommons\" in which people underuse scarce resources because too many owners can block each other. Privatization of biomedical research must be more carefully deployed to sustain both upstream research and downstream product development. Otherwise, more intellectual property rights may lead paradoxically to fewer useful products for improving human health.", "title": "" }, { "docid": "116ab901f60a7282f8a2ea245c59b679", "text": "Image classification is a vital technology many people in all arenas of human life utilize. 
It is pervasive in every facet of the social, economic, and corporate spheres of influence, worldwide. This need for more accurate, detail-oriented classification increases the need for modifications, adaptations, and innovations to Deep learning algorithms. This paper uses Convolutional Neural Networks (CNN) to classify handwritten digits in the MNIST database, and scenes in the CIFAR-10 database. Our proposed method preprocesses the data in the wavelet domain to attain greater accuracy and comparable efficiency to the spatial domain processing. By separating the image into different subbands, important feature learning occurs over varying low to high frequencies. The fusion of the learned low and high frequency features, and processing the combined feature mapping results in an increase in the detection accuracy. Comparing the proposed methods to spatial domain CNN and Stacked Denoising Autoencoder (SDA), experimental findings reveal a substantial increase in accuracy.", "title": "" }, { "docid": "569ea63f1a523c4040e195b7eb9323e9", "text": "Doubt about the role of stretch reflexes in movement and posture control has remained in part because the questions of reflex “usefulness” and the postural “set” have not been adequately considered in the design of experimental paradigms. The intent of this study was to discover the stabilizing role of stretch reflexes acting upon the ankle musculature while human subjects performed stance tasks requiring several different postural “sets”. Task specific differences of reflex function were investigated by experiments in which the role of stretch reflexes to stabilize sway during stance could be altered to be useful, of no use, or inappropriate. Because the system has available a number of alternate inputs to posture (e.g., vestibular and visual), stretch reflex responses were in themselves not necessary to prevent a loss of balance. Nevertheless, 5 out of 12 subjects in this study used long-latency (120 msec) stretch reflexes to help reduce postural sway. Following an unexpected change in the usefulness of stretch reflexes, the 5 subjects progressively altered reflex gain during the succeeding 3–5 trials. Adaptive changes in gain were always in the sense to reduce sway, and therefore could be attenuating or facilitating the reflex response. Comparing subjects using the reflex with those not doing so, stretch reflex control resulted in less swaying when the task conditions were unchanging. However, the 5 subjects using reflex controls oftentimes swayed more during the first 3–5 trials after a change, when inappropriate responses were elicited. Four patients with clinically diagnosed cerebellar deficits were studied briefly. Among the stance tasks, their performance was similar to normal in some and significantly poorer in others. Their most significant deficit appeared to be the inability to adapt long-latency reflex gain following changes in the stance task. The study concludes with a discussion of the role of stretch reflexes within a hierarchy of controls ranging from muscle stiffness up to centrally initiated responses.", "title": "" }, { "docid": "232b960cc16aa558538858aefd0a7651", "text": "This paper presents a video-based solution for real time vehicle detection and counting system, using a surveillance camera mounted on a relatively high place to acquire the traffic video stream.The two main methods applied in this system are: the adaptive background estimation and the Gaussian shadow elimination. 
The former allows robust detection of moving objects, especially in complex scenes. The latter is based on the HSV color space, which is able to deal with shadows of different sizes and intensities. After these two operations, an image with the moving vehicles extracted is obtained, and counting is then performed by a method called the virtual detector.", "title": "" }, { "docid": "e9e7a68578f23b85bee9ebfe1b923f87", "text": "Low-density lipoprotein (LDL) is the most abundant and the most atherogenic class of cholesterol-carrying lipoproteins in human plasma. The level of plasma LDL is regulated by the LDL receptor, a cell surface glycoprotein that removes LDL from plasma by receptor-mediated endocytosis. Defects in the gene encoding the LDL receptor, which occur in patients with familial hypercholesterolemia, elevate the plasma LDL level and produce premature coronary atherosclerosis. The physiologically important LDL receptors are located primarily in the liver, where their number is regulated by the cholesterol content of the hepatocyte. When the cholesterol content of hepatocytes is raised by ingestion of diets high in saturated fat and cholesterol, LDL receptors fall and plasma LDL levels rise. Conversely, maneuvers that lower the cholesterol content of hepatocytes, such as ingestion of drugs that inhibit cholesterol synthesis (mevinolin or compactin) or prevent the reutilization of bile acids (cholestyramine or colestipol), stimulate LDL receptor production and lower plasma LDL levels. The normal process of receptor regulation can therefore be exploited in powerful and novel ways so as to reverse hypercholesterolemia and prevent atherosclerosis.", "title": "" }, { "docid": "e9c6b36e06699d26daffb073648ba195", "text": "Fine-grained entity type classification (FETC) is the task of classifying an entity mention to a broad set of types. The distant supervision paradigm is extensively used to generate training data for this task. However, the generated training data assigns the same set of labels to every mention of an entity without considering its local context. Existing FETC systems have two major drawbacks: assuming the training data to be noise-free and the use of hand-crafted features. Our work overcomes both drawbacks. We propose a neural network model that jointly learns entity mentions and their context representation to eliminate the use of hand-crafted features. Our model treats the training data as noisy and uses a non-parametric variant of the hinge loss function. Experiments show that the proposed model outperforms previous state-of-the-art methods on two publicly available datasets, namely FIGER(GOLD) and BBN, with an average relative improvement of 2.69% in micro-F1 score. Knowledge learnt by our model on one dataset can be transferred to other datasets while using the same model or other FETC systems. These approaches of transferring knowledge further improve the performance of respective models.", "title": "" }, { "docid": "740c3b23904fb05384f0d58c680ea310", "text": "Huge amounts of data on the internet are in unstructured text that cannot simply be used for further processing by computers; therefore, specific processing methods and algorithms are required to extract useful patterns. Text mining is the process of extracting information from such unstructured data. Text classification is the task of automatically sorting a set of documents into categories from a predefined set. A major difficulty of text classification is the high dimensionality of the feature space. Feature selection methods are used for dimension reduction.
This paper describes the text classification process, compares various classifiers, and also discusses feature selection methods for solving the problem of high-dimensional data, as well as applications of text classification.", "title": "" }, { "docid": "1b802879e554140e677020e379b866c1", "text": "This study investigated vertical versus shared leadership as predictors of the effectiveness of 71 change management teams. Vertical leadership stems from an appointed or formal leader of a team, whereas shared leadership (C. L. Pearce, 1997; C. L. Pearce & J. A. Conger, in press; C. L. Pearce & H. P. Sims, 2000) is a group process in which leadership is distributed among, and stems from, team members. Team effectiveness was measured approximately 6 months after the assessment of leadership and was also measured from the viewpoints of managers, internal customers, and team members. Using multiple regression, the authors found both vertical and shared leadership to be significantly related to team effectiveness (p < .05), although shared leadership appears to be a more useful predictor of team effectiveness than vertical leadership.", "title": "" }, { "docid": "8ca3fe42e8a59262f319b995309cbd60", "text": "Deep neural networks (DNNs) are used by different applications that are executed on a range of computer architectures, from IoT devices to supercomputers. The footprint of these networks is huge, as are their computational and communication needs. In order to ease the pressure on resources, research indicates that in many cases a low-precision representation (1-2 bits per parameter) of weights and other parameters can achieve similar accuracy while requiring fewer resources. Using quantized values enables the use of FPGAs to run NNs, since FPGAs are well fitted to these primitives; e.g., FPGAs provide efficient support for bitwise operations and can work with arbitrary-precision representation of numbers. This paper presents a new streaming architecture for running QNNs on FPGAs. The proposed architecture scales out better than alternatives, allowing us to take advantage of systems with multiple FPGAs. We also included support for skip connections, which are used in state-of-the-art NNs, and show that our architecture allows us to add those connections almost for free. All this allowed us to implement an 18-layer ResNet for 224×224 image classification, achieving 57.5% top-1 accuracy. In addition, we implemented a full-sized quantized AlexNet. In contrast to previous works, we use 2-bit activations instead of 1-bit ones, which improves AlexNet's top-1 accuracy from 41.8% to 51.03% for the ImageNet classification. Both AlexNet and ResNet can handle 1000-class real-time classification on an FPGA. Our implementation of ResNet-18 consumes 5× less power and is 4× slower for ImageNet, when compared to the same NN on the latest Nvidia GPUs. Smaller NNs that fit a single FPGA run faster than on GPUs on small (32×32) inputs, while consuming up to 20× less energy and power.", "title": "" }, { "docid": "f4d9190ad9123ddcf809f47c71225162", "text": "Selection of appropriate suppliers in supply chain management strategy (SCMS) is a challenging issue because it requires a battery of evaluation criteria/attributes, which are characterized by complexity, elusiveness, and uncertainty in nature. This paper proposes a novel hierarchical evaluation framework to assist the expert group in selecting the optimal supplier in SCMS.
The rationales for the evaluation framework are based upon (i) multi-criteria decision making (MCDM) analysis that can select the most appropriate alternative from a finite set of alternatives with reference to multiple conflicting criteria, (ii) analytic network process (ANP) technique that can simultaneously take into account the relationships of feedback and dependence of criteria, and (iii) choquet integral—a non-additive fuzzy integral that can eliminate the interactivity of expert subjective judgment problems. A case PCB manufacturing firm is studied and the results indicated that the proposed evaluation framework is simple and reasonable to identify the primary criteria influencing the SCMS, and it is effective to determine the optimal supplier even with the interactive and interdependent criteria/attributes. This hierarchical evaluation framework provides a complete picture in SCMS contexts to both researchers and practitioners. 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "d7e2d2d3d25d7c4d09e348b93be23011", "text": "Bandit based methods for tree search have recently gained popularity when applied to huge trees, e.g. in the game of go [6]. Their efficient exploration of the tree enables to return rapidly a good value, and improve precision if more time is provided. The UCT algorithm [8], a tree search method based on Upper Confidence Bounds (UCB) [2], is believed to adapt locally to the effective smoothness of the tree. However, we show that UCT is “over-optimistic” in some sense, leading to a worst-case regret that may be very poor. We propose alternative bandit algorithms for tree search. First, a modification of UCT using a confidence sequence that scales exponentially in the horizon depth is analyzed. We then consider Flat-UCB performed on the leaves and provide a finite regret bound with high probability. Then, we introduce and analyze a Bandit Algorithm for Smooth Trees (BAST) which takes into account actual smoothness of the rewards for performing efficient “cuts” of sub-optimal branches with high confidence. Finally, we present an incremental tree expansion which applies when the full tree is too big (possibly infinite) to be entirely represented and show that with high probability, only the optimal branches are indefinitely developed. We illustrate these methods on a global optimization problem of a continuous function, given noisy values.", "title": "" }, { "docid": "ee46ee9e45a87c111eb14397c99cd653", "text": "This is a review of unsupervised learning applied to videos with the aim of learning visual representations. We look at different realizations of the notion of temporal coherence across various models. We try to understand the challenges being faced, the strengths and weaknesses of different approaches and identify directions for future work. Unsupervised Learning of Visual Representations using Videos Nitish Srivastava Department of Computer Science, University of Toronto", "title": "" }, { "docid": "0d723c344ab5f99447f7ad2ff72c0455", "text": "The aim of this study was to determine the pattern of fixations during the performance of a well-learned task in a natural setting (making tea), and to classify the types of monitoring action that the eyes perform. We used a head-mounted eye-movement video camera, which provided a continuous view of the scene ahead, with a dot indicating foveal direction with an accuracy of about 1 deg. A second video camera recorded the subject's activities from across the room. 
The videos were linked and analysed frame by frame. Foveal direction was always close to the object being manipulated, and very few fixations were irrelevant to the task. The first object-related fixation typically led the first indication of manipulation by 0.56 s, and vision moved to the next object about 0.61 s before manipulation of the previous object was complete. Each object-related act that did not involve a waiting period lasted an average of 3.3 s and involved about 7 fixations. Roughly a third of all fixations on objects could be definitely identified with one of four monitoring functions: locating objects used later in the process, directing the hand or object in the hand to a new location, guiding the approach of one object to another (e.g. kettle and lid), and checking the state of some variable (e.g. water level). We conclude that although the actions of tea-making are 'automated' and proceed with little conscious involvement, the eyes closely monitor every step of the process. This type of unconscious attention must be a common phenomenon in everyday life.", "title": "" }, { "docid": "310b8159894bc88b74a907c924277de6", "text": "We present a set of clustering algorithms that identify cluster boundaries by searching for a hyperplanar gap in unlabeled data sets. It turns out that the Normalized Cuts algorithm of Shi and Malik [1], originally presented as a graph-theoretic algorithm, can be interpreted as such an algorithm. Viewing Normalized Cuts under this light reveals that it pays more attention to points away from the center of the data set than those near the center of the data set. As a result, it can sometimes split long clusters and display sensitivity to outliers. We derive a variant of Normalized Cuts that assigns uniform weight to all points, eliminating the sensitivity to outliers.", "title": "" } ]
scidocsrr
5309ba756583c03f3b0442c4e5836714
Learning, Attentional Control, and Action Video Games
[ { "docid": "b1151d3588dc4abff883bef8c60005d1", "text": "Here, we demonstrate that action video game play enhances subjects' ability in two tasks thought to indicate the number of items that can be apprehended. Using an enumeration task, in which participants have to determine the number of quickly flashed squares, accuracy measures showed a near ceiling performance for low numerosities and a sharp drop in performance once a critical number of squares was reached. Importantly, this critical number was higher by about two items in video game players (VGPs) than in non-video game players (NVGPs). A following control study indicated that this improvement was not due to an enhanced ability to instantly apprehend the numerosity of the display, a process known as subitizing, but rather due to an enhancement in the slower more serial process of counting. To confirm that video game play facilitates the processing of multiple objects at once, we compared VGPs and NVGPs on the multiple object tracking task (MOT), which requires the allocation of attention to several items over time. VGPs were able to successfully track approximately two more items than NVGPs. Furthermore, NVGPs trained on an action video game established the causal effect of game playing in the enhanced performance on the two tasks. Together, these studies confirm the view that playing action video games enhances the number of objects that can be apprehended and suggest that this enhancement is mediated by changes in visual short-term memory skills.", "title": "" }, { "docid": "48d1f79cd3b887cced3d3a2913a25db3", "text": "Children's use of electronic media, including Internet and video gaming, has increased dramatically to an average in the general population of roughly 3 h per day. Some children cannot control their Internet use leading to increasing research on \"internet addiction.\" The objective of this article is to review the research on ADHD as a risk factor for Internet addiction and gaming, its complications, and what research and methodological questions remain to be addressed. The literature search was done in PubMed and Psychinfo, as well as by hand. Previous research has demonstrated rates of Internet addiction as high as 25% in the population and that it is addiction more than time of use that is best correlated with psychopathology. Various studies confirm that psychiatric disorders, and ADHD in particular, are associated with overuse, with severity of ADHD specifically correlated with the amount of use. ADHD children may be vulnerable since these games operate in brief segments that are not attention demanding. In addition, they offer immediate rewards with a strong incentive to increase the reward by trying the next level. The time spent on these games may also exacerbate ADHD symptoms, if not directly then through the loss of time spent on more developmentally challenging tasks. While this is a major issue for many parents, there is no empirical research on effective treatment. Internet and off-line gaming overuse and addiction are serious concerns for ADHD youth. Research is limited by the lack of measures for youth or parents, studies of children at risk, and studies of impact and treatment.", "title": "" }, { "docid": "040e5e800895e4c6f10434af973bec0f", "text": "The authors investigated the effect of action gaming on the spatial distribution of attention. The authors used the flanker compatibility effect to separately assess center and peripheral attentional resources in gamers versus nongamers. 
Gamers exhibited an enhancement in attentional resources compared with nongamers, not only in the periphery but also in central vision. The authors then used a target localization task to unambiguously establish that gaming enhances the spatial distribution of visual attention over a wide field of view. Gamers were more accurate than nongamers at all eccentricities tested, and the advantage held even when a concurrent center task was added, ruling out a trade-off between central and peripheral attention. By establishing the causal role of gaming through training studies, the authors demonstrate that action gaming enhances visuospatial attention throughout the visual field.", "title": "" } ]
[ { "docid": "fc935bf600e49db18c0a89f0945bac59", "text": "Psychological positive health and health complaints have long been ignored scientifically. Sleep plays a critical role in children and adolescents development. We aimed at studying the association of sleep duration and quality with psychological positive health and health complaints in children and adolescents from southern Spain. A randomly selected two-phase sample of 380 healthy Caucasian children (6–11.9 years) and 304 adolescents (12–17.9 years) participated in the study. Sleep duration (total sleep time), perceived sleep quality (morning tiredness and sleep latency), psychological positive health and health complaints were assessed using the Health Behaviour in School-aged Children questionnaire. The mean (standard deviation [SD]) reported sleep time for children and adolescents was 9.6 (0.6) and 8.8 (0.6) h/day, respectively. Sleep time ≥10 h was significantly associated with an increased likelihood of reporting no health complaints (OR 2.3; P = 0.005) in children, whereas sleep time ≥9 h was significantly associated with an increased likelihood of overall psychological positive health and no health complaints indicators (OR ~ 2; all P < 0.05) in adolescents. Reporting better sleep quality was associated with an increased likelihood of reporting excellent psychological positive health (ORs between 1.5 and 2.6; all P < 0.05). Furthermore, children and adolescents with no difficulty falling asleep were more likely to report no health complaints (OR ~ 3.5; all P < 0.001). Insufficient sleep duration and poor perceived quality of sleep might directly impact quality of life in children, decreasing general levels of psychological positive health and increasing the frequency of having health complaints.", "title": "" }, { "docid": "40e06996a22e1de4220a09e65ac1a04d", "text": "Obtaining a compact and discriminative representation of facial and body expressions is a difficult problem in emotion recognition. Part of the difficulty is capturing microexpressions, i.e., short, involuntary expressions that last for only a fraction of a second: at a micro-temporal scale, there are so many other subtle face and body movements that do not convey semantically meaningful information. We present a novel approach to this problem by exploiting the sparsity of the frequent micro-temporal motion patterns. Local space-time features are extracted over the face and body region for a very short time period, e.g., few milliseconds. A codebook of microexpressions is learned from the data and used to encode the features in a sparse manner. This allows us to obtain a representation that captures the most salient motion patterns of the face and body at a micro-temporal scale. Experiments performed on the AVEC 2012 dataset show our approach achieving the best published performance on the arousal dimension based solely on visual features. We also report experimental results on audio-visual emotion recognition, comparing early and late data fusion techniques.", "title": "" }, { "docid": "9d7c8b52e6ca73d31f1e71f8e77023c3", "text": "NMDA receptors mediate excitatory synaptic transmission and regulate synaptic plasticity in the central nervous system, but their dysregulation is also implicated in numerous brain disorders. Here, we describe GluN2A-selective negative allosteric modulators (NAMs) that inhibit NMDA receptors by stabilizing the apo state of the GluN1 ligand-binding domain (LBD), which is incapable of triggering channel gating. 
We describe structural determinants of NAM binding in crystal structures of the GluN1/2A LBD heterodimer, and analyses of NAM-bound LBD structures corresponding to active and inhibited receptor states reveal a molecular switch in the modulatory binding site that mediate the allosteric inhibition. NAM binding causes displacement of a valine in GluN2A and the resulting steric effects can be mitigated by the transition from glycine bound to apo state of the GluN1 LBD. This work provides mechanistic insight to allosteric NMDA receptor inhibition, thereby facilitating the development of novel classes NMDA receptor modulators as therapeutic agents.", "title": "" }, { "docid": "27a20bc4614e9ff012813a71b37ee168", "text": "Pushover analysis was performed on a nineteen story, slender concrete tower building located in San Francisco with a gross area of 430,000 square feet. Lateral system of the building consists of concrete shear walls. The building is newly designed conforming to 1997 Uniform Building Code, and pushover analysis was performed to verify code's underlying intent of Life Safety performance under design earthquake. Procedure followed for carrying out the analysis and results are presented in this paper.", "title": "" }, { "docid": "6e98c2362b504d9f4ab590d4acdc8b8f", "text": "App marketplaces are distribution platforms for mobile applications that serve as a communication channel between users and developers. These platforms allow users to write reviews about downloaded apps. Recent studies found that such reviews include information that is useful for software evolution. However, the manual analysis of a large amount of user reviews is a tedious and time consuming task. In this work we propose a taxonomy for classifying app reviews into categories relevant for software evolution. Additionally, we describe an experiment that investigates the performance of individual machine learning algorithms and its ensembles for automatically classifying the app reviews. We evaluated the performance of the machine learning techniques on 4550 reviews that were systematically labeled using content analysis methods. Overall, the ensembles had a better performance than the individual classifiers, with an average precision of 0.74 and 0.59 recall.", "title": "" }, { "docid": "df889d8492c4edfd86bbd7937d4695d1", "text": "We live in a world where there are countless interactions with computer systems in every-day situations. In the most ideal case, this interaction feels as familiar and as natural as the communication we experience with other humans. To this end, an ideal means of communication between a user and a computer system consists of audiovisual speech signals. Audiovisual text-to-speech technology allows the computer system to utter any spoken message towards its users. Over the last decades, a wide range of techniques for performing audiovisual speech synthesis has been developed. This paper gives a comprehensive overview on these approaches using a categorization of the systems based on multiple important aspects that determine the properties of the synthesized speech signals. The paper makes a clear distinction between the techniques that are used to model the virtual speaker and the techniques that are used to generate the appropriate speech gestures. 
In addition, the paper discusses the evaluation of audiovisual speech synthesizers, it elaborates on the hardware requirements for performing visual speech synthesis and it describes some important future directions that should stimulate the use of audiovisual speech synthesis technology in real-life applications.", "title": "" }, { "docid": "5fd33c0b5b305c9011760f91c75297ca", "text": "This paper analyzes the root causes of zero-rate output (ZRO) in microelectromechanical system (MEMS) vibratory gyroscopes. ZRO is one of the major challenges for high-performance gyroscopes. The knowledge of its causes is important to minimize ZRO and achieve a robust sensor design. In this paper, a new method to describe an MEMS gyroscope with a parametric state space model is introduced. The model is used to theoretically describe the behavioral influences. A new, more detailed and general gyroscope approximation is used to vary influence parameters, and to verify the method with simulations. The focus is on varying stiffness terms and an extension of the model to other gyroscope approximations is also discussed.", "title": "" }, { "docid": "15e4cfb84801e86211709a8d24979eaa", "text": "The English Lexicon Project is a multiuniversity effort to provide a standardized behavioral and descriptive data set for 40,481 words and 40,481 nonwords. It is available via the Internet at elexicon.wustl.edu. Data from 816 participants across six universities were collected in a lexical decision task (approximately 3400 responses per participant), and data from 444 participants were collected in a speeded naming task (approximately 2500 responses per participant). The present paper describes the motivation for this project, the methods used to collect the data, and the search engine that affords access to the behavioral measures and descriptive lexical statistics for these stimuli.", "title": "" }, { "docid": "7b5b9990bfef9d2baf28030123359923", "text": "a r t i c l e i n f o a b s t r a c t This review takes an evolutionary and chronological perspective on the development of strategic human resource management (SHRM) literature. We divide this body of work into seven themes that reflect the directions and trends researchers have taken over approximately thirty years of research. During this time the field took shape, developed rich conceptual foundations, and matured into a domain that has substantial influence on research activities in HR and related management disciplines. We trace how the field has evolved to its current state, articulate many of the major findings and contributions, and discuss how we believe it will evolve in the future. This approach contributes to the field of SHRM by synthesizing work in this domain and by highlighting areas of research focus that have received perhaps enough attention, as well as areas of research focus that, while promising, have remained largely unexamined. 1. Introduction Boxall, Purcell, and Wright (2007) distinguish among three major subfields of human resource management (HRM): micro HRM (MHRM), strategic HRM (SHRM), and international HRM (IHRM). Micro HRM covers the subfunctions of HR policy and practice and consists of two main categories: one with managing individuals and small groups (e.g., recruitment, selection, induction, training and development, performance management, and remuneration) and the other with managing work organization and employee voice systems (including union-management relations). 
Strategic HRM covers the overall HR strategies adopted by business units and companies and tries to measure their impacts on performance. Within this domain both design and execution issues are examined. International HRM covers HRM in companies operating across national boundaries. Since strategic HRM often covers the international context, we will include those international HRM articles that have a strategic focus. While most of the academic literature on SHRM has been published in the last 30 years, the intellectual roots of the field can be traced back to the 1920s in the U.S. (Kaufman, 2001). The concept of labor as a human resource and the strategic view of HRM policy and practice were described and discussed by labor economists and industrial relations scholars of that period, such as John Commons. Progressive companies in the 1920s intentionally formulated and adopted innovative HR practices that represented a strategic approach to the management of labor. A small, but visibly elite group of employers in this time period …", "title": "" }, { "docid": "795d4e73b3236a2b968609c39ce8f417", "text": "In this paper, we are introducing an intelligent valet parking management system that guides the cars to autonomously park within a parking lot. The IPLMS for Intelligent Parking Lot Management System, consists of two modules: 1) a model car with a set of micro-controllers and sensors which can scan the environment for suitable parking spot and avoid collision to obstacles, and a Parking Lot Management System (IPLMS) which screens the parking spaces within the parking lot and offers guidelines to the car. The model car has the capability to autonomously maneuver within the parking lot using a fuzzy logic algorithm, and execute parking in the spot determined by the IPLMS, using a parking algorithm. The car receives the instructions from the IPLMS through a wireless communication link. The IPLMS has the flexibility to be adopted by any parking management system, and can potentially save the clients time to look for a parking spot, and/or to stroll from an inaccessible parking space. Moreover, the IPLMS can decrease the financial burden from the parking lot management by offering an easy-to-install system for self-guided valet parking.", "title": "" }, { "docid": "7c6a40af29c1bd8af4b9031ef95a92cf", "text": "A broadband radial waveguide power amplifier has been designed and fabricated using a spatial power dividing/combining technique. A simple electromagnetic model of this power-dividing/combining structure has been developed. Analysis based on equivalent circuits gives the design formula for perfect power-dividing/ combining circuits. The measured small-signal gain of the eight-device power amplifier is 12 –16.5 dB over a broadband from 7 to 15 GHz. The measured maximum output power at 1-dB compression is 28.6 dBm at 10 GHz, with a power-combining efficiency of about 91%. Furthermore, the performance degradation of this power amplifier because of device failures has also been measured.", "title": "" }, { "docid": "0799b728d04cb7c01b9b527a627962a9", "text": "This paper presents a design of Two Stage CMOS operational amplifier, which operates at ±2.5V power supply using umc 2μm CMOS technology. The OP-AMP designed is a two-stage CMOS OP-AMP. The OP-AMP is designed to exhibit a unity gain frequency of 4.416MHz and exhibits a gain of 96dB with a 700 phase margin. Design and Simulation has been carried out in LT Spice tools. 
Keywords— 2 stage CMOS op-amp, design, simulation and results.", "title": "" }, { "docid": "229395d5aa7d0073ee27c4643d668b3d", "text": "with input from many other team members 6/1/2007 \"DISCLAIMER: The information contained in this paper does not represent the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency (DARPA) or the Department of Defense. DARPA does not guarantee the accuracy or reliability of the information in this paper.\" Austin Robot Technology's (ART's) entry in the DARPA Urban Challenge has two main goals. First and foremost, the team aims to create a fully autonomous vehicle that is capable of safely and robustly meeting all of the criteria laid out in the Technical Evaluation Criteria document [1]. Second, and almost as important, the team aims to produce, educate, and train members of the next generation of computer science and robotics researchers. This technical report documents our significant progress towards both of these goals as of May 2007 and presents a concrete plan to achieve them both fully by the time of the National Qualifying Event (NQE) in October. Specifically, it presents details of both our complete hardware system and our in-progress software, including design rationale, preliminary results, and future plans towards meeting the challenge. In addition, it provides details of the significant undergraduate research component of our efforts and emphasizes the educational value of the project.", "title": "" }, { "docid": "072b36d53de6a1a1419b97a1503f8ecd", "text": "In classical control of brushless dc (BLDC) motors, flux distribution is assumed trapezoidal and fed current is controlled rectangular to obtain a desired constant torque. However, in reality, this assumption may not always be correct, due to nonuniformity of magnetic material and design trade-offs. These factors, together with current controller limitation, can lead to an undesirable torque ripple. This paper proposes a new torque control method to attenuate torque ripple of BLDC motors with un-ideal back electromotive force (EMF) waveforms. In this method, the action time of pulses, which are used to control the corresponding switches, are calculated in the torque controller regarding actual back EMF waveforms in both normal conduction period and commutation period. Moreover, the influence of finite dc bus supply voltage is considered in the commutation period. Simulation and experimental results are shown that, compared with conventional rectangular current control, the proposed torque control method results in apparent reduction of the torque ripple.", "title": "" }, { "docid": "29f1e1c9c1601ba194ddcf18de804101", "text": "In this paper, we introduce Waveprint, a novel method for audio identification. Waveprint uses a combination of computer-vision techniques and large-scale-data-stream processing algorithms to create compact fingerprints of audio data that can be efficiently matched. The resulting system has excellent identification capabilities for small snippets of audio that have been degraded in a variety of manners, including competing noise, poor recording quality, and cell-phone playback. We explicitly measure the tradeoffs between performance, memory usage, and computation through extensive experimentation.", "title": "" }, { "docid": "907fe4b941bc70cddf39bc76a522205f", "text": "We introduce a flexible combination of volume, surface, and line rendering. 
We employ object-based edge detection because this allows a flexible parametrization of the generated lines. Our techniques were developed mainly for medical applications using segmented patient-individual volume datasets. In addition, we present an evaluation of the generated visualizations with 8 medical professionals and 25 laypersons. Integration of lines in conventional rendering turned out to be appropriate.", "title": "" }, { "docid": "b33e896a23f27a81f04aaeaff2f2350c", "text": "Nowadays it has become increasingly common for family members to be distributed in different time zones. These time differences pose specific challenges for communication within the family and result in different communication practices to cope with them. To gain an understanding of current challenges and practices, we interviewed people who regularly communicate with immediate family members living in other time zones. We report primary findings from the interviews, and identify design opportunities for improving the experience of cross time zone family communication.", "title": "" }, { "docid": "bc4ce8c0dce6515d1432a6baecef4614", "text": "The lsemantica command, presented in this paper, implements Latent Semantic Analysis in Stata. Latent Semantic Analysis is a machine learning algorithm for word and text similarity comparison. Latent Semantic Analysis uses Truncated Singular Value Decomposition to derive the hidden semantic relationships between words and texts. lsemantica provides a simple command for Latent Semantic Analysis in Stata as well as complementary commands for text similarity comparison.", "title": "" }, { "docid": "9b9a2a9695f90a6a9a0d800192dd76f6", "text": "Due to high competition in today's business and the need for satisfactory communication with customers, companies understand the inevitable necessity to focus not only on preventing customer churn but also on predicting their needs and providing the best services for them. The purpose of this article is to predict future services needed by wireless users, with data mining techniques. For this purpose, the database of customers of an ISP in Shiraz, which logs the customer usage of wireless internet connections, is utilized. Since internet service has three main factors to define (Time, Speed, Traffics) we predict each separately. First, future service demand is predicted by implementing a simple Recency, Frequency, Monetary (RFM) as a basic model. Other factors such as duration from first use, slope of customer's usage curve, percentage of activation, Bytes In, Bytes Out and the number of retries to establish a connection and also customer lifetime value are considered and added to RFM model. Then each one of R, F, M criteria is alternately omitted and the result is evaluated. Assessment is done through analysis node which determines the accuracy of evaluated data among partitioned data. The result shows that CART and C5.0 are the best algorithms to predict future services in this case. As for the features, depending upon output of each features, duration and transfer Bytes are the most important after RFM. An ISP may use the model discussed in this article to meet customers' demands and ensure their loyalty and satisfaction.", "title": "" }, { "docid": "b40129a15767189a7a595db89c066cf8", "text": "To increase reliability of face recognition system, the system must be able to distinguish real face from a copy of face such as a photograph. 
In this paper, we propose a fast and memory-efficient method of live face detection for an embedded face recognition system, based on the analysis of the movement of the eyes. We detect eyes in sequential input images and calculate the variation of each eye region to determine whether the input face is a real face or not. Experimental results show that the proposed approach is competitive and promising for live face detection. Keywords—Liveness Detection, Eye detection, SQI.", "title": "" }
]
scidocsrr
a4d229efc150d64d402ae82a7cccfb66
Machine Learning in Radiation Oncology: Opportunities, Requirements, and Needs
[ { "docid": "5ec4a87235a98a1ea1c01baedd6a3cc2", "text": "Chest X-ray (CXR) is one of the most commonly prescribed medical imaging procedures, often with over 2– 10x more scans than other imaging modalities such as MRI, CT scan, and PET scans. These voluminous CXR scans place significant workloads on radiologists and medical practitioners. Organ segmentation is a crucial step to obtain effective computer-aided detection on CXR. In this work, we propose Structure Correcting Adversarial Network (SCAN) to segment lung fields and the heart in CXR images. SCAN incorporates a critic network to impose on the convolutional segmentation network the structural regularities emerging from human physiology. During training, the critic network learns to discriminate between the ground truth organ annotations from the masks synthesized by the segmentation network. Through this adversarial process the critic network learns the higher order structures and guides the segmentation model to achieve realistic segmentation outcomes. Extensive experiments show that our method produces highly accurate and natural segmentation. Using only very limited training data available, our model reaches human-level performance without relying on any existing trained model or dataset. Our method also generalizes well to CXR images from a different patient population and disease profiles, surpassing the current state-of-the-art.", "title": "" } ]
[ { "docid": "d462aa7c2120ef71ad28222160211342", "text": "Current trends in the field of distance education indicate a shift in pedagogical perspectives and theoretical frameworks, with student interaction at the heart of learner-centered constructivist environments. The purpose of this article is to explore the benefits of using emerging technology tools such as wikis, blogs, and podcasts to foster student interaction in online learning. It also reviews social software applications such as WriteboardTM, InstaCollTM, and ImeemTM. Although emerging technologies offer a vast range of opportunities for promoting collaboration in both synchronous and asynchronous learning environments, distance education programs around the globe face challenges that may limit or deter implementation of these technologies. This article probes the influence of technology on theory and the possible implications this influence affords.", "title": "" }, { "docid": "777c65f8123dd718d6faefaa1fec0b15", "text": "BACKGROUND\nProcessed meat and fish have been shown to be associated with the risk of advanced prostate cancer, but few studies have examined diet after prostate cancer diagnosis and risk of its progression.\n\n\nOBJECTIVE\nWe examined the association between postdiagnostic consumption of processed and unprocessed red meat, fish, poultry, and eggs and the risk of prostate cancer recurrence or progression.\n\n\nDESIGN\nWe conducted a prospective study in 1294 men with prostate cancer, without recurrence or progression as of 2004-2005, who were participating in the Cancer of the Prostate Strategic Urologic Research Endeavor and who were followed for an average of 2 y.\n\n\nRESULTS\nWe observed 127 events (prostate cancer death or metastases, elevated prostate-specific antigen concentration, or secondary treatment) during 2610 person-years. Intakes of processed and unprocessed red meat, fish, total poultry, and skinless poultry were not associated with prostate cancer recurrence or progression. Greater consumption of eggs and poultry with skin was associated with 2-fold increases in risk in a comparison of extreme quantiles: eggs [hazard ratio (HR): 2.02; 95% CI: 1.10, 3.72; P for trend = 0.05] and poultry with skin (HR: 2.26; 95% CI: 1.36, 3.76; P for trend = 0.003). An interaction was observed between prognostic risk at diagnosis and poultry. Men with high prognostic risk and a high poultry intake had a 4-fold increased risk of recurrence or progression compared with men with low/intermediate prognostic risk and a low poultry intake (P for interaction = 0.003).\n\n\nCONCLUSIONS\nOur results suggest that the postdiagnostic consumption of processed or unprocessed red meat, fish, or skinless poultry is not associated with prostate cancer recurrence or progression, whereas consumption of eggs and poultry with skin may increase the risk.", "title": "" }, { "docid": "146d5e7a8079a0b5171d9bc2813f3052", "text": "The Shape Boltzmann Machine (SBM) [1] has recently been introduced as a stateof-the-art model of foreground/background object shape. We extend the SBM to account for the foreground object’s parts. Our new model, the Multinomial SBM (MSBM), can capture both local and global statistics of part shapes accurately. We combine the MSBM with an appearance model to form a fully generative model of images of objects. Parts-based object segmentations are obtained simply by performing probabilistic inference in the model. 
We apply the model to two challenging datasets which exhibit significant shape and appearance variability, and find that it obtains results that are comparable to the state-of-the-art. There has been significant focus in computer vision on object recognition and detection e.g. [2], but a strong desire remains to obtain richer descriptions of objects than just their bounding boxes. One such description is a parts-based object segmentation, in which an image is partitioned into multiple sets of pixels, each belonging to either a part of the object of interest, or its background. The significance of parts in computer vision has been recognized since the earliest days of the field (e.g. [3, 4, 5]), and there exists a rich history of work on probabilistic models for parts-based segmentation e.g. [6, 7]. Many such models only consider local neighborhood statistics, however several models have recently been proposed that aim to increase the accuracy of segmentations by also incorporating prior knowledge about the foreground object’s shape [8, 9, 10, 11]. In such cases, probabilistic techniques often mainly differ in how accurately they represent and learn about the variability exhibited by the shapes of the object’s parts. Accurate models of the shapes and appearances of parts can be necessary to perform inference in datasets that exhibit large amounts of variability. In general, the stronger the models of these two components, the more performance is improved. A generative model has the added benefit of being able to generate samples, which allows us to visually inspect the quality of its understanding of the data and the problem. Recently, a generative probabilistic model known as the Shape Boltzmann Machine (SBM) has been used to model binary object shapes [1]. The SBM has been shown to constitute the state-of-the-art and it possesses several highly desirable characteristics: samples from the model look realistic, and it generalizes to generate samples that differ from the limited number of examples it is trained on. The main contributions of this paper are as follows: 1) In order to account for object parts we extend the SBM to use multinomial visible units instead of binary ones, resulting in the Multinomial Shape Boltzmann Machine (MSBM), and we demonstrate that the MSBM constitutes a strong model of parts-based object shape. 2) We combine the MSBM with an appearance model to form a fully generative model of images of objects (see Fig. 1). We show how parts-based object segmentations can be obtained simply by performing probabilistic inference in the model. We apply our model to two challenging datasets and find that in addition to being principled and fully generative, the model’s performance is comparable to the state-of-the-art.", "title": "" }, { "docid": "933398ff8f74a99bec6ea6e794910a8e", "text": "Cognitive computing is an interdisciplinary research field that simulates human thought processes in a computerized model. One application for cognitive computing is sentiment analysis on online reviews, which reflects opinions and attitudes toward products and services experienced by consumers. A high level of classification performance facilitates decision making for both consumers and firms. However, while much effort has been made to propose advanced classification algorithms to improve the performance, the importance of the textual quality of the data has been ignored. 
This research explores the impact of two influential textual features, namely the word count and review readability, on the performance of sentiment classification. We apply three representative deep learning techniques, namely SRN, LSTM, and CNN, to sentiment analysis tasks on a benchmark movie reviews dataset. Multiple regression models are further employed for statistical analysis. Our findings show that the dataset with reviews having a short length and high readability could achieve the best performance compared with any other combinations of the levels of word count and readability and that controlling the review length is more effective for garnering a higher level of accuracy than increasing the readability. Based on these findings, a practical application, i.e., a text evaluator or a website plug-in for text evaluation, can be developed to provide a service of review editorials and quality control for crowd-sourced review websites. These findings greatly contribute to generating more valuable reviews with high textual quality to better serve sentiment analysis and decision making.", "title": "" }, { "docid": "55b76c1b1d4cabee6ebbe9aa26c4058f", "text": "The Fundamental Law of Information Recovery states, informally, that “overly accurate” estimates of “too many” statistics completely destroys privacy ([DN03] et sequelae). Differential privacy is a mathematically rigorous definition of privacy tailored to analysis of large datasets and equipped with a formal measure of privacy loss [DMNS06, Dwo06]. Moreover, differentially private algorithms take as input a parameter, typically called ε, that caps the permitted privacy loss in any execution of the algorithm and offers a concrete privacy/utility tradeoff. One of the strengths of differential privacy is the ability to reason about cumulative privacy loss over multiple analyses, given the values of ε used in each individual analysis. By appropriate choice of ε it is possible to stay within the bounds of the Fundamental Law while releasing any given number of estimated statistics; however, before this work the bounds were not tight. Roughly speaking, differential privacy ensures that the outcome of any anlysis on a database x is distributed very similarly to the outcome on any neighboring database y that differs from x in just one row (Definition 2.3). That is, differentially private algorithms are randomized, and in particular the max divergence between these two distributions (a sort maximum log odds ratio for any event; see Definition 2.2 below) is bounded by the privacy parameter ε. This absolute guarantee on the maximum privacy loss is now sometimes referred to as “pure” differential privacy. A popular relaxation, (ε, δ)-differential privacy (Definition 2.4)[DKM+06], guarantees that with probability at most 1−δ the privacy loss does not exceed ε.1 Typically δ is taken to be “cryptographically” small, that is, smaller than the inverse of any polynomial in the size of the dataset, and pure differential privacy is simply the special case in which δ = 0. The relaxation frequently permits asymptotically better accuracy than pure differential privacy for the same value of ε, even when δ is very small. What happens in the case of multiple analyses? 
While the composition of k (ε, 0)-differentially privacy algorithms is at worst (kε, 0)-differentially private, it is also simultaneously ( √", "title": "" }, { "docid": "c0025b54f12b3f813d2b51549320821f", "text": "BACKGROUND\nDespite the pervasive use of smartphones among university students, there is still a dearth of research examining the association between smartphone use and psychological well-being among this population. The current study addresses this research gap by investigating the relationship between smartphone use and psychological well-being among university students in Thailand.\n\n\nMETHODS\nThis cross-sectional study was conducted from January to March 2018 among university students aged 18-24 years from the largest university in Chiang Mai, Thailand. The primary outcome was psychological well-being, and was assessed using the Flourishing Scale. Smartphone use, the primary independent variable, was measured by five items which had been adapted from the eight-item Young Diagnostic Questionnaire for Internet Addiction. All scores above the median value were defined as being indicative of excessive smartphone use.\n\n\nRESULTS\nOut of the 800 respondents, 405 (50.6%) were women. In all, 366 (45.8%) students were categorized as being excessive users of smartphones. Students with excessive use of smartphones had lower scores the psychological well-being than those who did not use smartphone excessively (B = -1.60; P < 0.001). Female students had scores for psychological well-being that were, on average, 1.24 points higher than the scores of male students (P < 0.001).\n\n\nCONCLUSION\nThis study provides some of the first insights into the negative association between excessive smartphone use and the psychological well-being of university students. Strategies designed to promote healthy smartphone use could positively impact the psychological well-being of students.", "title": "" }, { "docid": "7138c13d88d87df02c7dbab4c63328c4", "text": "Banisteriopsis caapi is the basic ingredient of ayahuasca, a psychotropic plant tea used in the Amazon for ritual and medicinal purposes, and by interested individuals worldwide. Animal studies and recent clinical research suggests that B. caapi preparations show antidepressant activity, a therapeutic effect that has been linked to hippocampal neurogenesis. Here we report that harmine, tetrahydroharmine and harmaline, the three main alkaloids present in B. caapi, and the harmine metabolite harmol, stimulate adult neurogenesis in vitro. In neurospheres prepared from progenitor cells obtained from the subventricular and the subgranular zones of adult mice brains, all compounds stimulated neural stem cell proliferation, migration, and differentiation into adult neurons. These findings suggest that modulation of brain plasticity could be a major contribution to the antidepressant effects of ayahuasca. They also expand the potential application of B. caapi alkaloids to other brain disorders that may benefit from stimulation of endogenous neural precursor niches.", "title": "" }, { "docid": "1201c549760d3d5d47df540c5d542f26", "text": "A low-cost, hand-held harmonic radar is described for tracking tagged amphibians weighting less than a gram during a cryptic period of their life history. The radar is based on inexpensive, commercial 5.8 GHz wireless communications and 11.6 GHz satellite television technology. 
The performance of the system was accurately predicted from laboratory measurements by defining an appropriate harmonic tag conversion efficiency. The harmonic radar has a demonstrated maximum tag detection range of 20 ft. The best performance was achieved with an asymmetric, dielectric sleeved dipole with a 1:2 arm length ratio.", "title": "" }, { "docid": "e85e66b6ad6324a07ca299bf4f3cd447", "text": "To date, the majority of ad hoc routing protocol research has been done using simulation only. One of the most motivating reasons to use simulation is the difficulty of creating a real implementation. In a simulator, the code is contained within a single logical component, which is clearly defined and accessible. On the other hand, creating an implementation requires use of a system with many components, including many that have little or no documentation. The implementation developer must understand not only the routing protocol, but all the system components and their complex interactions. Further, since ad hoc routing protocols are significantly different from traditional routing protocols, a new set of features must be introduced to support the routing protocol. In this paper we describe the event triggers required for AODV operation, the design possibilities and the decisions for our ad hoc on-demand distance vector (AODV) routing protocol implementation, AODV-UCSB. This paper is meant to aid researchers in developing their own on-demand ad hoc routing protocols and assist users in determining the implementation design that best fits their needs.", "title": "" }, { "docid": "3cf1197436af89889edc04cae8acfb0f", "text": "The rapid growth of new radio technologies for Smart City/Building/Home applications means that models of cross-technology interference are needed to inform the development of higher layer protocols and applications. We systematically investigate interference interactions between LoRa and IEEE 802.15.4g networks. Our results show that LoRa can obtain high packet reception rates, even in presence of strong IEEE 802.15.4g interference. IEEE 802.15.4g is also shown to have some resilience to LoRa interference. Both effects are highly dependent on the LoRa radio's spreading factor and bandwidth configuration, as well as on the channelization. The results are shown to arise from the interaction between the two radios' modulation schemes. The data have implications for the design and analysis of protocols for both radio technologies.", "title": "" }, { "docid": "d81a287ab942c60980b0599007e1a2d6", "text": "MicroRNAs (miRNAs) are small and non-coding RNA molecules that inhibit gene expression posttranscriptionally. They play important roles in several biological processes, and in recent years there has been an interest in studying how they are related to the pathogenesis of diseases. Although there are already some databases that contain information for miRNAs and their relation with illnesses, their curation represents a significant challenge due to the amount of information that is being generated every day. In particular, respiratory diseases are poorly documented in databases, despite the fact that they are of increasing concern regarding morbidity, mortality and economic impacts. In this work, we present the results that we obtained in the BioCreative Interactive Track (IAT), using a semiautomatic approach for improving biocuration of miRNAs related to diseases. Our procedures will be useful to complement databases that contain this type of information. 
We adapted the OntoGene text mining pipeline and the ODIN curation system in a full-text corpus of scientific publications concerning one specific respiratory disease: idiopathic pulmonary fibrosis, the most common and aggressive of the idiopathic interstitial cases of pneumonia. We curated 823 miRNA text snippets and found a total of 246 miRNAs related to this disease based on our semiautomatic approach with the system OntoGene/ODIN. The biocuration throughput improved by a factor of 12 compared with traditional manual biocuration. A significant advantage of our semiautomatic pipeline is that it can be applied to obtain the miRNAs of all the respiratory diseases and offers the possibility to be used for other illnesses.\n\n\nDatabase URL\nhttp://odin.ccg.unam.mx/ODIN/bc2015-miRNA/.", "title": "" }, { "docid": "dddab001e2a45200b02507f042b72499", "text": "Interest in automatic crowd behaviour analysis has grown considerably in the last few years. Crowd behaviour analysis has become an integral part all over the world for ensuring peaceful event organizations and minimum casualties in the places of public and religious interests. Traditionally, the area of crowd analysis was computed using handcrafted features. However, the real-world images and videos consist of nonlinearity that must be used efficiently for gaining accuracies in the results. As in many other computer vision areas, deep learning-based methods have taken giant strides for obtaining state-of-the-art performance in crowd behaviour analysis. This paper presents a comprehensive survey of current convolution neural network (CNN)-based methods for crowd behaviour analysis. We have also surveyed popular software tools for CNN in the recent years. This survey presents detailed attributes of CNN with special emphasis on optimization methods that have been utilized in CNN-based methods. It also reviews fundamental and innovative methodologies, both conventional and latest methods of CNN, reported in the last few years. We introduce a taxonomy that summarizes important aspects of the CNN for approaching crowd behaviour analysis. Details of the proposed architectures, crowd analysis needs and their respective datasets are reviewed. In addition, we summarize and discuss the main works proposed so far with particular interest on CNNs on how they treat the temporal dimension of data, their highlighting features and opportunities and challenges for future research. To the best of our knowledge, this is a unique survey for crowd behaviour analysis using the CNN. We hope that this survey would become a reference in this ever-evolving field of research.", "title": "" }, { "docid": "eaf30f31b332869bc45ff1288c41da71", "text": "Search Engines: Information Retrieval In Practice is writen by Bruce Croft in English language. Release on 2009-02-16, this book has 552 page count that consist of helpful information with easy reading experience. The book was publish by Addison-Wesley, it is one of best subjects book genre that gave you everything love about reading. You can find Search Engines: Information Retrieval In Practice book with ISBN 0136072240.", "title": "" }, { "docid": "b86bd120f306cb5be466a691c0899399", "text": "Multirate refresh techniques exploit the non-uniformity in retention times of DRAM cells to reduce the DRAM refresh overheads. Such techniques rely on accurate profiling of retention times of cells, and perform faster refresh only for a few rows which have cells with low retention times. 
Unfortunately, retention times of some cells can change at runtime due to Variable Retention Time (VRT), which makes it impractical to reliably deploy multirate refresh. Based on experimental data from 24 DRAM chips, we develop architecture-level models for analyzing the impact of VRT. We show that simply relying on ECC DIMMs to correct VRT failures is unusable as it causes a data error once every few months. We propose AVATAR, a VRT-aware multirate refresh scheme that adaptively changes the refresh rate for different rows at runtime based on current VRT failures. AVATAR provides a time to failure in the regime of several tens of years while reducing refresh operations by 62%-72%.", "title": "" }, { "docid": "073ec1e3b8c6feab18f2ae53eab5cc24", "text": "Deep belief nets have been successful in modeling handwritten characters, but it has proved more difficult to apply them to real images. The problem lies in the restricted Boltzmann machine (RBM) which is used as a module for learning deep belief nets one layer at a time. The Gaussian-Binary RBMs that have been used to model real-valued data are not a good way to model the covariance structure of natural images. We propose a factored 3-way RBM that uses the states of its hidden units to represent abnormalities in the local covariance structure of an image. This provides a probabilistic framework for the widely used simple/complex cell architecture. Our model learns binary features that work very well for object recognition on the “tiny images” data set. Even better features are obtained by then using standard binary RBM’s to learn a deeper model.", "title": "" }, { "docid": "fa4480bbc460658bd1ea5804fdebc5ed", "text": "This paper examines the problem of how to teach multiple tasks to a Reinforcement Learning (RL) agent. To this end, we use Linear Temporal Logic (LTL) as a language for specifying multiple tasks in a manner that supports the composition of learned skills. We also propose a novel algorithm that exploits LTL progression and off-policy RL to speed up learning without compromising convergence guarantees, and show that our method outperforms the state-of-the-art approach on randomly generated Minecraft-like grids.", "title": "" }, { "docid": "ee9cb495280dc6e252db80c23f2f8c2b", "text": "Due to the dramatical increase in popularity of mobile devices in the last decade, more sensitive user information is stored and accessed on these devices everyday. However, most existing technologies for user authentication only cover the login stage or only work in restricted controlled environments or GUIs in the post login stage. In this work, we present TIPS, a Touch based Identity Protection Service that implicitly and unobtrusively authenticates users in the background by continuously analyzing touch screen gestures in the context of a running application. To the best of our knowledge, this is the first work to incorporate contextual app information to improve user authentication. We evaluate TIPS over data collected from 23 phone owners and deployed it to 13 of them with 100 guest users. TIPS can achieve over 90% accuracy in real-life naturalistic conditions within a small amount of computational overhead and 6% of battery usage.", "title": "" }, { "docid": "409a7199f73e4dcdffe350e906c03d0f", "text": "In this letter, we propose a protocol for an automatic food recognition system that identifies the contents of the meal from the images of the food. 
We developed a multilayered convolutional neural network (CNN) pipeline that takes advantages of the features from other deep networks and improves the efficiency. Numerous traditional handcrafted features and methods are explored, among which CNNs are chosen as the best performing features. Networks are trained and fine-tuned using preprocessed images and the filter outputs are fused to achieve higher accuracy. Experimental results on the largest real-world food recognition database ETH Food-101 and newly contributed Indian food image database demonstrate the effectiveness of the proposed methodology as compared to many other benchmark deep learned CNN frameworks.", "title": "" }, { "docid": "0d20f5ae084c6ca4e7a834e1eee1e84c", "text": "Gantry-tilted helical multi-slice computed tomography (CT) refers to the helical scanning CT system equipped with multi-row detector operating at some gantry tilting angle. Its purpose is to avoid the area which is vulnerable to the X-ray radiation. The local tomography is to reduce the total radiation dose by only scanning the region of interest for image reconstruction. In this paper we consider the scanning scheme, and incorporate the local tomography technique with the gantry-tilted helical multi-slice CT. The image degradation problem caused by gantry tilting is studied, and a new error correction method is proposed to deal with this problem in the local CT. Computer simulation shows that the proposed method can enhance the local imaging performance in terms of image sharpness and artifacts reduction", "title": "" }, { "docid": "d2fdb8438802a540ec20a08bf23a7454", "text": "In this paper requirements and conditions for the visitor identification system are outlined and an example system is proposed. Two main subsystems: face detection and face recognition are described. Algorithm for face detection integrates skin-colour, mask analysis, facial features (fast and effective way of eyes localization is presented), reductors, knowledge and template matching. For face recognition a three stage algorithm is proposed. It utilizes well known methods connected in a sequential mode. To improve accuracy and speed some modifications to original methods were proposed and new one presented. The aim was to build a visitor identification system which would be able to operate in mode with a camera and present results in real-time. The emphasis on speed and accuracy was stressed.", "title": "" } ]
scidocsrr
93ac05bfa1ed7cf8d003a7a9ab4473d6
Video-Based Person Re-Identification With Accumulative Motion Context
[ { "docid": "2bc30693be1c5855a9410fb453128054", "text": "Person re-identification is to match pedestrian images from disjoint camera views detected by pedestrian detectors. Challenges are presented in the form of complex variations of lightings, poses, viewpoints, blurring effects, image resolutions, camera settings, occlusions and background clutter across camera views. In addition, misalignment introduced by the pedestrian detector will affect most existing person re-identification methods that use manually cropped pedestrian images and assume perfect detection. In this paper, we propose a novel filter pairing neural network (FPNN) to jointly handle misalignment, photometric and geometric transforms, occlusions and background clutter. All the key components are jointly optimized to maximize the strength of each component when cooperating with others. In contrast to existing works that use handcrafted features, our method automatically learns features optimal for the re-identification task from data. The learned filter pairs encode photometric transforms. Its deep architecture makes it possible to model a mixture of complex photometric and geometric transforms. We build the largest benchmark re-id dataset with 13, 164 images of 1, 360 pedestrians. Unlike existing datasets, which only provide manually cropped pedestrian images, our dataset provides automatically detected bounding boxes for evaluation close to practical applications. Our neural network significantly outperforms state-of-the-art methods on this dataset.", "title": "" } ]
[ { "docid": "d281c9d3862c4e0988247f7fe1e8a702", "text": "The vaginal microbial community is typically characterized by abundant lactobacilli. Lactobacillus iners, a fairly recently detected species, is frequently present in the vaginal niche. However, the role of this species in vaginal health is unclear, since it can be detected in normal conditions as well as during vaginal dysbiosis, such as bacterial vaginosis, a condition characterized by an abnormal increase in bacterial diversity and lack of typical lactobacilli. Compared to other Lactobacillus species, L. iners has more complex nutritional requirements and a Gram-variable morphology. L. iners has an unusually small genome (ca. 1 Mbp), indicative of a symbiotic or parasitic lifestyle, in contrast to other lactobacilli that show niche flexibility and genomes of up to 3-4 Mbp. The presence of specific L. iners genes, such as those encoding iron-sulfur proteins and unique σ-factors, reflects a high degree of niche specification. The genome of L. iners strains also encodes inerolysin, a pore-forming toxin related to vaginolysin of Gardnerella vaginalis. Possibly, this organism may have clonal variants that in some cases promote a healthy vagina, and in other cases are associated with dysbiosis and disease. Future research should examine this friend or foe relationship with the host.", "title": "" }, { "docid": "b7944edc9e6704cbf59489f112f46c11", "text": "The basic paradigm of asset pricing is in vibrant f lux. The purely rational approach is being subsumed by a broader approach based upon the psychology of investors. In this approach, security expected returns are determined by both risk and misvaluation. This survey sketches a framework for understanding decision biases, evaluates the a priori arguments and the capital market evidence bearing on the importance of investor psychology for security prices, and reviews recent models. The best plan is . . . to profit by the folly of others. — Pliny the Elder, from John Bartlett, comp. Familiar Quotations, 9th ed. 1901. IN THE MUDDLED DAYS BEFORE THE RISE of modern finance, some otherwisereputable economists, such as Adam Smith, Irving Fisher, John Maynard Keynes, and Harry Markowitz, thought that individual psychology affects prices.1 What if the creators of asset-pricing theory had followed this thread? Picture a school of sociologists at the University of Chicago proposing the Deficient Markets Hypothesis: that prices inaccurately ref lect all available information. A brilliant Stanford psychologist, call him Bill Blunte, invents the Deranged Anticipation and Perception Model ~or DAPM!, in which proxies for market misvaluation are used to predict security returns. Imagine the euphoria when researchers discovered that these mispricing proxies ~such * Hirshleifer is from the Fisher College of Business, The Ohio State University. This survey was written for presentation at the American Finance Association Annual Meetings in New Orleans, January, 2001. I especially thank the editor, George Constantinides, for valuable comments and suggestions. 
I also thank Franklin Allen, the discussant, Nicholas Barberis, Robert Bloomfield, Michael Brennan, Markus Brunnermeier, Joshua Coval, Kent Daniel, Ming Dong, Jack Hirshleifer, Harrison Hong, Soeren Hvidkjaer, Ravi Jagannathan, Narasimhan Jegadeesh, Andrew Karolyi, Charles Lee, Seongyeon Lim, Deborah Lucas, Rajnish Mehra, Norbert Schwarz, Jayanta Sen, Tyler Shumway, René Stulz, Avanidhar Subrahmanyam, Siew Hong Teoh, Sheridan Titman, Yue Wang, Ivo Welch, and participants of the Dice Finance Seminar at Ohio State University for very helpful discussions and comments. 1 Smith analyzed how the “overweening conceit” of mankind caused labor to be underpriced in more enterprising pursuits. Young workers do not arbitrage away pay differentials because they are prone to overestimate their ability to succeed. Fisher wrote a book on money illusion; in The Theory of Interest ~~1930!, pp. 493–494! he argued that nominal interest rates systematically fail to adjust sufficiently for inf lation, and explained savings behavior in relation to self-control, foresight, and habits. Keynes ~1936! famously commented on animal spirits in stock markets. Markowitz ~1952! proposed that people focus on gains and losses relative to reference points, and that this helps explain the pricing of insurance and lotteries. THE JOURNAL OF FINANCE • VOL. LVI, NO. 4 • AUGUST 2001", "title": "" }, { "docid": "f8330ca9f2f4c05c26d679906f65de04", "text": "In recent years, VDSL2 standard has been gaining popularity as a high speed network access technology to deliver triple play services of video, voice and data. These services require strict quality-of-experience (QoE) and quality-of-services (QoS) on DSL systems operating in an impulse noise environment. The DSL systems, in-turn, are affected severely in the presence of impulse noise in the telephone line. Therefore to improve upon the requirements of IPTV under the impulse noise conditions the standard body has been evaluating various proposals to mitigate and reduce the error rates. This paper lists and qualitatively compares various initiatives that have been suggested in the VDSL2 standard body to improve the protection of VDSL2 services against impulse noise.", "title": "" }, { "docid": "981a03df711c7c9aabdf163487887824", "text": "We introduce a new paradigm to investigate unsupervised learning, reducing unsupervised learning to supervised learning. Specifically, we mitigate the subjectivity in unsupervised decision-making by leveraging knowledge acquired from prior, possibly heterogeneous, supervised learning tasks. We demonstrate the versatility of our framework via comprehensive expositions and detailed experiments on several unsupervised problems such as (a) clustering, (b) outlier detection, and (c) similarity prediction under a common umbrella of meta-unsupervised-learning. We also provide rigorous PAC-agnostic bounds to establish the theoretical foundations of our framework, and show that our framing of metaclustering circumvents Kleinberg’s impossibility theorem for clustering.", "title": "" }, { "docid": "0510a9c94b3e6b94bca58ffc09decac5", "text": "In this work, we address the problem of lane change maneuver prediction in highway scenarios using information from sensors and perception systems widely used in automated driving. Our prediction approach is two-fold. First, a driver model learned from demonstrations via Inverse Reinforcement Learning is used to equip a host vehicle with the anticipatory behavior reasoning capability of common drivers. 
Second, inference on an interaction-aware augmented Switching State-Space Model allows the approach to account for the dynamic evidence observed. The use of a driver model that correctly balances the driving and risk-aversive preferences of a driver allows the computation of a planning-based maneuver prediction. Integrating this anticipatory prediction into the maneuver inference engine brings a degree of scene understanding into the estimate and leads to faster lane change detections compared to those obtained by relying on dynamics alone. The performance of the presented framework is evaluated using highway data collected with an instrumented vehicle. The combination of model-based maneuver prediction and filtering-based state and maneuver tracking is shown to outperform an Interacting Multiple Model filter in the detection of highway lane change maneuvers regarding accuracy, detection latency — by an average of 0.4 seconds- and false-positive rates.", "title": "" }, { "docid": "255a155986548bb873ee0bc88a00222b", "text": "Patterns of neural activity are systematically elicited as the brain experiences categorical stimuli and a major challenge is to understand what these patterns represent. Two influential approaches, hitherto treated as separate analyses, have targeted this problem by using model-representations of stimuli to interpret the corresponding neural activity patterns. Stimulus-model-based-encoding synthesizes neural activity patterns by first training weights to map between stimulus-model features and voxels. This allows novel model-stimuli to be mapped into voxel space, and hence the strength of the model to be assessed by comparing predicted against observed neural activity. Representational Similarity Analysis (RSA) assesses models by testing how well the grand structure of pattern-similarities measured between all pairs of model-stimuli aligns with the same structure computed from neural activity patterns. RSA does not require model fitting, but also does not allow synthesis of neural activity patterns, thereby limiting its applicability. We introduce a new approach, representational similarity-encoding, that builds on the strengths of RSA and robustly enables stimulus-model-based neural encoding without model fitting. The approach therefore sidesteps problems associated with overfitting that notoriously confront any approach requiring parameter estimation (and is consequently low cost computationally), and importantly enables encoding analyses to be incorporated within the wider Representational Similarity Analysis framework. We illustrate this new approach by using it to synthesize and decode fMRI patterns representing the meanings of words, and discuss its potential biological relevance to encoding in semantic memory. Our new similarity-based encoding approach unites the two previously disparate methods of encoding models and RSA, capturing the strengths of both, and enabling similarity-based synthesis of predicted fMRI patterns.", "title": "" }, { "docid": "50283f1442d6e50ac6f8334ab992cbc6", "text": "The objective of ent i ty identification i s t o determine the correspondence between object instances f r o m more than one database. This paper ezamines the problem at the instance level assuming that schema level heterogeneity has been resolved a priori . Soundness and completeness are defined as the desired properties of any ent i ty identification technique. 
To achieve soundness, a set of ident i ty and distinctness rules are established for enti t ies in the integrated world. W e propose the use of eztended key, which i s the union of keys (and possibly other attributes) f r o m the relations t o be matched, and i t s corresponding ident i ty rule, t o determine the equivalence between tuples f r o m relations which m a y not share any common key. Instance level funct ional dependencies (ILFD), a f o r m of semantic constraint information about the real-world entities, are used t o derive the missing eztended key attribute values of a tuple.", "title": "" }, { "docid": "965aa0d30aa761c2218fdc3cbc3b8d92", "text": "In this paper, a MOSFET-based pulsed power supply capable of supplying square pulses of up to 3000 V and widths from nanoseconds to milliseconds is presented and used for an investigation into electroporation-mediated delivery of a plasmid DNA molecule into the pathogenic bacterium Escherichia coli O157:H7. It was concluded that increasing the electric field strength and pulse amplitude resulted in an increase in the number of transformants. However, increasing the number of pulses had the effect of reducing the number of transformants. In all the experiments, the number of cells that were inactivated by the exposure to the pulsed electric field were also measured.", "title": "" }, { "docid": "63de624a33f7c9362b477aabd9faac51", "text": "24 GHz circularly polarized Doppler front-end with a single antenna is developed. The radar system is composed of 24 GHz circularly polarized Doppler radar module, signal conditioning block, DAQ unit, and signal processing program. 24 GHz Doppler radar receiver front-end IC which is comprised of 3-stage LNA, single-ended mixer, and Lange coupler is fabricated with commercial InGaP/GaAs HBT technology. To reduce the chip size and suppress self-mixing, single-ended mixer which uses Tx leakage as a LO signal of the mixer is used. The operation of the developed radar front-end is demonstrated by measuring human vital signal. Compact size and high sensitivity can be achieved at the same time with the circularly polarized Doppler radar with a single antenna.", "title": "" }, { "docid": "4b25c7e58f49784d525398f4611b7ffa", "text": "In this work, we studied the extraction process of papain, present in the latex of papaya fruit (Carica papaya L.) cv. Maradol. The variables studied in the extraction of papain were: latex:alcohol ratio (1:2.1 and 1:3) and drying method (vacuum and refractance window). Papain enzyme responses were obtained in terms of enzymatic activity and yield of the extraction process. The best result in terms of enzyme activity and yield was obtained by vacuum drying and a latex:alcohol ratio of 1:3. The enzyme obtained was characterized by physicochemical and microbiological properties and, enzymatic activity when compared with a commercial sample used as standard.", "title": "" }, { "docid": "717ea3390ffe3f3132d4e2230e645ee5", "text": "Much of what is known about physiological systems has been learned using linear system theory. However, many biomedical signals are apparently random or aperiodic in time. Traditionally, the randomness in biological signals has been ascribed to noise or interactions between very large numbers of constituent components. One of the most important mathematical discoveries of the past few decades is that random behavior can arise in deterministic nonlinear systems with just a few degrees of freedom. 
This discovery gives new hope to providing simple mathematical models for analyzing, and ultimately controlling, physiological systems. The purpose of this chapter is to provide a brief pedagogic survey of the main techniques used in nonlinear time series analysis and to provide a MATLAB tool box for their implementation. Mathematical reviews of techniques in nonlinear modeling and forecasting can be found in Refs. 1-5. Biomedical signals that have been analyzed using these techniques include heart rate [6-8], nerve activity [9], renal flow [10], arterial pressure [11], electroencephalogram [12], and respiratory waveforms [13]. Section 2 provides a brief overview of dynamical systems theory including phase space portraits, Poincare surfaces of section, attractors, chaos, Lyapunov exponents, and fractal dimensions. The forced Duffing-Van der Pol oscillator (a ubiquitous model in engineering problems) is investigated as an illustrative example. Section 3 outlines the theoretical tools for time series analysis using dynamical systems theory. Reliability checks based on forecasting and surrogate data are also described. The time series methods are illustrated using data from the time evolution of one of the dynamical variables of the forced Duffing-Van der Pol oscillator. Section 4 concludes with a discussion of possible future directions for applications of nonlinear time series analysis in biomedical processes.", "title": "" }, { "docid": "e59d1a3936f880233001eb086032d927", "text": "In microblogging services such as Twitter, the users may become overwhelmed by the raw data. One solution to this problem is the classification of short text messages. As short texts do not provide sufficient word occurrences, traditional classification methods such as \"Bag-Of-Words\" have limitations. To address this problem, we propose to use a small set of domain-specific features extracted from the author's profile and text. The proposed approach effectively classifies the text to a predefined set of generic classes such as News, Events, Opinions, Deals, and Private Messages.", "title": "" }, { "docid": "e8246712bb8c4e793697b9933ab8b4f6", "text": "In this paper we utilize a dimensional emotion representation named Resonance-Arousal-Valence to express music emotion and inverse exponential function to represent emotion decay process. The relationship between acoustic features and their emotional impact reflection based on this representation has been well constructed. As music well expresses feelings, through the users' historical playlist in a session, we utilize the Conditional Random Fields to compute the probabilities of different emotion states, choosing the largest as the predicted user's emotion state. In order to recommend music based on the predicted user's emotion, we choose the optimized ranked music list that has the highest emotional similarities to the music invoking the predicted emotion state in the playlist for recommendation. We utilize our minimization iteration algorithm to assemble the optimized ranked recommended music list. The experiment results show that the proposed emotion-based music recommendation paradigm is effective to track the user's emotions and recommend music fitting his emotional state.", "title": "" }, { "docid": "c913313524862f21df94651f78616e09", "text": "The solidity is one of the most important factors which greatly affects the performance of the straight-bladed vertical axis wind turbine (SB-VAWT). 
In this study, numerical computations were carried out on a small model of the SB-VAWT with different solidities to invest its performance effects. Two kinds of solidity were decided, and for each one, three patterns were selected by changing the blade chord and number. Numerical computations based on the 2 dimensions incompressible steady flow were made. Flow fields around the SB-VAWT were obtained, and the torque and power coefficients were also calculated. According to the computation results under the conditions of this study, the effects of solidity on both the static and dynamic performance of the SB-VAWT were discussed. Keywords-vertical axis wind turbine;straight-bladed; numerical computation; solidity; stactic torque;power", "title": "" }, { "docid": "f9a3f69cf26b279fa8600fd2ebbc3426", "text": "We introduce Interactive Question Answering (IQA), the task of answering questions that require an autonomous agent to interact with a dynamic visual environment. IQA presents the agent with a scene and a question, like: \"Are there any apples in the fridge?\" The agent must navigate around the scene, acquire visual understanding of scene elements, interact with objects (e.g. open refrigerators) and plan for a series of actions conditioned on the question. Popular reinforcement learning approaches with a single controller perform poorly on IQA owing to the large and diverse state space. We propose the Hierarchical Interactive Memory Network (HIMN), consisting of a factorized set of controllers, allowing the system to operate at multiple levels of temporal abstraction. To evaluate HIMN, we introduce IQUAD V1, a new dataset built upon AI2-THOR [35], a simulated photo-realistic environment of configurable indoor scenes with interactive objects. IQUAD V1 has 75,000 questions, each paired with a unique scene configuration. Our experiments show that our proposed model outperforms popular single controller based methods on IQUAD V1. For sample questions and results, please view our video: https://youtu.be/pXd3C-1jr98.", "title": "" }, { "docid": "0ce0db75982c205b581bc24060b9e2a4", "text": "Maxim Gumin's WaveFunctionCollapse (WFC) algorithm is an example-driven image generation algorithm emerging from the craft practice of procedural content generation. In WFC, new images are generated in the style of given examples by ensuring every local window of the output occurs somewhere in the input. Operationally, WFC implements a non-backtracking, greedy search method. This paper examines WFC as an instance of constraint solving methods. We trace WFC's explosive influence on the technical artist community, explain its operation in terms of ideas from the constraint solving literature, and probe its strengths by means of a surrogate implementation using answer set programming.", "title": "" }, { "docid": "2232f81da81ced942da548d0669bafc6", "text": "Quantitative prediction of quality properties (i.e. extra-functional properties such as performance, reliability, and cost) of software architectures during design supports a systematic software engineering approach. Designing architectures that exhibit a good trade-off between multiple quality criteria is hard, because even after a functional design has been created, many remaining degrees of freedom in the software architecture span a large, discontinuous design space. In current practice, software architects try to find solutions manually, which is time-consuming, can be error-prone and can lead to suboptimal designs. 
We propose an automated approach to search the design space for good solutions. Starting with a given initial architectural model, the approach iteratively modifies and evaluates architectural models. Our approach applies a multi-criteria genetic algorithm to software architectures modelled with the Palladio Component Model. It supports quantitative performance, reliability, and cost prediction and can be extended to other quantitative quality criteria of software architectures. We validate the applicability of our approach by applying it to an architecture model of a component-based business information system and analyse its quality criteria trade-offs by automatically investigating more than 1200 alternative design candidates.", "title": "" }, { "docid": "a1c82e67868ef3426896cdb541371d79", "text": "Executable packing is the most common technique used by computer virus writers to obfuscate malicious code and evade detection by anti-virus software. Universal unpackers have been proposed that can detect and extract encrypted code from packed executables, therefore potentially revealing hidden viruses that can then be detected by traditional signature-based anti-virus software. However, universal unpackers are computationally expensive and scanning large collections of executables looking for virus infections may take several hours or even days. In this paper we apply pattern recognition techniques for fast detection of packed executables. The objective is to efficiently and accurately distinguish between packed and non-packed executables, so that only executables detected as packed will be sent to an universal unpacker, thus saving a significant amount of processing time. We show that our system achieves very high detection accuracy of packed executables with a low average processing time.", "title": "" }, { "docid": "42e20232bae79a6251a14b03fa264721", "text": "BACKGROUND\nMeralgia paresthetica, a syndrome of pain and/or dysesthesia in the anterolateral thigh, is normally caused by an entrapment of the lateral femoral cutaneous nerve (LFCN) at the anterior superior iliac spine. In a few cases compression of the nerve in the retroperitoneum has been reported to mimic meralgia paresthetica.\n\n\nCASE DESCRIPTION\nA 67-year-old woman presented with a 5-year history of permanent paresthesia in the anterolateral thigh. Motor weakness was not detected. Electromyography showed a neurogenic lesion at the level of L3. Lumbar spine MRI detected a foraminal-extraforaminal disc herniation at L2/L3, which was extirpated via a lateral transmuscular approach. The patient was free of symptoms on the first postoperative day.\n\n\nCONCLUSION\nIn patients with meralgia paresthetica we emphasize a complete radiological investigation of the lumbar spine, including MRI, to exclude radicular compression by a disc herniation or a tumour at the level of L2 or L3.", "title": "" }, { "docid": "820768d9fc4e8f9fb4452e4aeeafd270", "text": "Lateral epicondylitis (Tennis Elbow) is the most frequent type of myotendinosis and can be responsible for substantial pain and loss of function of the affected limb. Muscular biomechanics characteristics and equipment are important in preventing the conditions. This article present on overview of the current knowledge on lateral Epicondylitis and focuses on Etiology, Diagnosis and treatment strategies, conservative treatment are discussed and recent surgical techniques are outlined. 
This information should assist health care practitioners who treat patients with this disorder.", "title": "" } ]
scidocsrr
951ec25871ccddb608474d94ab3778f6
Holistically Constrained Local Model: Going Beyond Frontal Poses for Facial Landmark Detection
[ { "docid": "2729af242339c8cbc51f49047ed9d049", "text": "We address the problem of interactive facial feature localization from a single image. Our goal is to obtain an accurate segmentation of facial features on high-resolution images under a variety of pose, expression, and lighting conditions. Although there has been significant work in facial feature localization, we are addressing a new application area, namely to facilitate intelligent high-quality editing of portraits, that brings requirements not met by existing methods. We propose an improvement to the Active Shape Model that allows for greater independence among the facial components and improves on the appearance fitting step by introducing a Viterbi optimization process that operates along the facial contours. Despite the improvements, we do not expect perfect results in all cases. We therefore introduce an interaction model whereby a user can efficiently guide the algorithm towards a precise solution. We introduce the Helen Facial Feature Dataset consisting of annotated portrait images gathered from Flickr that are more diverse and challenging than currently existing datasets. We present experiments that compare our automatic method to published results, and also a quantitative evaluation of the effectiveness of our interactive method.", "title": "" } ]
[ { "docid": "ef1e21b30f0065a78ec42def27b1a795", "text": "The rise of industry 4.0 and data-intensive manufacturing makes advanced process control (APC) applications more relevant than ever for process/production optimization, related costs reduction, and increased efficiency. One of the most important APC technologies is virtual metrology (VM). VM aims at exploiting information already available in the process/system under exam, to estimate quantities that are costly or impossible to measure. Machine learning (ML) approaches are the foremost choice to design VM solutions. A serious drawback of traditional ML methodologies is that they require a features extraction phase that generally limits the scalability and performance of VM solutions. Particularly, in presence of multi-dimensional data, the feature extraction process is based on heuristic approaches that may capture features with poor predictive power. In this paper, we exploit modern deep learning (DL)-based technologies that are able to automatically extract highly informative features from the data, providing more accurate and scalable VM solutions. In particular, we exploit DL architectures developed in the realm of computer vision to model data that have both spatial and time evolution. The proposed methodology is tested on a real industrial dataset related to etching, one of the most important semiconductor manufacturing processes. The dataset at hand contains optical emission spectroscopy data and it is paradigmatic of the feature extraction problem in VM under examination.", "title": "" }, { "docid": "07ef2766f22ac6c5b298e3f833cd88b5", "text": "A generic and robust approach for the real-time detection of people and vehicles from an Unmanned Aerial Vehicle (UAV) is an important goal within the framework of fully autonomous UAV deployment for aerial reconnaissance and surveillance. Here we present an approach for the automatic detection of vehicles based on using multiple trained cascaded Haar classifiers with secondary confirmation in thermal imagery. Additionally we present a related approach for people detection in thermal imagery based on a similar cascaded classification technique combining additional multivariate Gaussian shape matching. The results presented show the successful detection of vehicle and people under varying conditions in both isolated rural and cluttered urban environments with minimal false positive detection. Performance of the detector is optimized to reduce the overall false positive rate by aiming at the detection of each object of interest (vehicle/person) at least once in the environment (i.e. per search patter flight path) rather than every object in each image frame. Currently the detection rate for people is ~70% and cars ~80% although the overall episodic object detection rate for each flight pattern exceeds 90%.", "title": "" }, { "docid": "f9afdab6f3cac70d6680b02b32f37b49", "text": "Marx generators can produce high voltage pulses using multiple identical stages that operate at a fraction of the total output voltage, without the need for a step-up transformer that limits the pulse risetimes and lowers the efficiency of the system. Each Marx stage includes a capacitor or pulse forming network, and a high voltage switch. Typically, these switches are spark gaps resulting in Marx generators with low repetition rates and limited lifetimes. 
The development of economical, compact, high voltage, high di/dt, and fast turn-on solid-state switches make it easy to build economical, long lifetime, high voltage Marx generators capable of high pulse repetition rates. We have constructed a Marx generator using our 24 kV thyristor based switches, which are capable of conducting 14 kA peak currents with ringing discharges at >25 kA/mus rate of current risetimes. The switches have short turn-on delays, less than 200 ns, low timing jitters, and are triggered by a single 10 V isolated trigger pulse. This paper will include a description of a 4-stage solid-state Marx and triggering system, as well as show data from operation at 15 kV charging voltage. The Marx was used to drive a one-stage argon ion accelerator", "title": "" }, { "docid": "4a4598980985371c13fc770c5484cbbe", "text": "Lung carcinoma is often incurable and remains the leading cancer killer in both men and women. Recent evidence indicates that tumors contain a small population of cancer stem cells that are responsible for tumor maintenance and spreading. The identification of the tumorigenic population that sustains lung cancer may contribute significantly to the development of effective therapies. Here, we found that the tumorigenic cells in small cell and non-small cell lung cancer are a rare population of undifferentiated cells expressing CD133, an antigen present in the cell membrane of normal and cancer-primitive cells of the hematopoietic, neural, endothelial and epithelial lineages. Lung cancer CD133+ cells were able to grow indefinitely as tumor spheres in serum-free medium containing epidermal growth factor and basic fibroblast growth factor. The injection of 104 lung cancer CD133+ cells in immunocompromised mice readily generated tumor xenografts phenotypically identical to the original tumor. Upon differentiation, lung cancer CD133+ cells acquired the specific lineage markers, while loosing the tumorigenic potential together with CD133 expression. Thus, lung cancer contains a rare population of CD133+ cancer stem-like cells able to self-renew and generates an unlimited progeny of non-tumorigenic cells. Molecular and functional characterization of such a tumorigenic population may provide valuable information to be exploited in the clinical setting.", "title": "" }, { "docid": "5e25a133af30d08844eca800d82379a3", "text": "This study evaluates the effects of ketamine on healthy and schizophrenic volunteers (SVs) in an effort to define the detailed behavioral effects of the drug in a psychosis model. We compared the effects of ketamine on normal and SVs to establish the comparability of their responses and the extent to which normal subjects might be used experimentally as a model. Eighteen normal volunteers (NVs) and 17 SVs participated in ketamine interviews. Some (n = 7 NVs; n = 9 SVs) had four sessions with a 0.1–0.5 mg/kg of ketamine and a placebo; others (n = 11 NVs; n = 8 SVs) had two sessions with one dose of ketamine (0.3 mg/kg) and a placebo. Experienced research clinicians used the BPRS to assess any change in mental status over time and documented the specifics in a timely way. In both volunteer groups, ketamine induced a dose-related, short (<30 min) increase in psychotic symptoms. The scores of NVs increased on both the Brief Psychiatric Rating Scale (BPRS) psychosis subscale (p = .0001) and the BPRS withdrawal subscale (p = .0001), whereas SVs experienced an increase only in positive symptoms (p = .0001). 
Seventy percent of the patients reported an increase (i.e., exacerbation) of previously experienced positive symptoms. Normal and schizophrenic groups differed only on the BPRS withdrawal score. The magnitude of ketamine-induced changes in positive symptoms was similar, although the psychosis baseline differed, and the dose-response profiles over time were superimposable across the two populations. The similarity between ketamine-induced symptoms in SVs and their own positive symptoms suggests that ketamine provides a unique model of psychosis in human volunteers. The data suggest that the phencyclidine (PCP) model of schizophrenia maybe a more valid human psychosis/schizophrenia drug model than the amphetamine model, with a broader range of psychotic symptoms. This study indicates that NVs could be used for many informative experimental psychosis studies involving ketamine interviews.", "title": "" }, { "docid": "72a2d1ade2a3f0161014bc940a714c82", "text": "Neurosurgeons are faced with the challenge of learning, planning, and performing increasingly complex surgical procedures in which there is little room for error. With improvements in computational power and advances in visual and haptic display technologies, virtual surgical environments can now offer potential benefits for surgical training, planning, and rehearsal in a safe, simulated setting. This article introduces the various classes of surgical simulators and their respective purposes through a brief survey of representative simulation systems in the context of neurosurgery. Many technical challenges currently limit the application of virtual surgical environments. Although we cannot yet expect a digital patient to be indistinguishable from reality, new developments in computational methods and related technology bring us closer every day. We recognize that the design and implementation of an immersive virtual reality surgical simulator require expert knowledge from many disciplines. This article highlights a selection of recent developments in research areas related to virtual reality simulation, including anatomic modeling, computer graphics and visualization, haptics, and physics simulation, and discusses their implication for the simulation of neurosurgery.", "title": "" }, { "docid": "2aa7f856a2967bd62da9edf1496633e6", "text": "The paper describes a comprehensive system - a digital phonocardiogram (PCG) analyzer, which can acquire heart sounds and electrocardiogram (ECG) in parallel, synchronize the display and play of heart sounds and unite auscultation and check phonocardiogram. The hardware which controlled by MCU C8051 F340, acquires heart sounds and ECG synchronously, and then sends them to indicators, respectively. Heat sounds are displayed and played simultaneously by controlling the moment of writing to indicator and sound output device. In clinical test, heat sound can be successfully located with ECG and real-time played.", "title": "" }, { "docid": "de04d3598687b34b877d744956ca4bcd", "text": "We investigate the reputational impact of financial fraud for outside directors based on a sample of firms facing shareholder class action lawsuits. Following a financial fraud lawsuit, outside directors do not face abnormal turnover on the board of the sued firm but experience a significant decline in other board seats held. The decline in other directorships is greater for more severe cases of fraud and when the outside director bears greater responsibility for monitoring fraud. 
Interlocked firms that share directors with the sued firm exhibit valuation declines at the lawsuit filing. When fraud-affiliated directors depart from boards of interlocked firms, these firms experience a significant increase in valuation.", "title": "" }, { "docid": "03ba329de93f763ff6f0a8c4c6e18056", "text": "Nowadays, with the availability of massive amount of trade data collected, the dynamics of the financial markets pose both a challenge and an opportunity for high frequency traders. In order to take advantage of the rapid, subtle movement of assets in High Frequency Trading (HFT), an automatic algorithm to analyze and detect patterns of price change based on transaction records must be available. The multichannel, time-series representation of financial data naturally suggests tensor-based learning algorithms. In this work, we investigate the effectiveness of two multilinear methods for the mid-price prediction problem against other existing methods. The experiments in a large scale dataset which contains more than 4 millions limit orders show that by utilizing tensor representation, multilinear models outperform vector-based approaches and other competing ones.", "title": "" }, { "docid": "50ec4623e6b7c4bf6d9207474e16ae47", "text": "We resolve a basic problem regarding subspace distances that has arisen considerably often in applications: How could one define a notion of distance between two linear subspaces of different dimensions in a way that generalizes the usual Grassmann distance between equidimensional subspaces? We show that a natural solution is given by the distance of a point to a Schubert variety within the Grassmannian. Aside from reducing to the usual Grassmann distance when the subspaces are equidimensional, this distance is intrinsic and does not depend on any embedding into a larger ambient space. Furthermore, it can be written down as concrete expressions involving principal angles, and is efficiently computable in numerically stable ways. Our results are also largely independent of the Grassmann distance — if desired, it may be substituted by any other common distances between subspaces. Central to our approach to these problems is a concrete algebraic geometric view of the Grassmannian that parallels the differential geometric perspective that is now well-established in applied and computational mathematics. A secondary goal of this article is to demonstrate that the basic algebraic geometry of Grassmannian can be just as accessible and useful to practitioners.", "title": "" }, { "docid": "863202feb1410b177c6bb10ccc1fa43d", "text": "Multimedia retrieval plays an indispensable role in big data utilization. Past efforts mainly focused on single-media retrieval. However, the requirements of users are highly flexible, such as retrieving the relevant audio clips with one query of image. So challenges stemming from the “media gap,” which means that representations of different media types are inconsistent, have attracted increasing attention. Cross-media retrieval is designed for the scenarios where the queries and retrieval results are of different media types. As a relatively new research topic, its concepts, methodologies, and benchmarks are still not clear in the literature. To address these issues, we review more than 100 references, give an overview including the concepts, methodologies, major challenges, and open issues, as well as build up the benchmarks, including data sets and experimental results. 
Researchers can directly adopt the benchmarks to promptly evaluate their proposed methods. This will help them to focus on algorithm design, rather than the time-consuming compared methods and results. It is noted that we have constructed a new data set XMedia, which is the first publicly available data set with up to five media types (text, image, video, audio, and 3-D model). We believe this overview will attract more researchers to focus on cross-media retrieval and be helpful to them.", "title": "" }, { "docid": "7ae137752af46ecd4bf8957691069779", "text": "We measured contrast detection thresholds for a foveal Gabor signal flanked by two high contrast Gabor signals. The spatially localized target and masks enabled investigation of space dependent lateral interactions between foveal and neighboring spatial channels. Our data show a suppressive region extending to a radius of two wavelengths, in which the presence of the masking signals have the effect of increasing target threshold. Beyond this range a much larger facilitatory region (up to a distance of ten wavelengths) is indicated, in which contrast thresholds were found to decrease by up to a factor of two. The interactions between the foveal target and the flanking Gabor signals are spatial-frequency and orientation specific in both regions, but less specific in the suppression region.", "title": "" }, { "docid": "563dd9e9a2997606c8294b58b75011e5", "text": "In ship, submarine, or airborne satellite communication or multi-function radar operations, dual-band phased array antennas with dual-linear or circular polarizations are needed. We present a dual-band/dual-polarization phased array design using a interleaved cross-dipole radiator and a cavity-backed disk radiator in the same lattice structure. The low band radiator is a disk radiator sitting on top of a dielectric puck in a cavity and the high band radiator is the cross dipole element printed on a low-K dielectric layer above the disk-cavity plane. Dual-linear or circular polarizations can be achieved using these two elements to cover two separate bands with about a 2:1 ratio. This is a relatively low profile, compact and rigid array. Good impedance match for both bands can be obtained over a wide scan volume.", "title": "" }, { "docid": "c049f188b31bbc482e16d22a8061abfa", "text": "SDN deployments rely on switches that come from various vendors and differ in terms of performance and available features. Understanding these differences and performance characteristics is essential for ensuring successful deployments. In this paper we measure, report, and explain the performance characteristics of flow table updates in three hardware OpenFlow switches. Our results can help controller developers to make their programs efficient. Further, we also highlight differences between the OpenFlow specification and its implementations, that if ignored, pose a serious threat to network security and correctness.", "title": "" }, { "docid": "bc8cced1600b5dd1169b7107245ffa0a", "text": "A facile, ultrasensitive, and selective sensor strip utilizing 4-amino-3-penten-2-one (fluoral-p) functionalized electrospun polyacrylonitrile (PAN) (PAN/fluoral-p) nanofibrous membranes has been successfully developed for naked-eye colorimetric assay of formaldehyde. 
The sensor strips presented a significant reflectance decreasing band at 417 nm which induced a vivid color change from white to yellow and achieved a much lower naked-eye detection limit of 40 ppb compared with the World Health Organization standard (80 ppb). Based on the specific Hantzsch reaction between fluoral-p and formaldehyde, the as-prepared PAN/fluoral-p membranes are highly selective to formaldehyde with little interference from other volatile organic compounds and the proposed mechanism of this reaction is discussed carefully. Moreover, the colorimetric responses were visually quantitative using UV-vis spectra and the color difference calculated from L*, a*, b* values. Furthermore, due to the extremely large surface area and high porosity of the as-spun PAN nanofibrous membranes, the sensitivity of the nanofibrous membranes-based strips is much higher than traditional filter paper-based ones. Hence, such promising portable colorimetric sensor strips could not only potentially allow for assaying gaseous formaldehyde, but also facilitate the design and development of a novel colorimetric sensing system based on nanofibrous membranes.", "title": "" }, { "docid": "eb0a907ad08990b0fe5e2374079cf395", "text": "We examine whether tolerance for failure spurs corporate innovation based on a sample of venture capital (VC) backed IPO firms. We develop a novel measure of VC investors’ failure tolerance by examining their tendency to continue investing in a venture conditional on the venture not meeting milestones. We find that IPO firms backed by more failure-tolerant VC investors are significantly more innovative. A rich set of empirical tests shows that this result is not driven by the endogenous matching between failure-tolerant VCs and startups with high exante innovation potentials. Further, we find that the marginal impact of VC failure tolerance on startup innovation varies significantly in the cross section. Being financed by a failure-tolerant VC is much more important for ventures that are subject to high failure risk. Finally, we examine the determinants of the cross-sectional heterogeneity in VC failure tolerance. We find that both capital constraints and career concerns can negatively distort VC failure tolerance. We also show that younger and less experienced VCs are more exposed to these distortions, making them less failure tolerant than more established VCs.", "title": "" }, { "docid": "4f9dd51d77b6a7008b213042a825c748", "text": "A crucial capability of real-world intelligent agents is their ability to plan a sequence of actions to achieve their goals in the visual world. In this work, we address the problem of visual semantic planning: the task of predicting a sequence of actions from visual observations that transform a dynamic environment from an initial state to a goal state. Doing so entails knowledge about objects and their affordances, as well as actions and their preconditions and effects. We propose learning these through interacting with a visual and dynamic environment. Our proposed solution involves bootstrapping reinforcement learning with imitation learning. To ensure cross task generalization, we develop a deep predictive model based on successor representations. Our experimental results show near optimal results across a wide range of tasks in the challenging THOR environment.", "title": "" }, { "docid": "854eab1455c6d49b67dc9d0f4864409f", "text": "We investigate the generalizability of deep learning based on the sensitivity to input perturbation. 
We hypothesize that the high sensitivity to the perturbation of data degrades the performance on it. To reduce the sensitivity to perturbation, we propose a simple and effective regularization method, referred to as spectral norm regularization, which penalizes the high spectral norm of weight matrices in neural networks. We provide supportive evidence for the abovementioned hypothesis by experimentally confirming that the models trained using spectral norm regularization exhibit better generalizability than other baseline methods.", "title": "" }, { "docid": "7bfbcf62f9ff94e80913c73e069ace26", "text": "This paper presents an online highly accurate system for automatic number plate recognition (ANPR) that can be used as a basis for many real-world ITS applications. The system is designed to deal with unclear vehicle plates, variations in weather and lighting conditions, different traffic situations, and high-speed vehicles. This paper addresses various issues by presenting proper hardware platforms along with real-time, robust, and innovative algorithms. We have collected huge and highly inclusive data sets of Persian license plates for evaluations, comparisons, and improvement of various involved algorithms. The data sets include images that were captured from crossroads, streets, and highways, in day and night, various weather conditions, and different plate clarities. Over these data sets, our system achieves 98.7%, 99.2%, and 97.6% accuracies for plate detection, character segmentation, and plate recognition, respectively. The false alarm rate in plate detection is less than 0.5%. The overall accuracy on the dirty plates portion of our data sets is 91.4%. Our ANPR system has been installed in several locations and has been tested extensively for more than a year. The proposed algorithms for each part of the system are highly robust to lighting changes, size variations, plate clarity, and plate skewness. The system is also independent of the number of plates in captured images. This system has been also tested on three other Iranian data sets and has achieved 100% accuracy in both detection and recognition parts. To show that our ANPR is not language dependent, we have tested our system on available English plates data set and achieved 97% overall accuracy.", "title": "" }, { "docid": "8b252e706868440162e50a2c23255cb3", "text": "Currently, most top-performing text detection networks tend to employ fixed-size anchor boxes to guide the search for text instances. Œey usually rely on a large amount of anchors with different scales to discover texts in scene images, thus leading to high computational cost. In this paper, we propose an end-to-end boxbased text detector with scale-adaptive anchors, which can dynamically adjust the scales of anchors according to the sizes of underlying texts by introducing an additional scale regression layer. Œe proposed scale-adaptive anchors allow us to use a few number of anchors to handle multi-scale texts and therefore significantly improve the computational efficiency. Moreover, compared to discrete scales used in previous methods, the learned continuous scales are more reliable, especially for small texts detection. Additionally, we propose Anchor convolution to beŠer exploit necessary feature information by dynamically adjusting the sizes of receptive fields according to the learned scales. 
Extensive experiments demonstrate that the proposed detector is fast, taking only 0.28 seconds per image, while outperforming most state-of-the-art methods in accuracy.", "title": "" } ]
scidocsrr
f99d471004ef731e5a4f8437981eb28b
Multi-scale approach from mechatronic to Cyber-Physical Systems for the design of manufacturing systems
[ { "docid": "f538089a72bcc5f6f9f944676b9f199d", "text": "This paper focuses on the challenges of modeling cyber-physical systems (CPSs) that arise from the intrinsic heterogeneity, concurrency, and sensitivity to timing of such systems. It uses a portion of an aircraft vehicle management system (VMS), specifically the fuel management subsystem, to illustrate the challenges, and then discusses technologies that at least partially address the challenges. Specific technologies described include hybrid system modeling and simulation, concurrent and heterogeneous models of computation, the use of domain-specific ontologies to enhance modularity, and the joint modeling of functionality and implementation architectures.", "title": "" } ]
[ { "docid": "b4833563159839519aaaf38b011e7e10", "text": "In the past few years, some nonlinear dimensionality reduction (NLDR) or nonlinear manifold learning methods have aroused a great deal of interest in the machine learning community. These methods are promising in that they can automatically discover the low-dimensional nonlinear manifold in a high-dimensional data space and then embed the data points into a low-dimensional embedding space, using tractable linear algebraic techniques that are easy to implement and are not prone to local minima. Despite their appealing properties, these NLDR methods are not robust against outliers in the data, yet so far very little has been done to address the robustness problem. In this paper, we address this problem in the context of an NLDR method called locally linear embedding (LLE). Based on robust estimation techniques, we propose an approach to make LLE more robust. We refer to this approach as robust locally linear embedding (RLLE). We also present several specific methods for realizing this general RLLE approach. Experimental results on both synthetic and real-world data show that RLLE is very robust against outliers. 2005 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "59eaa9f4967abdc1c863f8fb256ae966", "text": "CONTEXT\nThe projected expansion in the next several decades of the elderly population at highest risk for Parkinson disease (PD) makes identification of factors that promote or prevent the disease an important goal.\n\n\nOBJECTIVE\nTo explore the association of coffee and dietary caffeine intake with risk of PD.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nData were analyzed from 30 years of follow-up of 8004 Japanese-American men (aged 45-68 years) enrolled in the prospective longitudinal Honolulu Heart Program between 1965 and 1968.\n\n\nMAIN OUTCOME MEASURE\nIncident PD, by amount of coffee intake (measured at study enrollment and 6-year follow-up) and by total dietary caffeine intake (measured at enrollment).\n\n\nRESULTS\nDuring follow-up, 102 men were identified as having PD. Age-adjusted incidence of PD declined consistently with increased amounts of coffee intake, from 10.4 per 10,000 person-years in men who drank no coffee to 1.9 per 10,000 person-years in men who drank at least 28 oz/d (P<.001 for trend). Similar relationships were observed with total caffeine intake (P<.001 for trend) and caffeine from non-coffee sources (P=.03 for trend). Consumption of increasing amounts of coffee was also associated with lower risk of PD in men who were never, past, and current smokers at baseline (P=.049, P=.22, and P=.02, respectively, for trend). Other nutrients in coffee, including niacin, were unrelated to PD incidence. The relationship between caffeine and PD was unaltered by intake of milk and sugar.\n\n\nCONCLUSIONS\nOur findings indicate that higher coffee and caffeine intake is associated with a significantly lower incidence of PD. This effect appears to be independent of smoking. The data suggest that the mechanism is related to caffeine intake and not to other nutrients contained in coffee. JAMA. 2000;283:2674-2679.", "title": "" }, { "docid": "48bc9441aceba3a67a5f9d4d88755d63", "text": "We present a proof of concept that machine learning techniques can be used to predict the properties of CNOHF energetic molecules from their molecular structures. We focus on a small but diverse dataset consisting of 109 molecular structures spread across ten compound classes. 
Up until now, candidate molecules for energetic materials have been screened using predictions from expensive quantum simulations and thermochemical codes. We present a comprehensive comparison of machine learning models and several molecular featurization methods - sum over bonds, custom descriptors, Coulomb matrices, Bag of Bonds, and fingerprints. The best featurization was sum over bonds (bond counting), and the best model was kernel ridge regression. Despite having a small data set, we obtain acceptable errors and Pearson correlations for the prediction of detonation pressure, detonation velocity, explosive energy, heat of formation, density, and other properties out of sample. By including another dataset with ≈300 additional molecules in our training we show how the error can be pushed lower, although the convergence with number of molecules is slow. Our work paves the way for future applications of machine learning in this domain, including automated lead generation and interpreting machine learning models to obtain novel chemical insights.", "title": "" }, { "docid": "6e92e6eda1bb54dffcdf9c165a487e29", "text": "Balanced Scorecard is considered as the world widely used Performance Management System by organizations. Around 57% organizations of the world are using the Balanced Scorecard tool for improving their Organizational Performance [1]. This technique of performance evaluation and management was coined by the Kaplan and Norton in 1992. From that date to 2012 a lot of work has been done by the academicians and practitioner on the Balanced Scorecard. This study is summarizing the major studies conducted on Balanced Scorecard from 1992 to 2012. Summing up all the criticism and appreciations on Balanced Scorecard, the study is suggesting some guidelines for improving the Balanced Scorecard in the light of previous researches conducted on Balanced", "title": "" }, { "docid": "dab84197dec153309bb45368ab730b12", "text": "Simple slopes, regions of significance, and confidence bands are commonly used to evaluate interactions in multiple linear regression (MLR) models, and the use of these techniques has recently been extended to multilevel or hierarchical linear modeling (HLM) and latent curve analysis (LCA). However, conducting these tests and plotting the conditional relations is often a tedious and error-prone task. This article provides an overview of methods used to probe interaction effects and describes a unified collection of freely available online resources that researchers can use to obtain significance tests for simple slopes, compute regions of significance, and obtain confidence bands for simple slopes across the range of the moderator in the MLR, HLM, and LCA contexts. Plotting capabilities are also provided.", "title": "" }, { "docid": "ee862e43dc73654abe1616858d8cd9d8", "text": "From a single image, humans are able to perceive the full 3D shape of an object by exploiting learned shape priors from everyday life. Contemporary single-image 3D reconstruction algorithms aim to solve this task in a similar fashion, but often end up with priors that are highly biased by training classes. Here we present an algorithm, Generalizable Reconstruction (GenRe), designed to capture more generic, class-agnostic shape priors. 
We achieve this with an inference network and training procedure that combine 2.5D representations of visible surfaces (depth and silhouette), spherical shape representations of both visible and non-visible surfaces, and 3D voxel-based representations, in a principled manner that exploits the causal structure of how 3D shapes give rise to 2D images. Experiments demonstrate that GenRe performs well on single-view shape reconstruction, and generalizes to diverse novel objects from categories not seen during training.", "title": "" }, { "docid": "49bd1cdbeea10f39a2b34cfa5baac0ef", "text": "Recently, image inpainting task has revived with the help of deep learning techniques. Deep neural networks, especially the generative adversarial networks~(GANs) make it possible to recover the missing details in images. Due to the lack of sufficient context information, most existing methods fail to get satisfactory inpainting results. This work investigates a more challenging problem, e.g., the newly-emerging semantic image inpainting - a task to fill in large holes in natural images. In this paper, we propose an end-to-end framework named progressive generative networks~(PGN), which regards the semantic image inpainting task as a curriculum learning problem. Specifically, we divide the hole filling process into several different phases and each phase aims to finish a course of the entire curriculum. After that, an LSTM framework is used to string all the phases together. By introducing this learning strategy, our approach is able to progressively shrink the large corrupted regions in natural images and yields promising inpainting results. Moreover, the proposed approach is quite fast to evaluate as the entire hole filling is performed in a single forward pass. Extensive experiments on Paris Street View and ImageNet dataset clearly demonstrate the superiority of our approach. Code for our models is available at https://github.com/crashmoon/Progressive-Generative-Networks.", "title": "" }, { "docid": "946a5835970a54c748031f2c9945a661", "text": "There is a general move in the aerospace industry to increase the amount of electrically powered equipment on future aircraft. This is generally referred to as the \"more electric aircraft\" and brings on a number of technical challenges that need to be addressed and overcome. Recent advancements in power electronics technology are enabling new systems to be developed and applied to aerospace applications. The growing trend is to connect the AC generator to the aircraft engine via a direct connection or a fixed ratio transmission thus, resulting in the generator providing a variable frequency supply. This move offers benefits to the aircraft such as reducing the weight and improving the reliability. Many aircraft power systems are now operating with a variable frequency over a typical range of 350 Hz to 800 Hz which varies with the engine speed[1,2]. This paper presents the results from a simple scheme for an adaptive control algorithm which could be suitable for use with an electric actuator (or other) aircraft load. The design of this system poses significant challenges due to the nature of the load range and supply frequency variation and requires many features such as: 1) Small input current harmonics to minimize losses., 2) Minimum size and weight to maximize portability and power density. 
Details will be given on the design methodology and simulation results obtained.", "title": "" }, { "docid": "e43eaf919d7bb920177c164c5eeddca2", "text": "In today's era AMBA (advanced microcontroller bus architecture) specifications have gone far beyond the Microcontrollers. In this paper, AMBA (Advanced Microcontroller Bus Architecture) ASB APB (Advanced system bus - Advanced Peripheral Bus) is implemented. The goal of the proposed paper is to synthesis, simulate complex interface between AMBA ASB and APB. The methodology adopted for the proposed paper is Verilog language with finite state machine models designed in ModelSim Version 10.3 and Xilinx-ISE design suite, version 13.4 is used to extract synthesis, design utilization summary and power reports. For the implementation APB Bridge, arbiter and decoder are designed. In AMBA ASB APB module, master gets into contact with APB bus. Arbiter determines master's status and priority and then, starts communicating with the bus. For selecting a bus slave, decoder uses the accurate address lines and an acknowledgement is given back to the bus master by the slave. An RTL view and an extracted design summary of AMBA ASB APB module at system on chip are shown in result section of the paper. Higher design complexities of SoCs architectures introduce the power consumption into picture. The various power components contribute in the power consumptions which are extracted by the power reports. So, power reports generate a better understanding of the power utilization to the designers. These are clocks total power which consumes of 0.66 mW, hierarchy total power which consumes of 1.05 mW, hierarchy total logical power which consumes of 0.30 mW and hierarchy total signal power which consumes of 0.74 mW powers in the proposed design. Graph is also plotted for clear understanding of the breakdown of powers.", "title": "" }, { "docid": "d281493a7e5da39ef9b4b3378fa6ad69", "text": "One underlying assumption of the conventional multi-view learning algorithms is that all examples can be successfully observed on all the views. However, due to various failures or faults in collecting and pre-processing the data on different views, we are more likely to be faced with an incomplete-view setting, where an example could be missing its representation on one view (i.e., missing view) or could be only partially observed on that view (i.e., missing variables). Low-rank assumption used to be effective for recovering the random missing variables of features, but it is disabled by concentrated missing variables and has no effect on missing views. This paper suggests that the key to handling the incomplete-view problem is to exploit the connections between multiple views, enabling the incomplete views to be restored with the help of the complete views. We propose an effective algorithm to accomplish multi-view learning with incomplete views by assuming that different views are generated from a shared subspace. To handle the large-scale problem and obtain fast convergence, we investigate a successive over-relaxation method to solve the objective function. Convergence of the optimization technique is theoretically analyzed. 
The experimental results on toy data and real-world data sets suggest that studying the incomplete-view problem in multi-view learning is significant and that the proposed algorithm can effectively handle the incomplete views in different applications.", "title": "" }, { "docid": "624d9a666941c22f9ad1bbd7d6766ed4", "text": "A new benchmarking of beyond-CMOS exploratory devices for logic integrated circuits is presented. It includes new devices with ferroelectric, straintronic, and orbitronic computational state variables. Standby power treatment and memory circuits are included. The set of circuits is extended to sequential logic, including arithmetic logic units. The conclusion that tunneling field-effect transistors are the leading low-power option is reinforced. Ferroelectric transistors may present an attractive option with faster switching delay. Magnetoelectric effects are more energy efficient than spin transfer torque, but the switching speed of magnetization is a limitation. This article enables a better focus on promising beyond-CMOS exploratory devices.", "title": "" }, { "docid": "c14b9a0092ed8ba6d59e741422dfa586", "text": "An elaboration on (Das et al., 2010), this report formalizes frame-semantic parsing as a structure prediction problem and describes an implemented parser that transforms an English sentence into a frame-semantic representation. SEMAFOR 1.0 finds words that evoke FrameNet frames, selects frames for them, and locates the arguments for each frame. The system uses two feature-based, discriminative probabilistic (log-linear) models, one with latent variables to permit disambiguation of new predicate words. The parser is demonstrated to significantly outperform previously published results and is released for public use.", "title": "" }, { "docid": "5b149ce093d0e546a3e99f92ef1608a0", "text": "Smartphones have been becoming ubiquitous and mobile users are increasingly relying on them to store and handle personal information. However, recent studies also reveal the disturbing fact that users’ personal information is put at risk by (rogue) smartphone applications. Existing solutions exhibit limitations in their capabilities in taming these privacy-violating smartphone applications. In this paper, we argue for the need of a new privacy mode in smartphones. The privacy mode can empower users to flexibly control in a fine-grained manner what kinds of personal information will be accessible to an application. Also, the granted access can be dynamically adjusted at runtime in a fine-grained manner to better suit a user’s needs in various scenarios (e.g., in a different time or location). We have developed a system called TISSA that implements such a privacy mode on Android. The evaluation with more than a dozen of information-leaking Android applications demonstrates its effectiveness and practicality. Furthermore, our evaluation shows that TISSA introduces negligible performance overhead.", "title": "" }, { "docid": "3c33528735b53a4f319ce4681527c163", "text": "Within the past two years, important advances have been made in modeling credit risk at the portfolio level. Practitioners and policy makers have invested in implementing and exploring a variety of new models individually. Less progress has been made, however, with comparative analyses. Direct comparison often is not straightforward, because the different models may be presented within rather different mathematical frameworks. 
This paper offers a comparative anatomy of two especially influential benchmarks for credit risk models, J.P. Morgan’s CreditMetrics and Credit Suisse Financial Product’s CreditRisk+. We show that, despite differences on the surface, the underlying mathematical structures are similar. The structural parallels provide intuition for the relationship between the two models and allow us to describe quite precisely where the models differ in functional form, distributional assumptions, and reliance on approximation formulae. We then design simulation exercises which evaluate the effect of each of these differences individually. JEL Codes: G31, C15, G11 ∗The views expressed herein are my own and do not necessarily reflect those of the Board of Governors or its staff. I would like to thank David Jones for drawing my attention to this issue, and for his helpful comments. I am also grateful to Mark Carey for data and advice useful in calibration of the models, and to Chris Finger and Tom Wilde for helpful comments. Please address correspondence to the author at Division of Research and Statistics, Mail Stop 153, Federal Reserve Board, Washington, DC 20551, USA. Phone: (202)452-3705. Fax: (202)452-5295. Email: 〈mgordy@frb.gov〉. Over the past decade, financial institutions have developed and implemented a variety of sophisticated models of value-at-risk for market risk in trading portfolios. These models have gained acceptance not only among senior bank managers, but also in amendments to the international bank regulatory framework. Much more recently, important advances have been made in modeling credit risk in lending portfolios. The new models are designed to quantify credit risk on a portfolio basis, and thus have application in control of risk concentration, evaluation of return on capital at the customer level, and more active management of credit portfolios. Future generations of today’s models may one day become the foundation for measurement of regulatory capital adequacy. Two of the models, J.P. Morgan’s CreditMetrics and Credit Suisse Financial Product’s CreditRisk+, have been released freely to the public since 1997 and have quickly become influential benchmarks. Practitioners and policy makers have invested in implementing and exploring each of the models individually, but have made less progress with comparative analyses. The two models are intended to measure the same risks, but impose different restrictions and distributional assumptions, and suggest different techniques for calibration and solution. Thus, given the same portfolio of credit exposures, the two models will, in general, yield differing evaluations of credit risk. Determining which features of the models account for differences in output would allow us a better understanding of the sensitivity of the models to the particular assumptions they employ. Unfortunately, direct comparison of the models is not straightforward, because the two models are presented within rather different mathematical frameworks. The CreditMetrics model is familiar to econometricians as an ordered probit model. Credit events are driven by movements in underlying unobserved latent variables. The latent variables are assumed to depend on external “risk factors.” Common dependence on the same risk factors gives rise to correlations in credit events across obligors. The CreditRisk+ model is based instead on insurance industry models of event risk. Instead of a latent variable, each obligor has a default probability. 
The default probabilities are not constant over time, but rather increase or decrease in response to background macroeconomic factors. To the extent that two obligors are sensitive to the same set of background factors, their default probabilities will move together. These co-movements in probability give rise to correlations in defaults. CreditMetrics and CreditRisk+ may serve essentially the same function, but they appear to be constructed quite differently. This paper offers a comparative anatomy of CreditMetrics and CreditRisk+. We show that, despite differences on the surface, the underlying mathematical structures are similar. The structural parallels provide intuition for the relationship between the two models and allow us to describe quite precisely where the models differ in functional form, distributional assumptions, and reliance on approximation formulae. We can then design simulation exercises which evaluate the effect of these differences individually. We proceed as follows. Section 1 presents a summary of the CreditRisk+ model, and introduces a restricted version of CreditMetrics. The restrictions are imposed to facilitate direct comparison of CreditMetrics and CreditRisk+. While some of the richness of the full CreditMetrics implementation is sacrificed, the essential mathematical characteristics of the model are preserved. Our", "title": "" }, { "docid": "c609033191318819992f4c815e255486", "text": "This paper proposes a novel approach for Quality of Experience (QoE) driven cross-layer optimization for wireless video transmission. We formulate the cross-layer optimization problem with a constraint on the temporal fluctuation of the video quality. Our objective is to minimize the temporal change of the video quality as perceivable quality fluctuations negatively affect the overall quality of experience. The proposed QoE scheme jointly optimizes the application layer and the lower layers of a wireless protocol stack. It allocates network resources and performs rate adaptation such that the fluctuations lie within the range of unperceivable changes. We determine corresponding perception thresholds via extensive subjective tests and evaluate the proposed scheme using an OPNET High Speed Downlink Packet Access (HSDPA) emulator. Our simulation results show that the proposed approach leads to a noticeable improvement of overall user satisfaction for the provided video delivery service when compared to state-of-the-art approaches.", "title": "" }, { "docid": "261f146b67fd8e13d1ad8c9f6f5a8845", "text": "Vision based automatic lane tracking system requires information such as lane markings, road curvature and leading vehicle be detected before capturing the next image frame. Placing a camera on the vehicle dashboard and capturing the forward view results in a perspective view of the road image. The perspective view of the captured image somehow distorts the actual shape of the road, which involves the width, height, and depth. Respectively, these parameters represent the x, y and z components. As such, the image needs to go through a pre-processing stage to remedy the distortion using a transformation technique known as an inverse perspective mapping (IPM). This paper outlines the procedures involved.", "title": "" }, { "docid": "b4dd2dd381fc00419172d87ef113a422", "text": "Automatic on-line signature verification is an intriguing intellectual challenge with many practical applications. 
I review the context of this problem and then describe my own approach to it, which breaks with tradition by relying primarily on the detailed shape of a signature for its automatic verification, rather than relying primarily on the pen dynamics during the production of the signature. I propose a robust, reliable, and elastic localshape-based model for handwritten on-line curves; this model is generated by first parameterizing each on-line curve over its normalized arc-length and then representing along the length of the curve, in a moving coordinate frame, measures of the curve within a sliding window that are analogous to the position of the center of mass, the torque exerted by a force, and the moments of inertia of a mass distribution about its center of mass. Further, I suggest the weighted and biased harmonic mean as a graceful mechanism of combining errors from multiple models of which at least one model is applicable but not necessarily more than one model is applicable, recommending that each signature be represented by multiple models, these models, perhaps, local and global, shape based and dynamics based. Finally, I outline a signature-verification algorithm that I have implemented and tested successfully both on databases and in live experiments.", "title": "" }, { "docid": "e0924a94e0bf614c9c53259f69ff7909", "text": "In this paper, a unified approach is presented to transfer learning that addresses several source and target domain labelspace and annotation assumptions with a single model. It is particularly effective in handling a challenging case, where source and target label-spaces are disjoint, and outperforms alternatives in both unsupervised and semi-supervised settings. The key ingredient is a common representation termed Common Factorised Space. It is shared between source and target domains, and trained with an unsupervised factorisation loss and a graph-based loss. With a wide range of experiments, we demonstrate the flexibility, relevance and efficacy of our method, both in the challenging cases with disjoint label spaces, and in the more conventional cases such as unsupervised domain adaptation, where the source and target domains share the same label-sets.", "title": "" }, { "docid": "afc5a67dc415ac4039f8120876a40b7f", "text": "One of the implications of connectionist work on language is that the \"language mechanism\" is not essentially different from mechanisms that perform other cognitive tasks. This leads naturally to the notion that language learning must be understood in the context of learning about the world more generally. It is difficult to think of how to constrain a modeling exercise that adopts this perspective, for such a system requires an encoding not only of some interesting part of language but also of the corresponding part of \"the world.\" If one were to undertake this rather daunting task, a natural place to start would be with a subdomain of language in which the important semantic contrasts can be expressed in a two-dimensional visual world and to use low-resolution computer graphics for the encoding of this world. In The Human Semantic Potential, Regier takes precisely this approach, by focusing on the domain of closed-class spatial markers (e.g., prepositions such as above, below, left, right, in, and on in English). 
Bit map \"movies\" of a trajector that moves in relation to a landmark for several movie frames, or that simply exists, are used as the input to a connectionist learning model; the outputs are labels corresponding to prepositions. The scientific game plan is to construct a model that learns the semantics of spatial markers on the basis of exposure to examples in a given language; the learnable sets of markers under the model then constitute a typology of possible sets of spatial markers in languages of the world. Thus the model makes cross-linguistic typological predictions. Moreover, since the model generalizes from a subset of examples in each language to an assessment of the aptness of each word in all representable conditions, it makes language-internal predictions about the meanings that particular spatial markers can have. Given this framework, it seems reasonable to assess the project in terms of how much we learn about these two domains of prediction from it. In the former case-cross-linguistic typological prediction--the results are only mildly interesting: only a very small subset of the range of predictions that the model makes is revealed, apparently because of the assumed unanalyzability of the connectionist network at the core of the model. In the case of the second task--generalization from positive examples-the results are more appealing: Regier shows how a linguistically-motivated variation on a standard connectionist learning algorithm provides an effective mechanism for", "title": "" } ]
scidocsrr
3eb074ee433d02150836f12207859b8d
Hybrid Euclidean-and-Riemannian Metric Learning for Image Set Classification
[ { "docid": "28efe3b5fe479a1e95029f122f5b62f3", "text": "Most of the current metric learning methods are proposed for point-to-point distance (PPD) based classification. In many computer vision tasks, however, we need to measure the point-to-set distance (PSD) and even set-to-set distance (SSD) for classification. In this paper, we extend the PPD based Mahalanobis distance metric learning to PSD and SSD based ones, namely point-to-set distance metric learning (PSDML) and set-to-set distance metric learning (SSDML), and solve them under a unified optimization framework. First, we generate positive and negative sample pairs by computing the PSD and SSD between training samples. Then, we characterize each sample pair by its covariance matrix, and propose a covariance kernel based discriminative function. Finally, we tackle the PSDML and SSDML problems by using standard support vector machine solvers, making the metric learning very efficient for multiclass visual classification tasks. Experiments on gender classification, digit recognition, object categorization and face recognition show that the proposed metric learning methods can effectively enhance the performance of PSD and SSD based classification.", "title": "" }, { "docid": "c9bce8c17552321f80f18c899aa02f78", "text": "We propose a novel discriminative learning approach to image set classification by modeling the image set with its natural second-order statistic, i.e. covariance matrix. Since nonsingular covariance matrices, a.k.a. symmetric positive definite (SPD) matrices, lie on a Riemannian manifold, classical learning algorithms cannot be directly utilized to classify points on the manifold. By exploring an efficient metric for the SPD matrices, i.e., Log-Euclidean Distance (LED), we derive a kernel function that explicitly maps the covariance matrix from the Riemannian manifold to a Euclidean space. With this explicit mapping, any learning method devoted to vector space can be exploited in either its linear or kernel formulation. Linear Discriminant Analysis (LDA) and Partial Least Squares (PLS) are considered in this paper for their feasibility for our specific problem. We further investigate the conventional linear subspace based set modeling technique and cast it in a unified framework with our covariance matrix based modeling. The proposed method is evaluated on two tasks: face recognition and object categorization. Extensive experimental results show not only the superiority of our method over state-of-the-art ones in both accuracy and efficiency, but also its stability to two real challenges: noisy set data and varying set size.", "title": "" }, { "docid": "c41c56eeb56975c4d65e3847aa6b8b01", "text": "We address the problem of comparing sets of images for object recognition, where the sets may represent variations in an object's appearance due to changing camera pose and lighting conditions. canonical correlations (also known as principal or canonical angles), which can be thought of as the angles between two d-dimensional subspaces, have recently attracted attention for image set matching. Canonical correlations offer many benefits in accuracy, efficiency, and robustness compared to the two main classical methods: parametric distribution-based and nonparametric sample-based matching of sets. Here, this is first demonstrated experimentally for reasonably sized data sets using existing methods exploiting canonical correlations. 
Motivated by their proven effectiveness, a novel discriminative learning method over sets is proposed for set classification. Specifically, inspired by classical linear discriminant analysis (LDA), we develop a linear discriminant function that maximizes the canonical correlations of within-class sets and minimizes the canonical correlations of between-class sets. Image sets transformed by the discriminant function are then compared by the canonical correlations. Classical orthogonal subspace method (OSM) is also investigated for the similar purpose and compared with the proposed method. The proposed method is evaluated on various object recognition problems using face image sets with arbitrary motion captured under different illuminations and image sets of 500 general objects taken at different views. The method is also applied to object category recognition using ETH-80 database. The proposed method is shown to outperform the state-of-the-art methods in terms of accuracy and efficiency", "title": "" } ]
[ { "docid": "8e45716a80300fa86189e99feb26f113", "text": "BACKGROUND\nWhat is the best way to schedule follow-up appointments? The most popular model requires the patient to negotiate a follow-up appointment time on leaving the office. This process accounts for the majority of follow-up patient scheduling. There are circumstances when this immediate appointment arrangement is not possible, however. The two common processes used to contact patients for follow-up appointments after they have left the office are the postcard reminder method and the prescheduled appointment method.\n\n\nMETHODS\nIn 2001 the two methods used to contact patients for follow-up appointments after they had left the clinic were used for all 2,116 reappointment patients at an ophthalmology practice at Dartmouth-Hitchcock Medical Center. The number of completed successful appointments, the no-show rate, and patient satisfaction for each method were calculated.\n\n\nRESULTS\nA larger number of patient reappointments were completed using the prescheduled appointment procedure than the postcard reminder system (74% vs 54%). The difference between completed and pending appointments (minus no-shows) of the two methods equaled 163 patients per quarter, or 652 patients per year. Additional revenues associated with use of the prescheduled appointment letter method were estimated at $594,600 for 3 years.\n\n\nSUMMARY\nUsing the prescheduled appointment method with a patient notification letter is advised when patients do not schedule their appointments on the way out of the office.", "title": "" }, { "docid": "9a9fd442bc7353d9cd202e9ace6e6580", "text": "The idea of developmental dyspraxia has been discussed in the research literature for almost 100 years. However, there continues to be a lack of consensus regarding both the definition and description of this disorder. This paper presents a neuropsychologically based operational definition of developmental dyspraxia that emphasizes that developmental dyspraxia is a disorder of gesture. Research that has investigated the development of praxis is discussed. Further, different types of gestural disorders displayed by children and different mechanisms that underlie developmental dyspraxia are compared to and contrasted with adult acquired apraxia. The impact of perceptual-motor, language, and cognitive impairments on children's gestural development and the possible associations between these developmental disorders and developmental dyspraxia are also examined. Also, the relationship among limb, orofacial, and verbal dyspraxia is discussed. Finally, problems that exist in the neuropsychological assessment of developmental dyspraxia are discussed and recommendations concerning what should be included in such an assessment are presented.", "title": "" }, { "docid": "60060561d2667ace904983326e9c5f08", "text": "Identifying arbitrary topologies of power networks in real time is a computationally hard problem due to the number of hypotheses that grows exponentially with the network size. A new “Learning-to-Infer” variational inference method is developed for efficient inference of every line status in the network. Optimizing the variational model is transformed to and solved as a discriminative learning problem based on Monte Carlo samples generated with power flow simulations. A major advantage of the developed Learning-to-Infer method is that the labeled data used for training can be generated in an arbitrarily large amount fast and at very little cost. 
As a result, the power of offline training is fully exploited to learn very complex classifiers for effective real-time topology identification. The proposed methods are evaluated in the IEEE 30, 118 and 300 bus systems. Excellent performance in identifying arbitrary power network topologies in real time is achieved even with relatively simple variational models and a reasonably small amount of data.", "title": "" }, { "docid": "d70214bbb417b0ff7d4a6efbb24abfb6", "text": "While deep reinforcement learning techniques have recently produced considerable achievements on many decision-making problems, their use in robotics has largely been limited to simulated worlds or restricted motions, since unconstrained trial-and-error interactions in the real world can have undesirable consequences for the robot or its environment. To overcome such limitations, we propose a novel reinforcement learning architecture, OptLayer, that takes as inputs possibly unsafe actions predicted by a neural network and outputs the closest actions that satisfy chosen constraints. While learning control policies often requires carefully crafted rewards and penalties while exploring the range of possible actions, OptLayer ensures that only safe actions are actually executed and unsafe predictions are penalized during training. We demonstrate the effectiveness of our approach on robot reaching tasks, both simulated and in the real world.", "title": "" }, { "docid": "e92fb95e275ee5f3c7636a54c13830f9", "text": "Although object tracking has been studied for decades, real-time tracking algorithms often suffer from low accuracy and poor robustness when confronted with difficult, realworld data. We present a tracker that combines 3D shape, color (when available), and motion cues to accurately track moving objects in real-time. Our tracker allocates computational effort based on the shape of the posterior distribution. Starting with a coarse approximation to the posterior, the tracker successively refines this distribution, increasing in tracking accuracy over time. The tracker can thus be run for any amount of time, after which the current approximation to the posterior is returned. Even at a minimum runtime of 0.7 milliseconds, our method outperforms all of the baseline methods of similar speed by at least 10%. If our tracker is allowed to run for longer, the accuracy continues to improve, and it continues to outperform all baseline methods. Our tracker is thus anytime, allowing the speed or accuracy to be optimized based on the needs of the application.", "title": "" }, { "docid": "9f2d6c872761d8922cac8a3f30b4b7ba", "text": "Recently, CNN reported on the future of brain-computer interfaces (BCIs). BCIs are devices that process a user's brain signals to allow direct communication and interaction with the environment. BCIs bypass the normal neuromuscular output pathways and rely on digital signal processing and machine learning to translate brain signals to action (Figure 1). Historically, BCIs were developed with biomedical applications in mind, such as restoring communication in completely paralyzed individuals and replacing lost motor function. More recent applications have targeted nondisabled individuals by exploring the use of BCIs as a novel input device for entertainment and gaming. The task of the BCI is to identify and predict behaviorally induced changes or \"cognitive states\" in a user's brain signals. 
Brain signals are recorded either noninvasively from electrodes placed on the scalp [electroencephalogram (EEG)] or invasively from electrodes placed on the surface of or inside the brain. BCIs based on these recording techniques have allowed healthy and disabled individuals to control a variety of devices. In this article, we will describe different challenges and proposed solutions for noninvasive brain-computer interfacing.", "title": "" }, { "docid": "10d84e806815a99e345dad71aee4e524", "text": "We establish linear-time reductions between the minimization of a deterministic finite automaton (DFA) and the conjunction of 3 subproblems: the minimization of a strongly connected DFA, the isomorphism problem for a set of strongly connected minimized DFAs, and the minimization of a connected DFA consisting in two strongly connected components, both of which are minimized. We apply this procedure to minimize, in linear time, automata whose nontrivial strongly connected com-", "title": "" }, { "docid": "e8a1330f93a701939367bd390e9018c7", "text": "An eccentric paddle locomotion mechanism based on the epicyclic gear mechanism (ePaddle-EGM), which was proposed to enhance the mobility of amphibious robots in multiterrain tasks, can perform various terrestrial and aquatic gaits. Two of the feasible aquatic gaits are the rotational paddling gait and the oscillating paddling gait. The former one has been studied in our previous work, and a capacity of generating vectored thrust has been found. In this letter, we focus on the oscillating paddling gait by measuring the generated thrusts of the gait on an ePaddle-EGM prototype module. Experimental results verify that the oscillating paddling gait can generate vectored thrust by changing the location of the paddle shaft as well. Furthermore, we compare the oscillating paddling gait with the rotational paddling gait at the vectored thrusting property, magnitude of the thrust, and the gait efficiency.", "title": "" }, { "docid": "59608978a30fcf6fc8bc0b92982abe69", "text": "The self-advocacy movement (Dybwad & Bersani, 1996) grew out of resistance to oppressive practices of institutionalization (and worse) for people with cognitive disabilities. Moving beyond the worst abuses, people with cognitive disabilities seek as full participation in society as possible.", "title": "" }, { "docid": "1297533a04a4172cdfe4094be939c549", "text": "This paper will outline new developments in Emotion-Focused Therapy for Couples (EFT-C) (Greenberg & Goldman, Emotion-focused couples therapy: The dynamics of emotion, love, and power, Washington, DC, American Psychological Association, 2008). People are seen as primarily motivated by their affective goals and the regulation of emotional states. The three motivational systems of attachment, identity, and attraction/liking, viewed as reflective of the core concerns people bring to therapy, are briefly outlined and elaborated. The five-stage model of EFT-C is briefly described. The paper will then provide two illustrations, one that demonstrates how EFT-C therapists work with core issues related to identity, and the other that shows how therapy can promote self-soothing. In the first example, annotated transcripts taken from therapy sessions illustrate how an EFT therapist addresses issues of identity in a highly distressed couple. 
The second example demonstrates how to facilitate work with individuals within the couples' context to engender and develop capacities for self-soothing, seen as fundamental for the promotion of healthy emotion regulation and couples' overall health.", "title": "" }, { "docid": "92709d6c770dc7708f3dbe6c7a4bebfb", "text": "This paper presents a high-efficiency 60-GHz on-off keying (OOK) transmitter (TX) designed for wireless network-on-chip applications. Aiming at an intra-chip communication distance of 20 mm, the TX consists of a drive amplifier (DA), a high-speed OOK modulator, and a transformer-coupled voltage-controlled oscillator. For high efficiency, a common-source topology with a drain-to-gate neutralization technique is chosen for the DA. A detailed mathematical design methodology is derived for the neutralization technique. The bulk-driven OOK modulator employs a novel dual feedthrough cancellation technique, resulting in a 30-dB on-off ratio. Fabricated in a 65-nm bulk CMOS process, the TX consumes only 19 mW from a 1-V supply, and occupies an active area of 0.077 mm2. A maximum modulation data rate of 16 Gb/s with 0.75-dBm output power is demonstrated through measurements, which translates to a bit-energy efficiency of 1.2 pJ/bit.", "title": "" }, { "docid": "e70811b1d35dcbd184acc8505caaf0be", "text": "What would it be like to have never learned English, but instead only to know Hopi, Mandarin Chinese, or American Sign Language? Would that change the way you think? Imagine entirely losing your language, as the result of stroke or trauma. You are aphasic, unable to speak or listen, read or write. What would your thoughts now be like? As the most extreme case, imagine having been raised without any language at all, as a wild child. What—if anything—would it be like to be such a person? Could you be smart; could you reminisce about the past, plan the future? There is a common sense set of answers to these questions, one that represents the mainstream in many circles of cognitive science (see Pinker, 1994 for a lucid exposition). Under this view, the language you speak does not affect how you think. Rich, powerful and abstract cognition can take place within minds that, due to injury or deprivation, have no natural language. Even babies know about the kinds and individuals that occupy their world. They just don’t know their names. Before being exposed to words in a language such as English, all humans possess the concepts that these words correspond to, as part of what Jerry Fodor (1975) calls ‘mentalese’ or ‘a language of thought’. Under this view, as Fodor puts it, all language learning is actually second language learning—when a child learns the vocabulary of English, all that happens is that the child learns the mappings from the English words onto the symbols of this prior language of thought. There is another perspective, however, one that is also rooted in common sense, and which is popular across many disciplines. Many linguists and anthropologists claim that the language one learns has a profound influence on how", "title": "" }, { "docid": "bfc85b95287e4abc2308849294384d1e", "text": "& 10 0 YE A RS A G O 50 YEARS AGO A Congress was held in Singapore during December 2–9 to celebrate “the Centenary of the formulation of the theory of Evolution by Charles Darwin and Alfred Russel Wallace and the Bicentenary of the publication of the tenth edition of the ‘Systema Naturae’ by Linnaeus”. It was particularly fitting that this Congress should have been held in Singapore for ... 
it directed special attention to the work of Wallace, who was one of the greatest biologists ever to have worked in south-east Asia ... Prof. Haldane then delivered his presidential address ... The president emphasised the stimuli gained by Linnaeus, Darwin and Wallace through working in peripheral areas where lack of knowledge was a challenge. He suggested that the next major biological advance may well come for similar reasons from peripheral places such as Singapore, or Calcutta, where this challenge still remains and where the lack of complex scientific apparatus drives biologists into different and long-neglected fields of research. From Nature 14 March 1959.", "title": "" }, { "docid": "5e9f408e6b44afd868fb39bbfc4d7170", "text": "With the advent of commodity autonomous mobiles, it is becoming increasingly prevalent to recognize under extreme conditions such as night, erratic illumination conditions. This need has caused the approaches using multi-modal sensors, which could be complementary to each other. The choice for the thermal camera provides a rich source of temperature information, less affected by changing illumination or background clutters. However, existing thermal cameras have a relatively smaller resolution than RGB cameras that has trouble for fully utilizing the information in recognition tasks. To mitigate this, we aim to enhance the low-resolution thermal image according to the extensive analysis of existing approaches. To this end, we introduce Thermal Image Enhancement using Convolutional Neural Network (CNN), called in TEN, which directly learns an end-to-end mapping a single low resolution image to the desired high resolution image. In addition, we examine various image domains to find the best representative of the thermal enhancement. Overall, we propose the first thermal image enhancement method based on CNN guided on RGB data. We provide extensive experiments designed to evaluate the quality of image and the performance of several object recognition tasks such as pedestrian detection, visual odometry, and image registration.", "title": "" }, { "docid": "ee1293cc2e11543c5dad4473b0592f58", "text": "Mobile ad hoc networks’ (MANETs) inherent power limitation makes power-awareness a critical requirement for MANET protocols. In this paper, we propose a new routing metric, the drain rate, which predicts the lifetime of a node as a function of current traffic conditions. We describe the Minimum Drain Rate (MDR) mechanism which uses a combination of the drain rate with remaining battery capacity to establish routes. MDR can be employed by any existing MANET routing protocol to achieve a dual goal: extend both nodal battery life and connection lifetime. Using the ns-2 simulator and the Dynamic Source Routing (DSR) protocol, we compared MDR to the Minimum Total Transmission Power Routing (MTPR) scheme and the Min-Max Battery Cost Routing (MMBCR) scheme and proved that MDR is the best approach to achieve the dual goal.", "title": "" }, { "docid": "a40b8e1bad22921a317c290e17478689", "text": "Two novel adaptive nonlinear filter structures are proposed which are based on linear combinations of order statistics. These adaptive schemes are modifications of the standard LMS algorithm and have the ability to incorporate constraints imposed on coefficients in order to permit location-invariant and unbiased estimation of a constant signal in the presence of additive white noise. 
The convergence in the mean and in the mean square of the proposed adaptive nonlinear filters is studied. The rate of convergence is also considered. It is verified by simulations that the independence theory provides useful bounds on the rate of convergence. The extreme eigenvalues of the matrix which controls the performance of the location-invariant adaptive LMS L-filter are related to the extreme eigenvalues of the correlation matrix of the ordered noise samples which controls the performance of other adaptive LMS L-filters proposed elsewhere. The proposed filters can adapt well to a variety of noise probability distributions ranging from the short-tailed ones (e.g. uniform distribution) to long-tailed ones (e.g. Laplacian distribution). Zusammenfassung. Es werden zwei neue adaptive nichtlineare Filterstrukturen vorgeschlagen, die auf Linearkombinationen von Order-Statistik beruhen. Diese adaptiven Strukturen sind Modifikationen des üblichen LMS-Algorithmus und erlauben die Einbringung von Bedingungen bezüglich der Koeffizienten, um ortsinvariante und erwartungstreue Schätzungen konstanter Signale unter additivem, weißem Rauschen zu ermöglichen. Die Konvergenz bezüglich des Mittelwertes und des quadratischen Mittelwertes wird für die vorgeschlagenen nichtlinearen Filter untersucht. Weiterhin wird die Konvergenzgeschwindigkeit betrachtet. Durch Simulationen wird gezeigt, daß die Independence-Theorie brauchbare Grenzen für die Konvergenzrate liefert. Die extremen Eigenwerte der Matrix, die das Verhalten des ortsinvarianten adaptiven LMS L-Filters bestimmt, werden den Eigenwerten der Korrelationsmatrix der geordneten Rauschabtastwerte gegenübergestellt, die das Verhalten anderer adaptiver LMS L-Filter bestimmt. Die vorgeschlagenen Filter stellen sich sehr gut auf eine Vielzahl verschiedener Verteilungsdichten des Rauschens ein, angefangen von schmalen Verteilungen (z.B. Gleichverteilung) bis hin zu langsam abfallenden (z.B. Laplace). Résumé. Nous proposons deux structures de filtre non-linéaire originales, structures basées sur des combinaisons linéaires de statistiques d'ordre. Ces techniques adaptatives sont des modifications de l'algorithme LMS standard et ont la capacité d'incorporer des contraintes imposées sur les coefficients afin de permettre une estimation ne variant pas selon la localisation et non biaisée d'un signal constant en présence de bruit blanc additif. Nous étudions la convergence en moyenne et en moyenne quadratique des filtres non-linéaires adaptatifs proposés. Nous considérons également le taux de convergence. Nous vérifions par des simulations que l'hypothèse d'indépendance fournit des bornes utiles sur le taux de convergence. Nous relions les valeurs propres extrêmes de la matrice qui contrôle les performances du L-filtre LMS adaptatif ne variant pas selon la localisation aux valeurs propres extrêmes de la matrice de corrélation des échantillons de bruit ordonnés qui contrôle les performances d'autres L-filtres LMS proposés ailleurs. Les filtres proposés peuvent s'adapter aisément à une variété de distributions de densité de bruit allant de celles à queue courte (p.e. la distribution uniforme) à celles à queue longue (p.e.
la distribution de Laplace).", "title": "" }, { "docid": "f257b55e0cdffd6ab1129fa73a509e27", "text": "UNLABELLED\nA leak test performed according to ASTM F2338-09 Standard Test Method for Nondestructive Detection of Leaks in Packages by Vacuum Decay Method was developed and validated for container-closure integrity verification of a lyophilized product in a parenteral vial package system. This nondestructive leak test method is intended for use in manufacturing as an in-process package integrity check, and for testing product stored on stability in lieu of sterility tests. Method development and optimization challenge studies incorporated artificially defective packages representing a range of glass vial wall and sealing surface defects, as well as various elastomeric stopper defects. Method validation required 3 days of random-order replicate testing of a test sample population of negative-control, no-defect packages and positive-control, with-defect packages. Positive-control packages were prepared using vials each with a single hole laser-drilled through the glass vial wall. Hole creation and hole size certification was performed by Lenox Laser. Validation study results successfully demonstrated the vacuum decay leak test method's ability to accurately and reliably detect those packages with laser-drilled holes greater than or equal to approximately 5 μm in nominal diameter. All development and validation studies were performed at Whitehouse Analytical Laboratories in Whitehouse, NJ, under the direction of consultant Dana Guazzo of RxPax, LLC, using a VeriPac 455 Micro Leak Test System by Packaging Technologies & Inspection (Tuckahoe, NY). Bristol Myers Squibb (New Brunswick, NJ) fully subsidized all work.\n\n\nLAY ABSTRACT\nA leak test performed according to ASTM F2338-09 Standard Test Method for Nondestructive Detection of Leaks in Packages by Vacuum Decay Method was developed and validated to detect defects in stoppered vial packages containing lyophilized product for injection. This nondestructive leak test method is intended for use in manufacturing as an in-process package integrity check, and for testing product stored on stability in lieu of sterility tests. Test method validation study results proved the method capable of detecting holes laser-drilled through the glass vial wall greater than or equal to 5 μm in nominal diameter. Total test time is less than 1 min per package. All method development and validation studies were performed at Whitehouse Analytical Laboratories in Whitehouse, NJ, under the direction of consultant Dana Guazzo of RxPax, LLC, using a VeriPac 455 Micro Leak Test System by Packaging Technologies & Inspection (Tuckahoe, NY). Bristol Myers Squibb (New Brunswick, NJ) fully subsidized all work.", "title": "" }, { "docid": "81273c11eb51349d0027e2ff2e54c080", "text": "The ground-volume separation of radar scattering plays an important role in the analysis of forested scenes. For this purpose, the data covariance matrix of multi-polarimetric (MP) multi-baseline (MB) SAR surveys can be represented thru a sum of two Kronecker products composed of the data covariance matrices and polarimetric signatures that correspond to the ground and canopy scattering mechanisms (SMs), respectively. 
The sum of Kronecker products (SKP) decomposition allows the use of different tomographic SAR focusing methods on the ground and canopy structural components separately, nevertheless, the main drawback of this technique relates to the rank-deficiencies of the resultant data covariance matrices, which restrict the usage of the adaptive beamforming techniques, requiring more advanced beamforming methods, such as compressed sensing (CS). This paper proposes a modification of the nonparametric iterative adaptive approach for amplitude and phase estimation (IAA-APES), which applied to MP-MB SAR data, serves as an alternative to the SKP-based techniques for ground-volume reconstruction, which main advantage relates precisely to the non-need of the SKP decomposition technique as a pre-processing step.", "title": "" }, { "docid": "911ea52fa57524e002154e2fe276ac44", "text": "Many current natural language processing applications for social media rely on representation learning and utilize pre-trained word embeddings. There currently exist several publicly-available, pre-trained sets of word embeddings, but they contain few or no emoji representations even as emoji usage in social media has increased. In this paper we release emoji2vec, pre-trained embeddings for all Unicode emoji which are learned from their description in the Unicode emoji standard.1 The resulting emoji embeddings can be readily used in downstream social natural language processing applications alongside word2vec. We demonstrate, for the downstream task of sentiment analysis, that emoji embeddings learned from short descriptions outperforms a skip-gram model trained on a large collection of tweets, while avoiding the need for contexts in which emoji need to appear frequently in order to estimate a representation.", "title": "" }, { "docid": "eed70d4d8bfbfa76382bfc32dd12c3db", "text": "Three studies tested basic assumptions derived from a theoretical model based on the dissociation of automatic and controlled processes involved in prejudice. Study 1 supported the model's assumption that highand low-prejudice persons are equally knowledgeable of the cultural stereotype. The model suggests that the stereotype is automatically activated in the presence of a member (or some symbolic equivalent) of the stereotyped group and that low-prejudice responses require controlled inhibition of the automatically activated stereotype. Study 2, which examined the effects of automatic stereotype activation on the evaluation of ambiguous stereotype-relevant behaviors performed by a race-unspecified person, suggested that when subjects' ability to consciously monitor stereotype activation is precluded, both highand low-prejudice subjects produce stereotype-congruent evaluations of ambiguous behaviors. Study 3 examined highand low-prejudice subjects' responses in a consciously directed thought-listing task. Consistent with the model, only low-prejudice subjects inhibited the automatically activated stereotype-congruent thoughts and replaced them with thoughts reflecting equality and negations of the stereotype. The relation between stereotypes and prejudice and implications for prejudice reduction are discussed.", "title": "" } ]
scidocsrr
176d2a8cfdab4633120455868256e5d3
Internet of Things Based Intelligent Street Lighting System for Smart City
[ { "docid": "9120645b804d2b4ebe83dd0175af5207", "text": "This paper aims at designing and executing the advanced development in embedded systems for energy saving of street lights. Nowadays, human has become too busy, and is unable to find time even to switch the lights wherever not necessary. The present system is like, the street lights will be switched on in the evening before the sun sets and they are switched off the next day morning after there is sufficient light on the roads. this paper gives the best solution for electrical power wastage. Also the manual operation of the lighting system is completely eliminated. In this paper the two sensors are used which are Light Dependent Resistor LDR sensor to indicate a day/night time and the photoelectric sensors to detect the movement on the street. the microcontroller PIC16F877A is used as brain to control the street light system, where the programming language used for developing the software to the microcontroller is C-language. Finally, the system has been successfully designed and implemented as prototype system. Key-Words: Street light, LDR, photoelectric sensor, microcontroller, energy saving and circuit design.", "title": "" } ]
[ { "docid": "a235657ae9c608b349e185ca73053058", "text": "Four cases of a distinctive soft-tissue tumor of the vulva are described. They were characterized by occurrence in middle-aged women (39-50 years), small size (< 3 cm), and a usually well-circumscribed margin. The preoperative clinical diagnosis was that of a labial or Bartholin gland cyst in three of the four cases. The microscopic appearance was remarkably consistent and was characterized by a cellular neoplasm composed of uniform, bland, spindled stromal cells, numerous thick-walled and often hyalinized vessels, and a scarce component of mature adipocytes. Mitotic activity was brisk in three cases (up to 11 mitoses per 10 high power fields). The stromal cells were positive for vimentin and negative for CD34, S-100 protein, actin, desmin, and epithelial membrane antigen, suggesting fibroblastic differentiation. Two patients with follow-up showed no evidence of recurrence. The differential diagnosis of this distinctive tumor includes aggressive angiomyxoma, angiomyofibroblastoma, spindle cell lipoma, solitary fibrous tumor, perineurioma, and leiomyoma. The designation of \"cellular angiofibroma\" is chosen to emphasize the two principal components of this tumor: the cellular spindle cell component and the prominent blood vessels.", "title": "" }, { "docid": "cea68902d38eb453dc62a73ae529b3e2", "text": "During the next 50 years, which is likely to be the final period of rapid agricultural expansion, demand for food by a wealthier and 50% larger global population will be a major driver of global environmental change. Should past dependences of the global environmental impacts of agriculture on human population and consumption continue, 10(9) hectares of natural ecosystems would be converted to agriculture by 2050. This would be accompanied by 2.4- to 2.7-fold increases in nitrogen- and phosphorus-driven eutrophication of terrestrial, freshwater, and near-shore marine ecosystems, and comparable increases in pesticide use. This eutrophication and habitat destruction would cause unprecedented ecosystem simplification, loss of ecosystem services, and species extinctions. Significant scientific advances and regulatory, technological, and policy changes are needed to control the environmental impacts of agricultural expansion.", "title": "" }, { "docid": "bf085248cf23eb064b10424d08a99d5e", "text": "Standard methods of counting binary ones on a computer with a 704 type instruction code require an inner loop which is carried out once for each bit in the machine word. Program 1 (written in SAP language for purposes of illustration) is an example of such a standard program.", "title": "" }, { "docid": "28ba4e921cb942c8022c315561abf526", "text": "Metamaterials have attracted more and more research attentions recently. Metamaterials for electromagnetic applications consist of sub-wavelength structures designed to exhibit particular responses to an incident EM (electromagnetic) wave. Traditional EM (electromagnetic) metamaterial is constructed from thick and rigid structures, with the form-factor suitable for applications only in higher frequencies (above GHz) in microwave band. In this paper, we developed a thin and flexible metamaterial structure with small-scale unit cell that gives EM metamaterials far greater flexibility in numerous applications. By incorporating ferrite materials, the thickness and size of the unit cell of metamaterials have been effectively scaled down. 
The design, mechanism and development of flexible ferrite loaded metamaterials for microwave applications is described, with simulation as well as measurements. Experiments show that the ferrite film with permeability of 10 could reduce the resonant frequency. The thickness of the final metamaterials is only 0.3mm. This type of ferrite loaded metamaterials offers opportunities for various sub-GHz microwave applications, such as cloaks, absorbers, and frequency selective surfaces.", "title": "" }, { "docid": "406fab96a8fd49f4d898a9735ee1512f", "text": "An otolaryngology phenol applicator kit can be successfully and safely used in the performance of chemical matricectomy. The applicator kit provides a convenient way to apply phenol to the nail matrix precisely and efficiently, whereas minimizing both the risk of application to nonmatrix surrounding soft tissue and postoperative recovery time.Given the smaller size of the foam-tipped applicator, we feel that this is a more precise tool than traditional cotton-tipped applicators for chemical matricectomy. Particularly with regard to lower extremity nail ablation and matricectomy, minimizing soft tissue inflammation could in turn reduce the risk of postoperative infections, decrease recovery time, as well and make for a more positive overall patient experience.", "title": "" }, { "docid": "c019deeb3e4bcc748b0fb70e5a6ade60", "text": "Drivable space detection or road perception is one of the most important tasks for autonomous driving. Sensorbased vision/laser systems may have limited performance in bad illumination/weather conditions, a prior knowledge of the road from the map data is expected to improve the effectiveness. This paper is to employ the map information extracted from the OpenStreetMap (OSM) data, and explore its capability for road perception. The OSM data can be used to render virtual street views, and further refined to provide the prior road mask. The OSM masks can be also combine with image processing and Lidar point clouding approaches to characterize the drivable space. Using a Fully Convolutional Neural Network (FCNN), the OSM availability for deep learning methods is also discussed.", "title": "" }, { "docid": "a0be5127f5e77a96547e742534c87e4d", "text": "This article argues that the dominance of CLT has led to the neglect of one crucial aspect of language pedagogy, namely the context in which that pedagogy takes place. It argues that it is time to replace CLT as the central paradigm in language teaching with a Context Approach which places context at the heart of the profession. The article argues that such a shift is already taking place, and that eventually it will radically change our practice. It concludes by outlining the features of the Context Approach and discussing its implications.", "title": "" }, { "docid": "777243cb514414dd225a9d5f41dc49b7", "text": "We have built and tested a decision tool which will help organisations properly select one business process maturity model (BPMM) over another. This prototype consists of a novel questionnaire with decision criteria for BPMM selection, linked to a unique data set of 69 BPMMs. Fourteen criteria (questions) were elicited from an international Delphi study, and weighed by the analytical hierarchy process. Case studies have shown (non-)profit and academic applications. Our purpose was to describe criteria that enable an informed BPMM choice (conform to decision-making theories, rather than ad hoc). Moreover, we propose a design process for building BPMM decision tools. 
2013 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "76715b342c0b0a475ba6db06a0345c7b", "text": "Generalized linear mixed models are a widely used tool for modeling longitudinal data. However , their use is typically restricted to few covariates, because the presence of many predictors yields unstable estimates. The presented approach to the fitting of generalized linear mixed models includes an L 1-penalty term that enforces variable selection and shrinkage simultaneously. A gradient ascent algorithm is proposed that allows to maximize the penalized log-likelihood yielding models with reduced complexity. In contrast to common procedures it can be used in high-dimensional settings where a large number of potentially influential explanatory variables is available. The method is investigated in simulation studies and illustrated by use of real data sets.", "title": "" }, { "docid": "a95a46cdf179f9501b7409da9975767f", "text": "Gibson's ecological theory of perception has received considerable attention within psychology literature, as well as in computer vision and robotics. However, few have applied Gibson's approach to agent-based models of human movement, because the ecological theory requires that individuals have a vision-based mental model of the world, and for large numbers of agents this becomes extremely expensive computationally. Thus, within current pedestrian models, path evaluation is based on calibration from observed data or on sophisticated but deterministic route-choice mechanisms; there is little open-ended behavioural modelling of human-movement patterns. One solution which allows individuals rapid concurrent access to the visual information within an environment is an `exosomatic visual architecture', where the connections between mutually visible locations within a configuration are prestored in a lookup table. Here we demonstrate that, with the aid of an exosomatic visual architecture, it is possible to develop behavioural models in which movement rules originating from Gibson's principle of affordance are utilised. We apply large numbers of agents programmed with these rules to a built-environment example and show that, by varying parameters such as destination selection, field of view, and steps taken between decision points, it is possible to generate aggregate movement levels very similar to those found in an actual building context. DOI:10.1068/b12850", "title": "" }, { "docid": "1620f5e3576ef578c6bedbe2db6eb426", "text": "Automatic essay scoring is nowadays successfully used even in high-stakes tests, but this is mainly limited to holistic scoring of learner essays. We present a new dataset of essays written by highly proficient German native speakers that is scored using a fine-grained rubric with the goal to provide detailed feedback. Our experiments with two state-of-the-art scoring systems (a neural and a SVM-based one) show a large drop in performance compared to existing datasets. This demonstrates the need for such datasets that allow to guide research on more elaborate essay scoring methods.", "title": "" }, { "docid": "900d9747114db774abcb26bb01b8a89e", "text": "Social-networking functions are increasingly embed ded in online rating systems. These functions alter the rating context in which c onsumer ratings are generated. In this paper, we empirically investigate online friends’ s ocial influence in online book ratings. 
Our quasi-experiment research design exploits the temporal sequence of social-networking events and ratings and offers a new method for identifying social influence while accounting for the homophily effect. We find rating similarity between friends is significantly higher after the formation of the friend relationships, indicating that with social-networking functions, online rating contributors are socially nudged when giving their ratings. Additional exploration of contingent factors suggests that social influence is stronger for older books and users who have smaller networks, and relatively more recent and extremely negative ratings cast more salient influence. Our study suggests that friends’ social influence is an important consideration when introducing social-networking functions to online rating systems.", "title": "" }, { "docid": "5d1f550fdecd2a5305771fc96cec42cb", "text": "The modification of synaptic strength produced by long-term potentiation (LTP) is widely thought to underlie memory storage. Indeed, given that hippocampal pyramidal neurons have >10,000 independently modifiable synapses, the potential for information storage by synaptic modification is enormous. However, recent work suggests that CREB-mediated global changes in neuronal excitability also play a critical role in memory formation. Because these global changes have a modest capacity for information storage compared with that of synaptic plasticity, their importance for memory function has been unclear. Here we review the newly emerging evidence for CREB-dependent control of excitability and discuss two possible mechanisms. First, the CREB-dependent transient change in neuronal excitability performs a memory-allocation function ensuring that memory is stored in ways that facilitate effective linking of events with temporal proximity (hours). Second, these changes may promote cell-assembly formation during the memory-consolidation phase. It has been unclear whether such global excitability changes and local synaptic mechanisms are complementary. Here we argue that the two mechanisms can work together to promote useful memory function. The authors discuss newly emerging evidence for the role of the transcription factor CREB in memory, including its role in modulating changes in excitability that are critical for neural assembly formation and linking of memories across time.", "title": "" }, { "docid": "134d85937dc13e4174e2ddb99197f924", "text": "A compact hybrid-integrated 100 Gb/s (4 lane × 25.78125 Gb/s) transmitter optical sub-assembly (TOSA) has been developed for a 100 Gb/s transceiver for 40-km transmission over a single-mode fiber. The TOSA has a simple configuration in which four electro-absorption modulator-integrated distributed feedback (EADFB) lasers are directly attached to the input waveguide end-face of a silica-based arrayed waveguide grating (AWG) multiplexer without bulk lenses. To achieve a high optical butt coupling efficiency between the EADFB lasers and the AWG multiplexer, we integrated a laterally tapered spot-size converter (SSC) for the EADFB laser and employed a waveguide with a high refractive index difference of 2.0% for the AWG multiplexer. By optimizing the laterally tapered SSC structure, we achieved a butt-coupling loss of less than 3 dB, which is an improvement of around 2 dB compared with a laser without an SSC structure. We also developed an ultracompact AWG multiplexer, which was 6.7 mm × 3.5 mm in size with an insertion loss of less than 1.9 dB.
We achieved this by using a Mach-Zehnder interferometer-synchronized configuration to obtain a low loss and wide flat-top transmission filter spectra. The TOSA body size was 19.9 mm (L) × 6.0 mm (W) × 5.8 mm (H). Error-free operation was demonstrated for a 40-km transmission when all the lanes were driven simultaneously with a low EA modulator driving voltage of 1.5 V at an operating temperature of 55 °C.", "title": "" }, { "docid": "bc0ca1e4f698fff9277e5bbcf8c8b797", "text": "This paper presents a hybrid method combining a vector fitting (VF) and a global optimization for diagnosing coupled resonator bandpass filters. The method can extract coupling matrix from the measured or electromagnetically simulated admittance parameters (Y -parameters) of a narrow band coupled resonator bandpass filter with losses. The optimization method is used to remove the phase shift effects of the measured or the EM simulated Y -parameters caused by the loaded transmission lines at the input/output ports of a filter. VF is applied to determine the complex poles and residues of the Y -parameters without phase shift. The coupling matrix can be extracted (also called the filter diagnosis) by these complex poles and residues. The method can be used to computer-aided tuning (CAT) of a filter in the stage of this filter design and/or product process to accelerate its physical design. Three application examples illustrate the validity of the proposed method.", "title": "" }, { "docid": "c09fc633fd17919f45ccc56c4a28ceef", "text": "The 6-pole UHF helical resonators filter was designed, simulated, fabricated, and tested. The design factors, simulation results, filter performance characteristics are presented in this paper. The coupling of helical resonators was designed using a mode-matching technique. The design procedures are simple, and measured performance is excellent. The simulated and measured results show the validity of the proposed design method.", "title": "" }, { "docid": "7e5620a0d16881a6af7d10ce15aed683", "text": "In this paper, we infer the statuses of a taxi, consisting of occupied, non-occupied and parked, in terms of its GPS trajectory. The status information can enable urban computing for improving a city’s transportation systems and land use planning. In our solution, we first identify and extract a set of effective features incorporating the knowledge of a single trajectory, historical trajectories and geographic data like road network. Second, a parking status detection algorithm is devised to find parking places (from a given trajectory), dividing a trajectory into segments (i.e., sub-trajectories). Third, we propose a two-phase inference model to learn the status (occupied or non-occupied) of each point from a taxi segment. This model first uses the identified features to train a local probabilistic classifier and then carries out a Hidden SemiMarkov Model (HSMM) for globally considering long term travel patterns. We evaluated our method with a large-scale real-world trajectory dataset generated by 600 taxis, showing the advantages of our method over baselines.", "title": "" }, { "docid": "2738f51f986a6f6d4d4244e66bb6869a", "text": "A frequency compensation technique for three- stage amplifiers is introduced. The proposed solution exploits only one Miller capacitor and a resistor in the compensation network. The straightness of the technique is used to design, using a standard CMOS 0.35-mum process, a 1.5-V OTA driving a 150-pF load capacitor. 
The dc current consumption is about 14 muA at DC and a 1.6-MHz gain-bandwidth product is obtained, providing significant improvement in both MHz-pF/mA and (V/mus)-pF/mA performance parameters.", "title": "" }, { "docid": "17fb585ff12cff879febb32c2a16b739", "text": "An electroencephalography (EEG) based Brain Computer Interface (BCI) enables people to communicate with the outside world by interpreting the EEG signals of their brains to interact with devices such as wheelchairs and intelligent robots. More specifically, motor imagery EEG (MI-EEG), which reflects a subject's active intent, is attracting increasing attention for a variety of BCI applications. Accurate classification of MI-EEG signals while essential for effective operation of BCI systems is challenging due to the significant noise inherent in the signals and the lack of informative correlation between the signals and brain activities. In this paper, we propose a novel deep neural network based learning framework that affords perceptive insights into the relationship between the MI-EEG data and brain activities. We design a joint convolutional recurrent neural network that simultaneously learns robust high-level feature presentations through low-dimensional dense embeddings from raw MI-EEG signals. We also employ an Autoencoder layer to eliminate various artifacts such as background activities. The proposed approach has been evaluated extensively on a large-scale public MI-EEG dataset and a limited but easy-to-deploy dataset collected in our lab. The results show that our approach outperforms a series of baselines and the competitive state-of-the-art methods, yielding a classification accuracy of 95.53%. The applicability of our proposed approach is further demonstrated with a practical BCI system for typing.", "title": "" }, { "docid": "0ef117ca4663f523d791464dad9a7ebf", "text": "In this paper, a circularly polarized, omnidirectional side-fed bifilar helix antenna, which does not require a ground plane is presented. The antenna has a height of less than 0.1λ and the maximum boresight gain of 1.95dB, with 3dB beamwidth of 93°. The impedance bandwidth of the antenna for VSWR≤2 (with reference to resonant input resistance of 25Ω) is 2.7%. The simulated axial ratio(AR) at the resonant frequency 860MHz is 0.9 ≤AR≤ 1.0 in the whole hemisphere except small region around the nulls. The polarization bandwidth for AR≤3dB is 34.7%. The antenna is especially useful for high speed aerodynamic bodies made of composite materials (such as UAVs) where low profile antennas are essential to reduce air resistance and/or proper metallic ground is not available for monopole-type antenna.", "title": "" } ]
scidocsrr
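The rows in this dump repeat the same layout seen above: a 32-character hexadecimal id, a query title, two bracketed passage lists whose entries each carry "docid", "text", and "title" keys, and the subset tag "scidocsrr". As a rough sketch of how one such row could be regrouped for downstream use — every function name and dictionary key on the left-hand side below is an illustrative assumption, not something defined by the dump — a minimal Python version might look like this:

```python
import json

def parse_passage_list(raw):
    # Each bracketed list in the dump is JSON: [ { "docid": ..., "text": ..., "title": ... }, ... ]
    return json.loads(raw)

def build_row(row_id, query, first_list_raw, second_list_raw, subset):
    # Field names here are illustrative, not taken from the dump itself.
    return {
        "id": row_id,
        "query": query,
        "passages_a": parse_passage_list(first_list_raw),
        "passages_b": parse_passage_list(second_list_raw),
        "subset": subset,
    }

# Tiny stand-in strings (not copied from the dump) to show the expected shapes.
row = build_row(
    "0" * 32,
    "example query title",
    '[ { "docid": "a", "text": "first passage", "title": "" } ]',
    '[ { "docid": "b", "text": "second passage", "title": "" } ]',
    "scidocsrr",
)
print(row["query"], len(row["passages_a"]), len(row["passages_b"]))
```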
c61400fa47baec994f4daa576d1d05af
A Framework for Blockchain-Based Applications
[ { "docid": "5eb65797b9b5e90d5aa3968d5274ae72", "text": "Blockchains enable tamper-proof, ordered logging for transactional data in a decentralized manner over open-access, overlay peer-to-peer networks. In this paper, we propose a decentralized framework of proactive caching in a hierarchical wireless network based on blockchains. We employ the blockchain-based smart contracts to construct an autonomous content caching market. In the market, the cache helpers are able to autonomously adapt their caching strategies according to the market statistics obtained from the blockchain, and the truthfulness of trustless nodes are financially enforced by smart contract terms. Further, we propose an incentive-compatible consensus mechanism based on proof-of-stake to financially encourage the cache helpers to stay active in service. We model the interaction between the cache helpers and the content providers as a Chinese restaurant game. Based on the theoretical analysis regarding the Nash equilibrium of the game, we propose a decentralized strategy-searching algorithm using sequential best response. The simulation results demonstrate both the efficiency and reliability of the proposed equilibrium searching algorithm.", "title": "" } ]
[ { "docid": "73270e8140d763510d97f7bd2fdd969e", "text": "Inspired by the progress of deep neural network (DNN) in single-media retrieval, the researchers have applied the DNN to cross-media retrieval. These methods are mainly two-stage learning: the first stage is to generate the separate representation for each media type, and the existing methods only model the intra-media information but ignore the inter-media correlation with the rich complementary context to the intra-media information. The second stage is to get the shared representation by learning the cross-media correlation, and the existing methods learn the shared representation through a shallow network structure, which cannot fully capture the complex cross-media correlation. For addressing the above problems, we propose the cross-media multiple deep network (CMDN) to exploit the complex cross-media correlation by hierarchical learning. In the first stage, CMDN jointly models the intra-media and intermedia information for getting the complementary separate representation of each media type. In the second stage, CMDN hierarchically combines the inter-media and intra-media representations to further learn the rich cross-media correlation by a deeper two-level network strategy, and finally get the shared representation by a stacked network style. Experiment results show that CMDN achieves better performance comparing with several state-of-the-art methods on 3 extensively used cross-media datasets.", "title": "" }, { "docid": "bd5d84c9d699080b2d668809626e90fe", "text": "Until now, error type performance for Grammatical Error Correction (GEC) systems could only be measured in terms of recall because system output is not annotated. To overcome this problem, we introduce ERRANT, a grammatical ERRor ANnotation Toolkit designed to automatically extract edits from parallel original and corrected sentences and classify them according to a new, dataset-agnostic, rulebased framework. This not only facilitates error type evaluation at different levels of granularity, but can also be used to reduce annotator workload and standardise existing GEC datasets. Human experts rated the automatic edits as “Good” or “Acceptable” in at least 95% of cases, so we applied ERRANT to the system output of the CoNLL-2014 shared task to carry out a detailed error type analysis for the first time.", "title": "" }, { "docid": "5090070d6d928b83bd22d380f162b0a6", "text": "The Federal Aviation Administration (FAA) has been increasing the National Airspace System (NAS) capacity to accommodate the predicted rapid growth of air traffic. One method to increase the capacity is reducing air traffic controller workload so that they can handle more air traffic. It is crucial to measure the impact of the increasing future air traffic on controller workload. Our experimental data show a linear relationship between the number of aircraft in the en route center sector and controllers’ perceived workload. Based on the extensive range of aircraft count from 14 to 38 in the experiment, we can predict en route center controllers working as a team of Radar and Data controllers with the automation tools available in the our experiment could handle up to about 28 aircraft. This is 33% more than the 21 aircraft that en route center controllers typically handle in a busy sector.", "title": "" }, { "docid": "890038199db8a8391d25f1922d18cd62", "text": "In this paper we present a framework for learning a three layered model of human shape, pose and garment deformation. 
The proposed deformation model provides intuitive control over the three parameters independently, while producing aesthetically pleasing deformations of both the garment and the human body. The shape and pose deformation layers of the model are trained on a rich dataset of full body 3D scans of human subjects in a variety of poses. The garment deformation layer is trained on animated mesh sequences of dressed actors and relies on a novel technique for human shape and posture estimation under clothing. The key contribution of this paper is that we consider garment deformations as the residual transformations between a naked mesh and the dressed mesh of the same subject.", "title": "" }, { "docid": "6f1877d360251e601b3ce63e7b991052", "text": "In education research, there is a widely-cited result called \"Bloom's two sigma\" that characterizes the differences in learning outcomes between students who receive one-on-one tutoring and those who receive traditional classroom instruction. Tutored students scored in the 95th percentile, or two sigmas above the mean, on average, compared to students who received traditional classroom instruction. In human-robot interaction research, however, there is relatively little work exploring the potential benefits of personalizing a robot's actions to an individual's strengths and weaknesses. In this study, participants solved grid-based logic puzzles with the help of a personalized or non-personalized robot tutor. Participants' puzzle solving times were compared between two non-personalized control conditions and two personalized conditions (n=80). Although the robot's personalizations were less sophisticated than what a human tutor can do, we still witnessed a \"one-sigma\" improvement (68th percentile) in post-tests between treatment and control groups. We present these results as evidence that even relatively simple personalizations can yield significant benefits in educational or assistive human-robot interactions.", "title": "" }, { "docid": "05ce4be5b7d3c33ba1ebce575aca4fb9", "text": "In this competitive world, business is becoming highly saturated. Especially, the field of telecommunication faces complex challenges due to a number of vibrant competitive service providers. Therefore, it has become very difficult for them to retain existing customers. Since the cost of acquiring new customers is much higher than the cost of retaining the existing customers, it is the time for the telecom industries to take necessary steps to retain the customers to stabilize their market value. This paper explores the application of data mining techniques in predicting the likely churners and attribute selection on identifying the churn. It also compares the efficiency of several classifiers and lists their performances for two real telecom datasets.", "title": "" }, { "docid": "31e6da3635ec5f538f15a7b3e2d95e5b", "text": "Smart electricity meters are currently deployed in millions of households to collect detailed individual electricity consumption data. Compared with traditional electricity data based on aggregated consumption, smart meter data are much more volatile and less predictable. There is a need within the energy industry for probabilistic forecasts of household electricity consumption to quantify the uncertainty of future electricity demand in order to undertake appropriate planning of generation and distribution. We propose to estimate an additive quantile regression model for a set of quantiles of the future distribution using a boosting procedure. 
By doing so, we can benefit from flexible and interpretable models, which include an automatic variable selection. We compare our approach with three benchmark methods on both aggregated and disaggregated scales using a smart meter data set collected from 3639 households in Ireland at 30-min intervals over a period of 1.5 years. The empirical results demonstrate that our approach based on quantile regression provides better forecast accuracy for disaggregated demand, while the traditional approach based on a normality assumption (possibly after an appropriate Box-Cox transformation) is a better approximation for aggregated demand. These results are particularly useful since more energy data will become available at the disaggregated level in the future.", "title": "" }, { "docid": "2af4d946d00b37ec0f6d37372c85044b", "text": "Training of discrete latent variable models remains challenging because passing gradient information through discrete units is difficult. We propose a new class of smoothing transformations based on a mixture of two overlapping distributions, and show that the proposed transformation can be used for training binary latent models with either directed or undirected priors. We derive a new variational bound to efficiently train with Boltzmann machine priors. Using this bound, we develop DVAE++, a generative model with a global discrete prior and a hierarchy of convolutional continuous variables. Experiments on several benchmarks show that overlapping transformations outperform other recent continuous relaxations of discrete latent variables including Gumbel-Softmax (Maddison et al., 2016; Jang et al., 2016), and discrete variational autoencoders (Rolfe, 2016).", "title": "" }, { "docid": "4af5f2e9b12b4efa43c053fd13f640d0", "text": "The high level of heterogeneity between linguistic annotations usually complic ates the interoperability of processing modules within an NLP pipeline. In this paper, a framework for the interoperation of NLP co mp nents, based on a data-driven architecture, is presented. Here, ontologies of linguistic annotation are employed to provide a conceptu al basis for the tag-set neutral processing of linguistic annotations. The framework proposed here is based on a set of struc tured OWL ontologies: a reference ontology, a set of annotation models which formalize different annotation schemes, and a declarativ e linking between these, specified separately. This modular architecture is particularly scalable and flexible as it allows for the integration of different reference ontologies of linguistic annotations in order to overcome the absence of a consensus for an ontology of ling uistic terminology. Our proposal originates from three lines of research from different fields: research on annotation type systems in UIMA; the ontological architecture OLiA, originally developed for sustainable documentation and annotation-independent corpus browsin g, and the ontologies of the OntoTag model, targeted towards the processing of linguistic annotations in Semantic Web applications. We describ how UIMA annotations can be backed up by ontological specifications of annotation schemes as in the OLiA model, and how these ar e linked to the OntoTag ontologies, which allow for further ontological processing.", "title": "" }, { "docid": "21bb289fb932b23d95fee7d40401d70c", "text": "Mobile phone use is banned or regulated in some circumstances. Despite recognized safety concerns and legal regulations, some people do not refrain from using mobile phones. 
Such problematic mobile phone use can be considered to be an addiction-like behavior. To find the potential predictors, we examined the correlation between problematic mobile phone use and personality traits reported in addiction literature, which indicated that problematic mobile phone use was a function of gender, self-monitoring, and approval motivation but not of loneliness. These findings suggest that the measurements of these addictive personality traits would be helpful in the screening and intervention of potential problematic users of mobile phones.", "title": "" }, { "docid": "095f4ea337421d6e1310acf73977fdaa", "text": "We consider the problem of autonomous robotic laundry folding, and propose a solution to the perception and manipulation challenges inherent to the task. At the core of our approach is a quasi-static cloth model which allows us to neglect the complex dynamics of cloth under significant parts of the state space, allowing us to reason instead in terms of simple geometry. We present an algorithm which, given a 2D cloth polygon and a desired sequence of folds, outputs a motion plan for executing the corresponding manipulations, deemed g-folds, on a minimal number of robot grippers. We define parametrized fold sequences for four clothing categories: towels, pants, short-sleeved shirts, and long-sleeved shirts, each represented as polygons. We then devise a model-based optimization approach for visually inferring the class and pose of a spread-out or folded clothing article from a single image, such that the resulting polygon provides a parse suitable for these folding primitives. We test the manipulation and perception tasks individually, and combine them to implement an autonomous folding system on the Willow Garage PR2. This enables the PR2 to identify a clothing article spread out on a table, execute the computed folding sequence, and visually track its progress over successive folds.", "title": "" }, { "docid": "fabd41342129ce739aec41bfa93629c4", "text": "This paper presents a new method for viewpoint invariant pedestrian recognition problem. We use a metric learning framework to obtain a robust metric for large margin nearest neighbor classification with rejection (i.e., classifier will return no matches if all neighbors are beyond a certain distance). The rejection condition necessitates the use of a uniform threshold for a maximum allowed distance for deeming a pair of images a match. In order to handle the rejection case, we propose a novel cost similar to the Large Margin Nearest Neighbor (LMNN) method and call our approach Large Margin Nearest Neighbor with Rejection (LMNN-R). Our method is able to achieve significant improvement over previously reported results on the standard Viewpoint Invariant Pedestrian Recognition (VIPeR [1]) dataset.", "title": "" }, { "docid": "1503fae33ae8609a2193e978218d1543", "text": "The construct of resilience has captured the imagination of researchers across various disciplines over the last five decades (Ungar, 2008a). Despite a growing body of research in the area of resilience, there is little consensus among researchers about the definition and meaning of this concept. Resilience has been used to describe eight kinds of phenomena across different disciplines. These eight phenomena can be divided into two clusters based on the disciplinary origin. 
The first cluster mainly involves definitions of resilience derived from the discipline of psychology and covers six themes including (i) personality traits, (ii) positive outcomes/forms of adaptation despite high-risk, (iii) factors associated with positive adaptation, (iv) processes, (v) sustained competent functioning/stress resistance, and (vi) recovery from trauma or adversity. The second cluster of definitions is rooted in the discipline of sociology and encompasses two themes including (i) human agency and resistance, and (ii) survival. This paper discusses the inconsistencies in the varied definitions used within the published literature and describes the differing conceptualizations of resilience as well as their limitations. The paper concludes by offering a unifying conceptualization of resilience and by discussing implications for future research on resilience.", "title": "" }, { "docid": "75e9b017838ccfdcac3b85030470a3bd", "text": "The new \"Direct Self-Control\" (DSC) is a simple method of signal processing, which gives converter fed three-phase machines an excellent dynamic performance. To control the torque e.g. of an induction motor it is sufficient to process the measured signals of the stator currents and the total flux linkages only. Optimal performance of drive systems is accomplished in steady state as well as under transient conditions by combination of several two limits controls. The expenses are less than in the case of proposed predictive control systems or FAM, if the converters switching frequency has to be kept minimal.", "title": "" }, { "docid": "79ca2676dab5da0c9f39a0996fcdcfd8", "text": "Estimation of human shape from images has numerous applications ranging from graphics to surveillance. A single image provides insufficient constraints (e.g. clothing), making human shape estimation more challenging. We propose a method to simultaneously estimate a person’s clothed and naked shapes from a single image of that person wearing clothing. The key component of our method is a deformable model of clothed human shape. We learn our deformable model, which spans variations in pose, body, and clothes, from a training dataset. These variations are derived by the non-rigid surface deformation, and encoded in various low-dimension parameters. Our deformable model can be used to produce clothed 3D meshes for different people in different poses, which neither appears in the training dataset. Afterward, given an input image, our deformable model is initialized with a few user-specified 2D joints and contours of the person. We optimize the parameters of the deformable model by pose fitting and body fitting in an iterative way. Then the clothed and naked 3D shapes of the person can be obtained simultaneously. We illustrate our method for texture mapping and animation. The experimental results on real images demonstrate the effectiveness of our method.", "title": "" }, { "docid": "cfa6b417658cfc1b25200a8ff578ed2c", "text": "The Learning Analytics (LA) discipline analyzes educational data obtained from student interaction with online resources. Most of the data is collected from Learning Management Systems deployed at established educational institutions. In addition, other learning platforms, most notably Massive Open Online Courses such as Udacity and Coursera or other educational initiatives such as Khan Academy, generate large amounts of data. However, there is no generally agreedupon data model for student interactions. 
Thus, analysis tools must be tailored to each system's particular data structure, reducing their interoperability and increasing development costs. Some e-Learning standards designed for content interoperability include data models for gathering student performance information. In this paper, we describe how well-known LA tools collect data, which we link to how two e-Learning standards - IEEE Standard for Learning Technology and Experience API - define their data models. From this analysis, we identify the advantages of using these e-Learning standards from the point of view of Learning Analytics.", "title": "" }, { "docid": "f5662b8a124ad973084088b64004f3f5", "text": "A metal-frame antenna for the long-term evolution/wireless wide area network (LTE/WWAN) operation in the metal-casing tablet computer is presented. The antenna is formed by using two inverted-F antenna (IFA) structures to provide a low band and a high band to, respectively, cover the LTE/WWAN operation in the 824-960 and 1710-2690 MHz bands. The larger IFA has a longer radiating metal strip for the low band, and the smaller IFA has a shorter radiating metal strip for the high band. The two radiating metal strips are configured to be a portion of the metal frame disposed around the edges of the metal back cover of the tablet computer. The projection of the metal frame lies on the edges of the metal back cover, such that there is no ground clearance between the projection and the metal back cover. Furthermore, the feeding and shorting strips with matching networks therein for the two IFAs are disposed on a small dielectric substrate (feed circuit board), which is separated from the system circuit board and the metal back cover. In this case, there is generally no planar space of the metal back cover and system circuit board occupied, and the antenna can cover the 824-960/1710-2690 MHz bands. Results of the proposed antenna are presented. An extended study is also presented to show that the antenna's low-band coverage can be widened from 824-960 to 698-960 MHz. The wider bandwidth coverage is obtained when a switchable inductor bank is applied in the larger IFA.", "title": "" }, { "docid": "d5c545781cc26242da97f5e75535cd6f", "text": "Kutato is a system that takes as input a database of cases and produces a belief network that captures many of the dependence relations represented by those data. This system incorporates a module for determining the entropy of a belief network and a module for constructing belief networks based on entropy calculations. Kutato constructs an initial belief network in which all variables in the database are assumed to be marginally independent. The entropy of this belief network is calculated, and that arc is added that minimizes the entropy of the resulting belief network. Conditional probabilities for an arc are obtained directly from the database. This process continues until an entropy-based threshold is reached. We have tested the system by generating databases from networks using the probabilistic logic-sampling method, and then using those databases as input to Kutato. The system consistently reproduces the original belief networks with high fidelity.", "title": "" }, { "docid": "d9c5bcd63b0f3d45aa037d7b3e80aad3", "text": "In recent years, type II diabetes has become a serious disease that threaten the health and mind of human. Efficient predictive modeling is required for medical researchers and practitioners. 
This study proposes a type II diabetes prediction model based on random forest which aims at analyzing some readily available indicators (age, weight, waist, hip, etc.) effects on diabetes and discovering some rules on given data. The method can significantly reduce the risk of disease through digging out a clear and understandable model for type II diabetes from a medical database. Random forest algorithm uses multiple decision trees to train the samples, and integrates weight of each tree to get the final results. The validation results at school of medicine, University of Virginia shows that the random forest algorithm can greatly reduce the problem of over-fitting of the single decision tree, and it can effectively predict the impact of these readily available indicators on the risk of diabetes. Additionally, we get a better prediction accuracy using random forest than using the naive Bayes algorithm, ID3 algorithm and AdaBoost algorithm.", "title": "" }, { "docid": "3e80dc7319f1241e96db42033c16f6b4", "text": "Automatic expert assignment is a common problem encountered in both industry and academia. For example, for conference program chairs and journal editors, in order to collect \"good\" judgments for a paper, it is necessary for them to assign the paper to the most appropriate reviewers. Choosing appropriate reviewers of course includes a number of considerations such as expertise and authority, but also diversity and avoiding conflicts. In this paper, we explore the expert retrieval problem and implement an automatic paper-reviewer recommendation system that considers aspects of expertise, authority, and diversity. In particular, a graph is first constructed on the possible reviewers and the query paper, incorporating expertise and authority information. Then a Random Walk with Restart (RWR) [1] model is employed on the graph with a sparsity constraint, incorporating diversity information. Extensive experiments on two reviewer recommendation benchmark datasets show that the proposed method obtains performance gains over state-of-the-art reviewer recommendation systems in terms of expertise, authority, diversity, and, most importantly, relevance as judged by human experts.", "title": "" } ]
scidocsrr
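One of the passages in the second list of the row just closed describes predicting type II diabetes risk from readily available indicators (age, weight, waist, hip) with a random forest and contrasting it with naive Bayes, ID3, and AdaBoost. The snippet below is only a generic scikit-learn sketch of that kind of ensemble on synthetic stand-in data; it does not reproduce the study's clinical dataset, preprocessing, or reported results, and all parameters are illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-in features (e.g. age, weight, waist, hip) and labels;
# the real study used clinical records, which are not reproduced here.
X = rng.normal(size=(500, 4))
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Many decision trees trained on bootstrap samples, combined by voting.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
print("feature importances:", clf.feature_importances_)
```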
d76123f373e28fd6c4fde1108caf3825
A comparison of pilot-aided channel estimation methods for OFDM systems
[ { "docid": "822b3d69fd4c55f45a30ff866c78c2b1", "text": "Orthogonal frequency-division multiplexing (OFDM) modulation is a promising technique for achieving the high bit rates required for a wireless multimedia service. Without channel estimation and tracking, OFDM systems have to use differential phase-shift keying (DPSK), which has a 3-dB signalto-noise ratio (SNR) loss compared with coherent phase-shift keying (PSK). To improve the performance of OFDM systems by using coherent PSK, we investigate robust channel estimation for OFDM systems. We derive a minimum mean-square-error (MMSE) channel estimator, which makes full use of the timeand frequency-domain correlations of the frequency response of time-varying dispersive fading channels. Since the channel statistics are usually unknown, we also analyze the mismatch of the estimator-to-channel statistics and propose a robust channel estimator that is insensitive to the channel statistics. The robust channel estimator can significantly improve the performance of OFDM systems in a rapid dispersive fading channel.", "title": "" } ]
[ { "docid": "1803a9dbb7955862c8a4d046f807897a", "text": "Vertebrate animals exploit the elastic properties of their tendons in several different ways. Firstly, metabolic energy can be saved in locomotion if tendons stretch and then recoil, storing and returning elastic strain energy, as the animal loses and regains kinetic energy. Leg tendons save energy in this way when birds and mammals run, and an aponeurosis in the back is also important in galloping mammals. Tendons may have similar energy-saving roles in other modes of locomotion, for example in cetacean swimming. Secondly, tendons can recoil elastically much faster than muscles can shorten, enabling animals to jump further than they otherwise could. Thirdly, tendon elasticity affects the control of muscles, enhancing force control at the expense of position control.", "title": "" }, { "docid": "81fd4801b7dbe39573a44f2af0e94b9a", "text": "In this paper, we propose a conceptual framework for assessing the salience of landmarks for navigation. Landmark salience is derived as a result of the observer’s point of view, both physical and cognitive, the surrounding environment, and the objects contained therein. This is in contrast to the currently held view that salience is an inherent property of some spatial feature. Salience, in our approach, is expressed as a three-valued Saliency Vector. The components that determine this vector are Perceptual Salience, which defines the exogenous (or passive) potential of an object or region for acquisition of visual attention, Cognitive Salience, which is an endogenous (or active) mode of orienting attention, triggered by informative cues providing advance information about the target location, and Contextual Salience, which is tightly coupled to modality and task to be performed. This separation between voluntary and involuntary direction of visual attention in dependence of the context allows defining a framework that accounts for the interaction between observer, environment, and landmark. We identify the low-level factors that contribute to each type of salience and suggest a probabilistic approach for their integration. Finally, we discuss the implications, consider restrictions, and explore the scope of the framework.", "title": "" }, { "docid": "4bac03c1e5c5cad93595dd38954a8a94", "text": "This paper addresses the problem of path prediction for multiple interacting agents in a scene, which is a crucial step for many autonomous platforms such as self-driving cars and social robots. We present SoPhie; an interpretable framework based on Generative Adversarial Network (GAN), which leverages two sources of information, the path history of all the agents in a scene, and the scene context information, using images of the scene. To predict a future path for an agent, both physical and social information must be leveraged. Previous work has not been successful to jointly model physical and social interactions. Our approach blends a social attention mechanism with a physical attention that helps the model to learn where to look in a large scene and extract the most salient parts of the image relevant to the path. Whereas, the social attention component aggregates information across the different agent interactions and extracts the most important trajectory information from the surrounding neighbors. SoPhie also takes advantage of GAN to generates more realistic samples and to capture the uncertain nature of the future paths by modeling its distribution. 
All these mechanisms enable our approach to predict socially and physically plausible paths for the agents and to achieve state-of-the-art performance on several different trajectory forecasting benchmarks.", "title": "" }, { "docid": "a872ab9351dc645b5799d576f5f10eb6", "text": "A new framework for advanced manufacturing is being promoted in Germany, and is increasingly being adopted by other countries. The framework represents a coalescing of digital and physical technologies along the product value chain in an attempt to transform the production of goods and services1. It is an approach that focuses on combining technologies such as additive manufacturing, automation, digital services and the Internet of Things, and it is part of a growing movement towards exploiting the convergence between emerging technologies. This technological convergence is increasingly being referred to as the ‘fourth industrial revolution’, and like its predecessors, it promises to transform the ways we live and the environments we live in. (While there is no universal agreement on what constitutes an ‘industrial revolution’, proponents of the fourth industrial revolution suggest that the first involved harnessing steam power to mechanize production; the second, the use of electricity in mass production; and the third, the use of electronics and information technology to automate production.) Yet, without up-front efforts to ensure its beneficial, responsible and responsive development, there is a very real danger that this fourth industrial revolution will not only fail to deliver on its promise, but also ultimately increase the very challenges its advocates set out to solve. At its heart, the fourth industrial revolution represents an unprecedented fusion between and across digital, physical and biological technologies, and a resulting anticipated transformation in how products are made and used2. This is already being experienced with the growing Internet of Things, where dynamic information exchanges between networked devices are opening up new possibilities from manufacturing to lifestyle enhancement and risk management. Similarly, a rapid amplification of 3D printing capabilities is now emerging through the convergence of additive manufacturing technologies, online data sharing and processing, advanced materials, and ‘printable’ biological systems. And we are just beginning to see the commercial use of potentially transformative convergence between cloud-based artificial intelligence and open-source hardware and software, to create novel platforms for innovative human–machine interfaces. These and other areas of development only scratch the surface of how convergence is anticipated to massively extend the impacts of the individual technologies it draws on. This is a revolution that comes with the promise of transformative social, economic and environmental advances — from eliminating disease, protecting the environment, and providing plentiful energy, food and water, to reducing inequity and empowering individuals and communities. Yet, the path towards this utopia-esque future is fraught with pitfalls — perhaps more so than with any former industrial revolution. As more people get closer to gaining access to increasingly powerful converging technologies, a complex risk landscape is emerging that lies dangerously far beyond the ken of current regulations and governance frameworks. 
As a result, we are in danger of creating a global ‘wild west’ of technology innovation, where our good intentions may be among the first casualties. Within this emerging landscape, cyber security is becoming an increasingly important challenge, as global digital networks open up access to manufacturing processes and connected products across the world. The risks of cyber ‘insecurity’ increase by orders of magnitude as manufacturing becomes more distributed and less conventionally securable. Distributed manufacturing is another likely outcome of the fourth industrial revolution. A powerful fusion between online resources, modular and open-source tech, and point-of-source production devices, such as 3D printers, will increasingly enable entrepreneurs to set up shop almost anywhere. While this could be a boon for local economies, it magnifies the ease with which manufacturing can slip the net of conventional regulation, while still having the ability to have a global impact. These and other challenges reflect a blurring of the line between hardware and software systems that is characteristic of the fourth industrial revolution. We are heading rapidly towards a future where hardware manufacturers are able to grow, crash and evolve physical products with the same speed that we have become accustomed to with software products. Yet, manufacturing regulations remain based on product development cycles that span years, not hours. Anticipating this high-speed future, we are already seeing the emergence of hardware capabilities that can be updated at the push of a button. Tesla Motors, for instance, recently released a software update that added hardware-based ‘autopilot’ capabilities to the company’s existing fleet of model S vehicles3. This early demonstration of the convergence between hardware and software reflects a growing capacity to rapidly change the behaviour of hardware systems through software modifications that lies far beyond the capacity of current regulations to identify, monitor and control. This in turn increases the potential risks to health, safety and the environment, simply because well-intentioned technologies are at some point going to fall through the holes in an increasingly inadequate regulatory net. There are many other examples where converging technologies are increasing the gap between what we can do and our understanding of how to do it responsibly. The convergence between robotics, nanotechnology and cognitive augmentation, for instance, and that between artificial intelligence, gene editing and maker communities both push us into uncertain territory. Yet despite the vulnerabilities inherent with fast-evolving technological capabilities that are tightly coupled, complex and poorly regulated, we lack even the beginnings of national or international conceptual frameworks to think about responsible decisionmaking and responsive governance. How vulnerable we will be to unintended and unwanted consequences in this convergent technologies future is unclear. 
What is clear though is that, without new thinking on risk, resilience and governance, and without rapidly emerging abilities to identify early warnings and take corrective action, the chances of systems based around converging technologies failing fast and failing spectacularly will only increase.", "title": "" }, { "docid": "bd38c54756349c002962d0f25aed8d1b", "text": "Textbook and Color Atlas of Traumatic Injuries to the Teeth encompasses the full scope of acute dental trauma, including all aspects of inter-disciplinary treatment. This fourth edition captures the significant advances which have been made in the subject of dental traumatology, since the publication of the last edition more than a decade ago. The comprehensive nature of the book is designed to appeal to distinguished clinicians and scholars of dental traumatology, whether they be oral surgeons, pediatric dentists, endodontists, or from a related specialist community.", "title": "" }, { "docid": "b49925f5380f695ccc3f9a150030051c", "text": "Understanding the behaviour of algorithms is a key element of computer science. However, this learning objective is not always easy to achieve, as the behaviour of some algorithms is complicated or not readily observable, or affected by the values of their input parameters. To assist students in learning the multilevel feedback queue scheduling algorithm (MLFQ), we designed and developed an interactive visualization tool, Marble MLFQ, that illustrates how the algorithm works under various conditions. The tool is intended to supplement course material and instructions in an undergraduate operating systems course. The main features of Marble MLFQ are threefold: (1) It animates the steps of the scheduling algorithm graphically to allow users to observe its behaviour; (2) It provides a series of lessons to help users understand various aspects of the algorithm; and (3) It enables users to customize input values to the algorithm to support exploratory learning.", "title": "" }, { "docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "74fb6f153fe8d6f8eac0f18c1040a659", "text": "The DAVID Gene Functional Classification Tool http://david.abcc.ncifcrf.gov uses a novel agglomeration algorithm to condense a list of genes or associated biological terms into organized classes of related genes or biology, called biological modules. This organization is accomplished by mining the complex biological co-occurrences found in multiple sources of functional annotation. It is a powerful method to group functionally related genes and terms into a manageable number of biological modules for efficient interpretation of gene lists in a network context.", "title": "" }, { "docid": "d82553a7bf94647aaf60eb36748e567f", "text": "We propose a novel image-based rendering algorithm for handling complex scenes that may include reflective surfaces. Our key contribution lies in treating the problem in the gradient domain. We use a standard technique to estimate scene depth, but assign depths to image gradients rather than pixels. 
A novel view is obtained by rendering the horizontal and vertical gradients, from which the final result is reconstructed through Poisson integration using an approximate solution as a data term. Our algorithm is able to handle general scenes including reflections and similar effects without explicitly separating the scene into reflective and transmissive parts, as required by previous work. Our prototype renderer is fully implemented on the GPU and runs in real time on commodity hardware.", "title": "" }, { "docid": "d06f27b688f430acf5652fd4c67905b1", "text": "A comprehensive in vitro study involving antiglycation, antioxidant and anti-diabetic assays was carried out in mature fruits of strawberry. The effect of aqueous extract of mature strawberry fruits on glycation of guanosine with glucose and fructose with or without oxidizing entities like reactive oxygen species was analyzed. Spectral studies showed that glycation and/or fructation of guanosine was significantly inhibited by aqueous extract of strawberry. The UV absorbance of the glycation reactions was found to be maximum at 24 hrs. and decreased consecutively for 48, 72 and 96 hours. Inhibition of oxidative damage due to reactive oxygen species was also observed in presence of the plant extract. To our knowledge, antiglycation activity of strawberry fruit with reference to guanosine is being demonstrated for the first time. To determine the antioxidant activity of the plant extract, in vitro antioxidant enzymes assays (catalase, peroxidase, polyphenol oxidase and ascorbic acid oxidase) and antioxidant assays (DPPH, superoxide anion scavenging activity and xanthine oxidase) were performed. Maximum inhibition activity of 79.36%, 65.62% and 62.78% was observed for DPPH, superoxide anion scavenging and xanthine oxidase, respectively. In antidiabetic assays, IC50 value for alpha – amylase and alpha – glucosidase activity of fruit extract of strawberry was found to be 86.47 ± 1.12μg/ml and 76.83 ± 0.93 μg/ml, respectively. Thus, the aqueous extract of strawberry showed antiglycation, antioxidant and antidiabetic properties indicating that strawberry fruits, as a dietary supplement, may be utilized towards management of diabetes.", "title": "" }, { "docid": "2643c7960df0aed773aeca6e04fde67e", "text": "Many studies utilizing dogs, cats, birds, fish, and robotic simulations of animals have tried to ascertain the health benefits of pet ownership or animal-assisted therapy in the elderly. Several small unblinded investigations outlined improvements in behavior in demented persons given treatment in the presence of animals. Studies piloting the use of animals in the treatment of depression and schizophrenia have yielded mixed results. Animals may provide intangible benefits to the mental health of older persons, such as relief social isolation and boredom, but these have not been formally studied. Several investigations of the effect of pets on physical health suggest animals can lower blood pressure, and dog walkers partake in more physical activity. Dog walking, in epidemiological studies and few preliminary trials, is associated with lower complication risk among patients with cardiovascular disease. Pets may also have harms: they may be expensive to care for, and their owners are more likely to fall. Theoretically, zoonotic infections and bites can occur, but how often this occurs in the context of pet ownership or animal-assisted therapy is unknown. 
Despite the poor methodological quality of pet research after decades of study, pet ownership and animal-assisted therapy are likely to continue due to positive subjective feelings many people have toward animals.", "title": "" }, { "docid": "1165be411612c7d6c09ec0408ffdeaad", "text": "OBJECTIVES\nTo describe and compare 20 m shuttle run test (20mSRT) performance among children and youth across 50 countries; to explore broad socioeconomic indicators that correlate with 20mSRT performance in children and youth across countries and to evaluate the utility of the 20mSRT as an international population health indicator for children and youth.\n\n\nMETHODS\nA systematic review was undertaken to identify papers that explicitly reported descriptive 20mSRT (with 1-min stages) data on apparently healthy 9-17 year-olds. Descriptive data were standardised to running speed (km/h) at the last completed stage. Country-specific 20mSRT performance indices were calculated as population-weighted mean z-scores relative to all children of the same age and sex from all countries. Countries were categorised into developed and developing groups based on the Human Development Index, and a correlational analysis was performed to describe the association between country-specific performance indices and broad socioeconomic indicators using Spearman's rank correlation coefficient.\n\n\nRESULTS\nPerformance indices were calculated for 50 countries using collated data on 1 142 026 children and youth aged 9-17 years. The best performing countries were from Africa and Central-Northern Europe. Countries from South America were consistently among the worst performing countries. Country-specific income inequality (Gini index) was a strong negative correlate of the performance index across all 50 countries.\n\n\nCONCLUSIONS\nThe pattern of variability in the performance index broadly supports the theory of a physical activity transition and income inequality as the strongest structural determinant of health in children and youth. This simple and cost-effective assessment would be a powerful tool for international population health surveillance.", "title": "" }, { "docid": "af495aaae51ead951246733d088a2a47", "text": "In this paper, we present a novel parallel implementation for training Gradient Boosting Decision Trees (GBDTs) on Graphics Processing Units (GPUs). Thanks to the wide use of the open sourced XGBoost library, GBDTs have become very popular in recent years and won many awards in machine learning and data mining competitions. Although GPUs have demonstrated their success in accelerating many machine learning applications, there are a series of key challenges of developing a GPU-based GBDT algorithm, including irregular memory accesses, many small sorting operations and varying data parallel granularities in tree construction. To tackle these challenges on GPUs, we propose various novel techniques (including Run-length Encoding compression and thread/block workload dynamic allocation, and reusing intermediate training results for efficient gradient computation). Our experimental results show that our algorithm named GPU-GBDT is often 10 to 20 times faster than the sequential version of XGBoost, and achieves 1.5 to 2 times speedup over a 40 threaded XGBoost running on a relatively high-end workstation of 20 CPU cores. 
Moreover, GPU-GBDT outperforms its CPU counterpart by 2 to 3 times in terms of performance-price ratio.", "title": "" }, { "docid": "b8f81b8274dc466114d945bb3a597fea", "text": "SIGNIFICANCE\nNonalcoholic fatty liver disease (NAFLD), characterized by liver triacylglycerol build-up, has been growing in the global world in concert with the raised prevalence of cardiometabolic disorders, including obesity, diabetes, and hyperlipemia. Redox imbalance has been suggested to be highly relevant to NAFLD pathogenesis. Recent Advances: As a major health problem, NAFLD progresses to the more severe nonalcoholic steatohepatitis (NASH) condition and predisposes susceptible individuals to liver and cardiovascular disease. Although NAFLD represents the predominant cause of chronic liver disorders, the mechanisms of its development and progression remain incompletely understood, even if various scientific groups ascribed them to the occurrence of insulin resistance, dyslipidemia, inflammation, and apoptosis. Nevertheless, oxidative stress (OxS) more and more appears as the most important pathological event during NAFLD development and the hallmark between simple steatosis and NASH manifestation.\n\n\nCRITICAL ISSUES\nThe purpose of this article is to summarize recent developments in the understanding of NAFLD, essentially focusing on OxS as a major pathogenetic mechanism. Various attempts to translate reactive oxygen species (ROS) scavenging by antioxidants into experimental and clinical studies have yielded mostly encouraging results.\n\n\nFUTURE DIRECTIONS\nAlthough augmented concentrations of ROS and faulty antioxidant defense have been associated to NAFLD and related complications, mechanisms of action and proofs of principle should be highlighted to support the causative role of OxS and to translate its concept into the clinic. Antioxid. Redox Signal. 26, 519-541.", "title": "" }, { "docid": "3933d3ae98f7f83e8b501858402bfefe", "text": "The evaluation of the postural control system (PCS) has applications in rehabilitation, sports medicine, gait analysis, fall detection, and diagnosis of many diseases associated with a reduction in balance ability. Standing involves significant muscle use to maintain balance, making standing balance a good indicator of the health of the PCS. Inertial sensor systems have been used to quantify standing balance by assessing displacement of the center of mass, resulting in several standardized measures. Electromyogram (EMG) sensors directly measure the muscle control signals. Despite strong evidence of the potential of muscle activity for balance evaluation, less study has been done on extracting unique features from EMG data that express balance abnormalities. In this paper, we present machine learning and statistical techniques to extract parameters from EMG sensors placed on the tibialis anterior and gastrocnemius muscles, which show a strong correlation to the standard parameters extracted from accelerometer data. This novel interpretation of the neuromuscular system provides a unique method of assessing human balance based on EMG signals. In order to verify the effectiveness of the introduced features in measuring postural sway, we conduct several classification tests that operate on the EMG features and predict significance of different balance measures.", "title": "" }, { "docid": "a212ba02d2546ee33e42fe26f4b05295", "text": "The requirement to operate aircraft in GPS-denied environments can be met by using visual odometry. 
Aiming at a full-scale aircraft equipped with a high-accuracy inertial navigation system (INS), the proposed method combines vision and the INS for odometry estimation. With such an INS, the aircraft orientation is accurate with low drift, but it contains high-frequency noise that can affect the vehicle motion estimation, causing position estimation to drift. Our method takes the INS orientation as input and estimates translation. During motion estimation, the method virtually rotates the camera by reparametrizing features with their depth direction perpendicular to the ground. This partially eliminates error accumulation in motion estimation caused by the INS high-frequency noise, resulting in a slow drift. We experiment on two hardware configurations in the acquisition of depth for the visual features: 1) the height of the aircraft above the ground is measured by an altimeter assuming that the imaged ground is a local planar patch, and 2) the depth map of the ground is registered with a two-dimensional laser in a push-broom configuration. The method is tested with data collected from a full-scale helicopter. The accumulative flying distance for the overall tests is approximately 78 km. We observe slightly better accuracy with the push-broom laser than the altimeter. C © 2015 Wiley Periodicals, Inc.", "title": "" }, { "docid": "b91c93a552e7d7cc09d477289c986498", "text": "Application Programming Interface (API) documents are a typical way of describing legal usage of reusable software libraries, thus facilitating software reuse. However, even with such documents, developers often overlook some documents and build software systems that are inconsistent with the legal usage of those libraries. Existing software verification tools require formal specifications (such as code contracts), and therefore cannot directly verify the legal usage described in natural language text of API documents against the code using that library. However, in practice, most libraries do not come with formal specifications, thus hindering tool-based verification. To address this issue, we propose a novel approach to infer formal specifications from natural language text of API documents. Our evaluation results show that our approach achieves an average of 92% precision and 93% recall in identifying sentences that describe code contracts from more than 2500 sentences of API documents. Furthermore, our results show that our approach has an average 83% accuracy in inferring specifications from over 1600 sentences describing code contracts.", "title": "" }, { "docid": "5f49c93d7007f0f14f1410ce7805b29a", "text": "Die Psychoedukation im Sinne eines biopsychosozialen Schmerzmodells zielt auf das Erkennen und Verändern individueller schmerzauslösender und -aufrechterhaltender Faktoren ab. Der Einfluss kognitiver Bewertungen, emotionaler Verarbeitungsprozesse und schmerzbezogener Verhaltensweisen steht dabei im Mittelpunkt. Die Anregung und Anleitung zu einer verbesserten Selbstbeobachtung stellt die Voraussetzung zum Einsatz aktiver Selbstkontrollstrategien und zur Erhöhung der Selbstwirksamkeitserwartung dar. Dazu zählt die Entwicklung und Erarbeitung von Schmerzbewältigungsstrategien wie z. B. Aufmerksamkeitslenkung und Genusstraining. Eine besondere Bedeutung kommt dem Aufbau einer Aktivitätenregulation zur Strukturierung eines angemessenen Verhältnisses von Erholungs- und Anforderungsphasen zu. 
Interventionsmöglichkeiten stellen hier die Vermittlung von Entspannungstechniken, Problemlösetraining, spezifisches Kompetenztraining sowie Elemente der kognitiven Therapie dar. Der Aufbau alternativer kognitiver und handlungsbezogener Lösungsansätze dient einer verbesserten Bewältigung internaler und externaler Stressoren. Genutzt werden die förderlichen Bedingungen gruppendynamischer Prozesse. Einzeltherapeutische Interventionen dienen der Bearbeitung spezifischer psychischer Komorbiditäten und der individuellen Unterstützung bei der beruflichen und sozialen Wiedereingliederung. Providing the patient with a pain model based on the biopsychosocial approach is one of the most important issues in psychological intervention. Illness behaviour is influenced by pain-eliciting and pain-aggravating thoughts. Identification and modification of these thoughts is essential and aims to change cognitive evaluations, emotional processing, and pain-referred behaviour. Improved self-monitoring concerning maladaptive thoughts, feelings, and behaviour enables functional coping strategies (e.g. attention diversion and learning to enjoy things) and enhances self-efficacy expectancies. Of special importance is the establishment of an appropriate balance between stress and recreation. Intervention options include teaching relaxation techniques, problem-solving strategies, and specific skills as well as applying appropriate elements of cognitive therapy. The development of alternative cognitive and action-based strategies improves the patient’s ability to cope with internal and external stressors. All of the psychological elements are carried out in a group setting. Additionally, individual therapy is offered to treat comorbidities or to support reintegration into the patient’s job.", "title": "" }, { "docid": "a059b3ef66c54ecbe43aa0e8d35b9da8", "text": "Completion of lagging strand DNA synthesis requires processing of up to 50 million Okazaki fragments per cell cycle in mammalian cells. Even in yeast, the Okazaki fragment maturation happens approximately a million times during a single round of DNA replication. Therefore, efficient processing of Okazaki fragments is vital for DNA replication and cell proliferation. During this process, primase-synthesized RNA/DNA primers are removed, and Okazaki fragments are joined into an intact lagging strand DNA. The processing of RNA/DNA primers requires a group of structure-specific nucleases typified by flap endonuclease 1 (FEN1). Here, we summarize the distinct roles of these nucleases in different pathways for removal of RNA/DNA primers. Recent findings reveal that Okazaki fragment maturation is highly coordinated. The dynamic interactions of polymerase δ, FEN1 and DNA ligase I with proliferating cell nuclear antigen allow these enzymes to act sequentially during Okazaki fragment maturation. Such protein-protein interactions may be regulated by post-translational modifications. We also discuss studies using mutant mouse models that suggest two distinct cancer etiological mechanisms arising from defects in different steps of Okazaki fragment maturation. Mutations that affect the efficiency of RNA primer removal may result in accumulation of unligated nicks and DNA double-strand breaks. These DNA strand breaks can cause varying forms of chromosome aberrations, contributing to development of cancer that associates with aneuploidy and gross chromosomal rearrangement. 
On the other hand, mutations that impair editing out of polymerase α incorporation errors result in cancer displaying a strong mutator phenotype.", "title": "" }, { "docid": "5a7e85bd8df70ab29d7549bed6cf440e", "text": "The surgery-first approach in orthognathic surgery has recently created a broader interest in completely eliminating time-consuming preoperative orthodontic treatment. Available evidence on the surgery-first approach should be appraised to support its use in orthognathic surgery. A MEDLINE search using the keywords \"surgery first\" and \"orthognathic surgery\" was conducted to select studies using the surgery-first approach. We also manually searched the reference list of the selected keywords to include articles not selected by the MEDLINE search. The search identified 18 articles related to the surgery-first approach. There was no randomized controlled clinical trial. Four papers were excluded as the content was only personal opinion or basic scientific research. Three studies were retrospective cohort studies in nature. The other 11 studies were case reports. For skeletal Class III surgical correction, the final long-term outcomes for maxillofacial and dental relationship were not significantly different between the surgery-first approach and the orthodontics-first approach in transverse (e.g., intercanine or intermolar width) dimension, vertical (e.g., anterior open bite, lower anterior facial height) dimension, and sagittal (e.g., anterior-posterior position of pogonion and lower incisors) dimension. Total treatment duration was substantially shorter in cases of surgery-first approach use. In conclusion, most published studies related to the surgery-first approach were mainly on orthognathic correction of skeletal Class III malocclusion. Both the surgery-first approach and orthodontics-first approach had similar long-term outcomes in dentofacial relationship. However, the surgery-first approach had shorter treatment time.", "title": "" } ]
scidocsrr
2fee5b8583a79d55ecedda5efefe2cde
Security of the Blockchain Against Long Delay Attack
[ { "docid": "a172cd697bfcb1f3d2a824bb6a5bb6d1", "text": "Bitcoin provides two incentives for miners: block rewards and transaction fees. The former accounts for the vast majority of miner revenues at the beginning of the system, but it is expected to transition to the latter as the block rewards dwindle. There has been an implicit belief that whether miners are paid by block rewards or transaction fees does not affect the security of the block chain.\n We show that this is not the case. Our key insight is that with only transaction fees, the variance of the block reward is very high due to the exponentially distributed block arrival time, and it becomes attractive to fork a \"wealthy\" block to \"steal\" the rewards therein. We show that this results in an equilibrium with undesirable properties for Bitcoin's security and performance, and even non-equilibria in some circumstances. We also revisit selfish mining and show that it can be made profitable for a miner with an arbitrarily low hash power share, and who is arbitrarily poorly connected within the network. Our results are derived from theoretical analysis and confirmed by a new Bitcoin mining simulator that may be of independent interest.\n We discuss the troubling implications of our results for Bitcoin's future security and draw lessons for the design of new cryptocurrencies.", "title": "" }, { "docid": "bcce4f80c84a22722481e55eefb4830f", "text": "State machine replication, or “consensus”, is a central abstraction for distributed systems where a set of nodes seek to agree on an ever-growing, linearly-ordered log. In this paper, we propose a practical new paradigm called Thunderella for achieving state machine replication by combining a fast, asynchronous path with a (slow) synchronous “fall-back” path (which only gets executed if something goes wrong); as a consequence, we get simple state machine replications that essentially are as robust as the best synchronous protocols, yet “optimistically” (if a super majority of the players are honest), the protocol “instantly” confirms transactions. We provide instantiations of this paradigm in both permissionless (using proof-of-work) and permissioned settings. Most notably, this yields a new blockchain protocol (for the permissionless setting) that remains resilient assuming only that a majority of the computing power is controlled by honest players, yet optimistically—if 3/4 of the computing power is controlled by honest players, and a special player called the “accelerator”, is honest—transactions are confirmed as fast as the actual message delay in the network. We additionally show the 3/4 optimistic bound is tight for protocols that are resilient assuming only an honest majority.", "title": "" }, { "docid": "c19863ef5fa4979f288763837e887a7c", "text": "Decentralized cryptocurrencies have pushed deployments of distributed consensus to more stringent environments than ever before. Most existing protocols rely on proofs-of-work which require expensive computational puzzles to enforce, imprecisely speaking, “one vote per unit of computation”. The enormous amount of energy wasted by these protocols has been a topic of central debate, and well-known cryptocurrencies have announced it a top priority to alternative paradigms. Among the proposed alternative solutions, proofs-of-stake protocols have been of particular interest, where roughly speaking, the idea is to enforce “one vote per unit of stake”. 
Although the community has rushed to propose numerous candidates for proofs-of-stake, no existing protocol has offered formal proofs of security, which we believe to be a critical, indispensable ingredient of a distributed consensus protocol, particularly one that is to underlie a high-value cryptocurrency system. In this work, we seek to address the following basic questions: • What kind of functionalities and robustness requirements should a consensus candidate offer to be suitable in a proof-of-stake application? • Can we design a provably secure protocol that satisfies these requirements? To the best of our knowledge, we are the first to formally articulate a set of requirements for consensus candidates for proofs-of-stake. We argue that any consensus protocol satisfying these properties can be used for proofs-of-stake, as long as money does not switch hands too quickly. Moreover, we provide the first consensus candidate that provably satisfies the desired robustness properties.", "title": "" }, { "docid": "3967c0f7013aa7838bc384ca3e0e40e3", "text": "Bitcoin and hundreds of other cryptocurrencies employ a consensus protocol called Nakamoto consensus which rewards miners for maintaining a public blockchain. In this paper, we study the security of this protocol with respect to rational miners and show how a minority of the computation power can incentivize the rest of the network to accept a blockchain of the minority’s choice. By deviating from the mining protocol, a mining pool which controls at least 38.2% of the network’s total computational power can, with modest financial capacity, gain mining advantage over honest mining. Such an attack creates a longer valid blockchain by forking the honest blockchain, and the attacker’s blockchain need not disrupt any “legitimate” non-mining transactions present on the honest blockchain. By subverting the consensus protocol, the attacking pool can double-spend money or simply create a blockchain that pays mining rewards to the attacker’s pool. We show that our attacks are easy to encode in any Nakamoto-consensus-based cryptocurrency which supports a scripting language that is sufficiently expressive to encode its own mining puzzles.", "title": "" } ]
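The first positive passage above argues that, once block subsidies disappear, a miner's per-block income inherits the variance of the exponentially distributed block inter-arrival time, which is what makes fork-and-steal strategies attractive. The short Monte Carlo sketch below is not taken from any of the listed papers; the 600 s mean interval, fee rate, and subsidy value are illustrative assumptions. It simply makes the variance gap concrete by comparing the coefficient of variation of per-block rewards under a fixed subsidy and under a fee-only regime where fees accrue at a constant rate.

```python
import random
import statistics

def simulate_rewards(n_blocks=100_000, mean_interval=600.0,
                     fee_rate=0.01, block_subsidy=6.25, seed=7):
    """Compare per-block reward variability under a fixed subsidy versus a
    fee-only regime, assuming fees accrue at a constant rate (fee_rate coins/s)
    and block inter-arrival times are exponential with the given mean."""
    rng = random.Random(seed)
    intervals = [rng.expovariate(1.0 / mean_interval) for _ in range(n_blocks)]
    subsidy_rewards = [block_subsidy for _ in intervals]    # constant reward per block
    fee_only_rewards = [fee_rate * dt for dt in intervals]  # reward proportional to waiting time

    def cv(xs):
        # coefficient of variation = population stdev / mean
        return statistics.pstdev(xs) / statistics.fmean(xs)

    return cv(subsidy_rewards), cv(fee_only_rewards)

if __name__ == "__main__":
    cv_subsidy, cv_fees = simulate_rewards()
    print(f"coefficient of variation, fixed subsidy: {cv_subsidy:.3f}")
    print(f"coefficient of variation, fee-only:      {cv_fees:.3f}  (close to 1 for exponential intervals)")
```

Under these assumptions the fee-only reward is proportional to an exponential random variable, so its coefficient of variation is close to 1, while the fixed subsidy has none; this is the instability the passage attributes to the fee-only era.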
[ { "docid": "97da7e7b07775f58c86d26a2b714ba9f", "text": "Nowadays, visual object recognition is one of the key applications for computer vision and deep learning techniques. With the recent development in mobile computing technology, many deep learning framework software support Personal Digital Assistant systems, i.e., smart phones or tablets, allowing developers to conceive innovative applications. In this work, we intend to employ such ICT strategies with the aim of supporting the tourism in an art city: for these reasons, we propose to provide tourists with a mobile application in order to better explore artistic heritage within an urban environment by using just their smartphone's camera. The software solution is based on Google TensorFlow, an innovative deep learning framework mainly designed for pattern recognition tasks. The paper presents our design choices and an early performance evaluation.", "title": "" }, { "docid": "fd5e6dcb20280daad202f34cd940e7ce", "text": "Chapters cover topics in areas such as P and NP, space complexity, randomness, computational problems that are (or appear) infeasible to solve, pseudo-random generators, and probabilistic proof systems. The introduction nicely summarizes the material covered in the rest of the book and includes a diagram of dependencies between chapter topics. Initial chapters cover preliminary topics as preparation for the rest of the book. These are more than topical or historical summaries but generally not sufficient to fully prepare the reader for later material. Readers should approach this text already competent at undergraduate-level algorithms in areas such as basic analysis, algorithm strategies, fundamental algorithm techniques, and the basics for determining computability. Elective work in P versus NP or advanced analysis would be valuable but that isn‟t really required.", "title": "" }, { "docid": "fc9eb12afb2c86005ae4f06835feb6cc", "text": "Peer pressure is a reoccurring phenomenon in criminal or deviant behaviour especially, as it pertains to adolescents. It may begin in early childhood of about 5years and increase through childhood to become more intense in adolescence years. This paper examines how peer pressure is present in adolescents and how it may influence or create the leverage to non-conformity to societal norms and laws. The paper analyses the process and occurrence of peer influence and pressure on individuals and groups within the framework of the social learning and the social control theories. Major features of the peer pressure process are identified as group dynamics, delinquent peer subculture, peer approval of delinquent behaviour and sanctions for non-conformity which include ridicule, mockery, ostracism and even mayhem or assault in some cases. Also, the paper highlights acceptance and rejection as key concepts that determine the sway or gladiation of adolescents to deviant and criminal behaviour. Finally, it concludes that peer pressure exists for conformity and in delinquent subculture, the result is conformity to criminal codes and behaviour. The paper recommends more urgent, serious and offensive grass root approaches by governments and institutions against this growing threat to the continued peace, orderliness and development of society.", "title": "" }, { "docid": "20cfcfde25db033db8d54fe7ae6fcca1", "text": "We present the first study that evaluates both speaker and listener identification for direct speech in literary texts. 
Our approach consists of two steps: identification of speakers and listeners near the quotes, and dialogue chain segmentation. Evaluation results show that this approach outperforms a rule-based approach that is stateof-the-art on a corpus of literary texts.", "title": "" }, { "docid": "ad4b137253407e4323e288b65b03bd08", "text": "We formulate a document summarization method to extract passage-level answers for non-factoid queries, referred to as answer-biased summaries. We propose to use external information from related Community Question Answering (CQA) content to better identify answer bearing sentences. Three optimization-based methods are proposed: (i) query-biased, (ii) CQA-answer-biased, and (iii) expanded-query-biased, where expansion terms were derived from related CQA content. A learning-to-rank-based method is also proposed that incorporates a feature extracted from related CQA content. Our results show that even if a CQA answer does not contain a perfect answer to a query, their content can be exploited to improve the extraction of answer-biased summaries from other corpora. The quality of CQA content is found to impact on the accuracy of optimization-based summaries, though medium quality answers enable the system to achieve a comparable (and in some cases superior) accuracy to state-of-the-art techniques. The learning-to-rank-based summaries, on the other hand, are not significantly influenced by CQA quality. We provide a recommendation of the best use of our proposed approaches in regard to the availability of different quality levels of related CQA content. As a further investigation, the reliability of our approaches was tested on another publicly available dataset.", "title": "" }, { "docid": "5637bed8be75d7e79a2c2adb95d4c28e", "text": "BACKGROUND\nLimited evidence exists to show that adding a third agent to platinum-doublet chemotherapy improves efficacy in the first-line advanced non-small-cell lung cancer (NSCLC) setting. The anti-PD-1 antibody pembrolizumab has shown efficacy as monotherapy in patients with advanced NSCLC and has a non-overlapping toxicity profile with chemotherapy. We assessed whether the addition of pembrolizumab to platinum-doublet chemotherapy improves efficacy in patients with advanced non-squamous NSCLC.\n\n\nMETHODS\nIn this randomised, open-label, phase 2 cohort of a multicohort study (KEYNOTE-021), patients were enrolled at 26 medical centres in the USA and Taiwan. Patients with chemotherapy-naive, stage IIIB or IV, non-squamous NSCLC without targetable EGFR or ALK genetic aberrations were randomly assigned (1:1) in blocks of four stratified by PD-L1 tumour proportion score (<1% vs ≥1%) using an interactive voice-response system to 4 cycles of pembrolizumab 200 mg plus carboplatin area under curve 5 mg/mL per min and pemetrexed 500 mg/m2 every 3 weeks followed by pembrolizumab for 24 months and indefinite pemetrexed maintenance therapy or to 4 cycles of carboplatin and pemetrexed alone followed by indefinite pemetrexed maintenance therapy. The primary endpoint was the proportion of patients who achieved an objective response, defined as the percentage of patients with radiologically confirmed complete or partial response according to Response Evaluation Criteria in Solid Tumors version 1.1 assessed by masked, independent central review, in the intention-to-treat population, defined as all patients who were allocated to study treatment. Significance threshold was p<0·025 (one sided). 
Safety was assessed in the as-treated population, defined as all patients who received at least one dose of the assigned study treatment. This trial, which is closed for enrolment but continuing for follow-up, is registered with ClinicalTrials.gov, number NCT02039674.\n\n\nFINDINGS\nBetween Nov 25, 2014, and Jan 25, 2016, 123 patients were enrolled; 60 were randomly assigned to the pembrolizumab plus chemotherapy group and 63 to the chemotherapy alone group. 33 (55%; 95% CI 42-68) of 60 patients in the pembrolizumab plus chemotherapy group achieved an objective response compared with 18 (29%; 18-41) of 63 patients in the chemotherapy alone group (estimated treatment difference 26% [95% CI 9-42%]; p=0·0016). The incidence of grade 3 or worse treatment-related adverse events was similar between groups (23 [39%] of 59 patients in the pembrolizumab plus chemotherapy group and 16 [26%] of 62 in the chemotherapy alone group). The most common grade 3 or worse treatment-related adverse events in the pembrolizumab plus chemotherapy group were anaemia (seven [12%] of 59) and decreased neutrophil count (three [5%]); an additional six events each occurred in two (3%) for acute kidney injury, decreased lymphocyte count, fatigue, neutropenia, and sepsis, and thrombocytopenia. In the chemotherapy alone group, the most common grade 3 or worse events were anaemia (nine [15%] of 62) and decreased neutrophil count, pancytopenia, and thrombocytopenia (two [3%] each). One (2%) of 59 patients in the pembrolizumab plus chemotherapy group experienced treatment-related death because of sepsis compared with two (3%) of 62 patients in the chemotherapy group: one because of sepsis and one because of pancytopenia.\n\n\nINTERPRETATION\nCombination of pembrolizumab, carboplatin, and pemetrexed could be an effective and tolerable first-line treatment option for patients with advanced non-squamous NSCLC. This finding is being further explored in an ongoing international, randomised, double-blind, phase 3 study.\n\n\nFUNDING\nMerck & Co.", "title": "" }, { "docid": "e50355a29533bc7a91468aae1053873d", "text": "A substrate integrated waveguide (SIW)-fed circularly polarized (CP) antenna array with a broad bandwidth of axial ratio (AR) is presented for 60-GHz wireless personal area networks (WPAN) applications. The widened AR bandwidth of an antenna element is achieved by positioning a slot-coupled rotated strip above a slot cut onto the broadwall of an SIW. A 4 × 4 antenna array is designed and fabricated using low temperature cofired ceramic (LTCC) technology. A metal-topped via fence is introduced around the strip to reduce the mutual coupling between the elements of the array. The measured results show that the AR bandwidth is more than 7 GHz. A stable boresight gain is greater than 12.5 dBic across the desired bandwidth of 57-64 GHz.", "title": "" }, { "docid": "287f30c5e338fc32a82cf2ec5366c6c5", "text": "A hallmark of mammalian immunity is the heterogeneity of cell fate that exists among pathogen-experienced lymphocytes. We show that a dividing T lymphocyte initially responding to a microbe exhibits unequal partitioning of proteins that mediate signaling, cell fate specification, and asymmetric cell division. Asymmetric segregation of determinants appears to be coordinated by prolonged interaction between the T cell and its antigen-presenting cell before division. Additionally, the first two daughter T cells displayed phenotypic and functional indicators of being differentially fated toward effector and memory lineages. 
These results suggest a mechanism by which a single lymphocyte can apportion diverse cell fates necessary for adaptive immunity.", "title": "" }, { "docid": "fd8bdd4ce030c53b1adcb22d667cb023", "text": "Cross-Lingual Learning provides a mechanism to adapt NLP tools available for label rich languages to achieve similar tasks for label-scarce languages. An efficient cross-lingual tool significantly reduces the cost and effort required to manually annotate data. In this paper, we use the Recursive Autoencoder architecture to develop a Cross Lingual Sentiment Analysis (CLSA) tool using sentence aligned corpora between a pair of resource rich (English) and resource poor (Hindi) language. The system is based on the assumption that semantic similarity between different phrases also implies sentiment similarity in majority of sentences. The resulting system is then analyzed on a newly developed Movie Reviews Dataset in Hindi with labels given on a rating scale and compare performance of our system against existing systems. It is shown that our approach significantly outperforms state of the art systems for Sentiment Analysis, especially when labeled data is scarce.", "title": "" }, { "docid": "27be379b6192aa6db9101b7ec18d5585", "text": "In this paper, we investigate the problem of detecting depression from recordings of subjects' speech using speech processing and machine learning. There has been considerable interest in this problem in recent years due to the potential for developing objective assessments from real-world behaviors, which may provide valuable supplementary clinical information or may be useful in screening. The cues for depression may be present in “what is said” (content) and “how it is said” (prosody). Given the limited amounts of text data, even in this relatively large study, it is difficult to employ standard method of learning models from n-gram features. Instead, we learn models using word representations in an alternative feature space of valence and arousal. This is akin to embedding words into a real vector space albeit with manual ratings instead of those learned with deep neural networks [1]. For extracting prosody, we employ standard feature extractors such as those implemented in openSMILE and compare them with features extracted from harmonic models that we have been developing in recent years. Our experiments show that our features from harmonic model improve the performance of detecting depression from spoken utterances than other alternatives. The context features provide additional improvements to achieve an accuracy of about 74%, sufficient to be useful in screening applications.", "title": "" }, { "docid": "3a5d43d86d39966aca2d93d1cf66b13d", "text": "In the current context of increased surveillance and security, more sophisticated and robust surveillance systems are needed. One idea relies on the use of pairs of video (visible spectrum) and thermal infrared (IR) cameras located around premises of interest. To automate the system, a robust person detection algorithm and the development of an efficient technique enabling the fusion of the information provided by the two sensors becomes necessary and these are described in this chapter. Recently, multi-sensor based image fusion system is a challenging task and fundamental to several modern day image processing applications, such as security systems, defence applications, and intelligent machines. Image fusion techniques have been actively investigated and have wide application in various fields. 
It is often a vital pre-processing procedure to many computer vision and image processing tasks which are dependent on the acquisition of imaging data via sensors, such as IR and visible. One such task is that of human detection. To detect humans with an artificial system is difficult for a number of reasons as shown in Figure 1 (Gavrila, 2001). The main challenge for a vision-based pedestrian detector is the high degree of variability with the human appearance due to articulated motion, body size, partial occlusion, inconsistent cloth texture, highly cluttered backgrounds and changing lighting conditions.", "title": "" }, { "docid": "f8b201105e3b92ed4ef2a884cb626c0d", "text": "Several years of academic and industrial research efforts have converged to a common understanding on fundamental security building blocks for the upcoming vehicular communication (VC) systems. There is a growing consensus toward deploying a special-purpose identity and credential management infrastructure, i.e., a vehicular public-key infrastructure (VPKI), enabling pseudonymous authentication, with standardization efforts toward that direction. In spite of the progress made by standardization bodies (IEEE 1609.2 and ETSI) and harmonization efforts [Car2Car Communication Consortium (C2C-CC)], significant questions remain unanswered toward deploying a VPKI. Deep understanding of the VPKI, a central building block of secure and privacy-preserving VC systems, is still lacking. This paper contributes to the closing of this gap. We present SECMACE, a VPKI system, which is compatible with the IEEE 1609.2 and ETSI standards specifications. We provide a detailed description of our state-of-the-art VPKI that improves upon existing proposals in terms of security and privacy protection, and efficiency. SECMACE facilitates multi-domain operations in the VC systems and enhances user privacy, notably preventing linking pseudonyms based on timing information and offering increased protection even against honest-but-curious VPKI entities. We propose multiple policies for the vehicle–VPKI interactions and two large-scale mobility trace data sets, based on which we evaluate the full-blown implementation of SECMACE. With very little attention on the VPKI performance thus far, our results reveal that modest computing resources can support a large area of vehicles with very few delays and the most promising policy in terms of privacy protection can be supported with moderate overhead.", "title": "" }, { "docid": "f583bd78a154d3317453e1cb02026b2d", "text": "PURPOSE\nTo evaluate the clinical performance of lithium disilicate (LiDiSi) crowns with a feather-edge finish line margin over a 9-year period.\n\n\nMATERIALS AND METHODS\nIn total, 110 lithium disilicate crowns, 40 anterior (36.3%) and 70 posterior (63.7%), were cemented with resin cement after fluoridric acid and silane surface treatment and observed by a different clinician. The data were analyzed using the Kaplan-Meier method. The clinical evaluation used the California Dental Association (CDA) modified criteria after recalling all patients between January and April 2013.\n\n\nRESULTS\nTwo crowns had failed and were replaced due to core fractures. One chipping occurred on a first molar and the ceramic surface was polished. 
The overall survival probability was 96.1% up to 9 years, with a failure rate of 1.8%.\n\n\nCONCLUSION\nIn this retrospective analysis, lithium disilicate with a vertical finish line used in single-crown restorations had a low clinical failure rate up to 9 years.", "title": "" }, { "docid": "79e565a569d72836e089067e35b5844c", "text": "The author has tried to show, in detail and with precision. just how the global regularities with which biology deals can be envisaged as structures within a many-dimensioned space. He not only has shown how such ideas as chreods, the epigenetic landscape, and switching points, which previously were expressed only in the unsophisticated language of biology, can be formulated more adequately in terms such as vector fields, attractors, catastrophes. and the like; going much further than this. he develops many highly original ideas, both strictly mathematical ones within the field of topology, and applications of these to very many aspects of biology and of other sciences. It would be quite wrong to give the impression that Thorn's book is exclusively devoted to biology, The subjects mentioned in his title, Structural Stability and Morphoge.esis, have a much wider reference: and he relates his topological system of thought to physical and indeed to general philosophical problems. In biology, Thorn not only uses topological modes of thought to provide formal definitions of concepts and a logical framework by which they can be related; he also makes a bold attempt at a direct comparison between topological structures within four-dimensional space-time, such as catastrophe hypersurfaces, and the physical structures found in developing embryos. The basic importance of this book is the introduction, in a massive and thorough way, of topological thinking as a framework for theoretical biology.", "title": "" }, { "docid": "a6e4a1912f2a0e58f97f4b5a5ab93dec", "text": "An adaptive fuzzy inference neural network (AFINN) is proposed in this paper. It has self-construction ability, parameter estimation ability and rule extraction ability. The structure of AFINN is formed by the following four phases: (1) initial rule creation, (2) selection of important input elements, (3) identification of the network structure and (4) parameter estimation using LMS (least-mean square) algorithm. When the number of input dimension is large, the conventional fuzzy systems often cannot handle the task correctly because the degree of each rule becomes too small. AFINN solves such a problem by modification of the learning and inference algorithm. 2004 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "35470a422cdb3a287d45797e39c04637", "text": "In this paper, we propose a method to recognize food images which include multiple food items considering co-occurrence statistics of food items. The proposed method employs a manifold ranking method which has been applied to image retrieval successfully in the literature. In the experiments, we prepared co-occurrence matrices of 100 food items using various kinds of data sources including Web texts, Web food blogs and our own food database, and evaluated the final results obtained by applying manifold ranking. 
As a result, it has been shown that co-occurrence statistics obtained from a food photo database are very helpful for improving the classification rate within the top ten candidates.", "title": "" }, { "docid": "2f742514ffec09ea1abf2d846ba630e1", "text": "A number of high-level query languages, such as Hive, Pig, Flume, and Jaql, have been developed in recent years to increase analyst productivity when processing and analyzing very large datasets. The implementation of each of these languages includes a complete, data model-dependent query compiler, yet each involves a number of similar optimizations. In this work, we describe a new query compiler architecture that separates language-specific and data model-dependent aspects from a more general query compiler backend that can generate executable data-parallel programs for shared-nothing clusters and can be used to develop multiple languages with different data models. We have built such a data model-agnostic query compiler substrate, called Algebricks, and have used it to implement three different query languages --- HiveQL, AQL, and XQuery --- to validate the efficacy of this approach. Experiments show that all three query languages benefit from the parallelization and optimization that Algebricks provides and thus have good parallel speedup and scaleup characteristics for large datasets.", "title": "" }, { "docid": "645a1ad9ab07eee096180e08e6f1fdff", "text": "In the light of evidence from about 200 studies showing gender symmetry in perpetration of partner assault, research can now focus on why gender symmetry is predominant and on the implications of symmetry for primary prevention and treatment of partner violence. Progress in such research is handicapped by a number of problems: (1) Insufficient empirical research and a surplus of discussion and theory, (2) Blinders imposed by commitment to a single causal factor theory-patriarchy and male dominance-in the face of overwhelming evidence that this is only one of a multitude of causes, (3) Research purporting to investigate gender differences but which obtains data on only one gender, (4) Denial of research grants to projects that do not assume most partner violence is by male perpetrators, (5) Failure to investigate primary prevention and treatment programs for female offenders, and (6) Suppression of evidence on female perpetration by both researchers and agencies.", "title": "" }, { "docid": "a4d45c12ecc459ea6564fb0df8d13bd3", "text": "Amazon’s Mechanical Turk (AMT) has revolutionized data processing and collection in both research and industry and remains one of the most prominent paid crowd work platforms today (Kittur et al., 2013). Unfortunately, it also remains in beta nine years after its launch with many of the same limitations as when it was launched: lack of worker profiles indicating skills or experience, inability to post worker or employer ratings and reviews, minimal infrastructure for effectively managing workers or collecting analytics, etc. Difficulty accomplishing quality, complex work with AMT continues to drive active research. Fortunately, many other alternative platforms now exist and offer a wide range of features and workflow models for accomplishing quality work (crowdsortium.org). Despite this, research on crowd work has continued to focus on AMT near-exclusively. By analogy, if one had only ever programmed in Basic, how might this limit one’s conception of programming? What if the only search engine we knew was AltaVista? 
Adar (2011) opined that prior research has often been envisioned too narrowly for AMT, “...writing the user’s manual for MTurk ... struggl[ing] against the limits of the platform...”. Such narrow focus risks letting AMT’s particular vagaries and limitations unduly shape research questions, methodology, and imagination. To assess the extent of AMT’s influence upon research questions and use, we review its impact on prior work, assess what functionality and workflows other platforms offer, and consider what light other platforms’ diverse capabilities may shed on current research practices and future directions. To this end, we present a qualitative content analysis (Mayring, 2000) of ClickWorker, CloudFactory, CrowdComputing Systems, CrowdFlower, CrowdSource, MobileWorks, and oDesk. To characterize and differentiate crowd work platforms, we identify several key categories for analysis. Our qualitative content analysis assesses each platform by drawing upon a variety of information sources: Webpages, blogs, news articles, white papers, and research papers. We also shared our analyses with platform representatives and incorporated their feedback. Contributions. Our content analysis of crowd work platforms represents the first such study we know of by researchers for researchers, with categories of analysis chosen based on research relevance. Contributions include our review of how AMT assumptions and limitations have influenced prior research, the detailed criteria we developed for characterizing crowd work platforms, and our analysis. Findings inform", "title": "" }, { "docid": "a442a5fd2ec466cac18f4c148661dd96", "text": "BACKGROUND\nLong waiting times for registration to see a doctor are problematic in China, especially in tertiary hospitals. To address this issue, a web-based appointment system was developed for the Xijing hospital. The aim of this study was to investigate the efficacy of the web-based appointment system in the registration service for outpatients.\n\n\nMETHODS\nData from the web-based appointment system in Xijing hospital from January to December 2010 were collected using a stratified random sampling method, from which participants were randomly selected for a telephone interview asking for detailed information on using the system. Patients who registered through registration windows were randomly selected as a comparison group, and completed a questionnaire on-site.\n\n\nRESULTS\nA total of 5641 patients using the online booking service were available for data analysis. Of them, 500 were randomly selected, and 369 (73.8%) completed a telephone interview. Of the 500 patients using the usual queuing method who were randomly selected for inclusion in the study, responses were obtained from 463, a response rate of 92.6%. Between the two registration methods, there were significant differences in age, degree of satisfaction, and total waiting time (P<0.001). However, gender, urban residence, and valid waiting time showed no significant differences (P>0.05). Being ignorant of online registration, not trusting the internet, and a lack of ability to use a computer were three main reasons given for not using the web-based appointment system. 
The overall proportion of non-attendance was 14.4% for those using the web-based appointment system, and the non-attendance rate was significantly different among different hospital departments, day of the week, and time of the day (P<0.001).\n\n\nCONCLUSION\nCompared to the usual queuing method, the web-based appointment system could significantly increase patient's satisfaction with registration and reduce total waiting time effectively. However, further improvements are needed for broad use of the system.", "title": "" } ]
scidocsrr
6f138b13069ca07aa9594d3ecc1b21a2
Analysis of the Communication Traffic for Blockchain Synchronization of IoT Devices
[ { "docid": "7805c8f8d951a38c82ab33728f2083f1", "text": "There has been increasing interest in adopting BlockChain (BC), that underpins the crypto-currency Bitcoin, in Internet of Things (IoT) for security and privacy. However, BCs are computationally expensive and involve high bandwidth overhead and delays, which are not suitable for most IoT devices. This paper proposes a lightweight BC-based architecture for IoT that virtually eliminates the overheads of classic BC, while maintaining most of its security and privacy benefits. IoT devices benefit from a private immutable ledger, that acts similar to BC but is managed centrally, to optimize energy consumption. High resource devices create an overlay network to implement a publicly accessible distributed BC that ensures end-to-end security and privacy. The proposed architecture uses distributed trust to reduce the block validation processing time. We explore our approach in a smart home setting as a representative case study for broader IoT applications. Qualitative evaluation of the architecture under common threat models highlights its effectiveness in providing security and privacy for IoT applications. Simulations demonstrate that our method decreases packet and processing overhead significantly compared to the BC implementation used in Bitcoin.", "title": "" } ]
[ { "docid": "97a13a2a11db1b67230ab1047a43e1d6", "text": "Road detection from the perspective of moving vehicles is a challenging issue in autonomous driving. Recently, many deep learning methods spring up for this task, because they can extract high-level local features to find road regions from raw RGB data, such as convolutional neural networks and fully convolutional networks (FCNs). However, how to detect the boundary of road accurately is still an intractable problem. In this paper, we propose siamesed FCNs (named “s-FCN-loc”), which is able to consider RGB-channel images, semantic contours, and location priors simultaneously to segment the road region elaborately. To be specific, the s-FCN-loc has two streams to process the original RGB images and contour maps, respectively. At the same time, the location prior is directly appended to the siamesed FCN to promote the final detection performance. Our contributions are threefold: 1) An s-FCN-loc is proposed that learns more discriminative features of road boundaries than the original FCN to detect more accurate road regions. 2) Location prior is viewed as a type of feature map and directly appended to the final feature map in s-FCN-loc to promote the detection performance effectively, which is easier than other traditional methods, namely, different priors for different inputs (image patches). 3) The convergent speed of training s-FCN-loc model is 30% faster than the original FCN because of the guidance of highly structured contours. The proposed approach is evaluated on the KITTI road detection benchmark and one-class road detection data set, and achieves a competitive result with the state of the arts.", "title": "" }, { "docid": "560577e6abcccdb399d437cbd52ad266", "text": "With smart devices, particular smartphones, becoming our everyday companions, the ubiquitous mobile Internet and computing applications pervade people’s daily lives. With the surge demand on high-quality mobile services at anywhere, how to address the ubiquitous user demand and accommodate the explosive growth of mobile traffics is the key issue of the next generation mobile networks. The Fog computing is a promising solution towards this goal. Fog computing extends cloud computing by providing virtualized resources and engaged location-based services to the edge of the mobile networks so as to better serve mobile traffics. Therefore, Fog computing is a lubricant of the combination of cloud computing and mobile applications. In this article, we outline the main features of Fog computing and describe its concept, architecture and design goals. Lastly, we discuss some of the future research issues from the networking perspective.", "title": "" }, { "docid": "a5879d5e7934380913cd2683ba2525b9", "text": "This paper deals with the design & development of a theft control system for an automobile, which is being used to prevent/control the theft of a vehicle. The developed system makes use of an embedded system based on GSM technology. The designed & developed system is installed in the vehicle. An interfacing mobile is also connected to the microcontroller, which is in turn, connected to the engine. Once, the vehicle is being stolen, the information is being used by the vehicle owner for further processing. The information is passed onto the central processing insurance system, where by sitting at a remote place, a particular number is dialed by them to the interfacing mobile that is with the hardware kit which is installed in the vehicle. 
By reading the signals received by the mobile, one can control the ignition of the engine; say to lock it or to stop the engine immediately. Again it will come to the normal condition only after entering a secured password. The owner of the vehicle & the central processing system will know this secured password. The main concept in this design is introducing the mobile communications into the embedded system. The designed unit is very simple & low cost. The entire designed unit is on a single chip. When the vehicle is stolen, owner of vehicle may inform to the central processing system, then they will stop the vehicle by just giving a ring to that secret number and with the help of SIM tracking knows the location of vehicle and informs to the local police or stops it from further movement.", "title": "" }, { "docid": "f66609f826cae05b1b330f138c6e556a", "text": "We describe pke, an open source python-based keyphrase extraction toolkit. It provides an end-to-end keyphrase extraction pipeline in which each component can be easily modified or extented to develop new approaches. pke also allows for easy benchmarking of state-of-the-art keyphrase extraction approaches, and ships with supervised models trained on the SemEval-2010 dataset (Kim et al., 2010).", "title": "" }, { "docid": "e84b6bbb2eaee0edb6ac65d585056448", "text": "As memory accesses become slower with respect to the processor and consume more power with increasing memory size, the focus of memory performance and power consumption has become increasingly important. With the trend to develop multi-threaded, multi-core processors, the demands on the memory system will continue to scale. However, determining the optimal memory system configuration is non-trivial. The memory system performance is sensitive to a large number of parameters. Each of these parameters take on a number of values and interact in fashions that make overall trends difficult to discern. A comparison of the memory system architectures becomes even harder when we add the dimensions of power consumption and manufacturing cost. Unfortunately, there is a lack of tools in the public-domain that support such studies. Therefore, we introduce DRAMsim, a detailed and highly-configurable C-based memory system simulator to fill this gap. DRAMsim implements detailed timing models for a variety of existing memories, including SDRAM, DDR, DDR2, DRDRAM and FB-DIMM, with the capability to easily vary their parameters. It also models the power consumption of SDRAM and its derivatives. It can be used as a standalone simulator or as part of a more comprehensive system-level model. We have successfully integrated DRAMsim into a variety of simulators including MASE [15], Sim-alpha [14], BOCHS[2] and GEMS[13]. The simulator can be downloaded from www.ece.umd.edu/dramsim.", "title": "" }, { "docid": "f3ec87229acd0ec98c044ad42fd9fec1", "text": "Increasingly, Internet users trade privacy for service. Facebook, Google, and others mine personal information to target advertising. This paper presents a preliminary and partial answer to the general question \"Can users retain their privacy while still benefiting from these web services?\". We propose NOYB, a novel approach that provides privacy while preserving some of the functionality provided by online services. We apply our approach to the Facebook online social networking website. 
Through a proof-of-concept implementation we demonstrate that NOYB is practical and incrementally deployable, requires no changes to or cooperation from an existing online service, and indeed can be non-trivial for the online service to detect.", "title": "" }, { "docid": "4d7616ce77bd32bcb6bc140279aefea8", "text": "We argue that living systems process information such that functionality emerges in them on a continuous basis. We then provide a framework that can explain and model the normativity of biological functionality. In addition we offer an explanation of the anticipatory nature of functionality within our overall approach. We adopt a Peircean approach to Biosemiotics, and a dynamical approach to Digital-Analog relations and to the interplay between different levels of functionality in autonomous systems, taking an integrative approach. We then apply the underlying biosemiotic logic to a particular biological system, giving a model of the B-Cell Receptor signaling system, in order to demonstrate how biosemiotic concepts can be used to build an account of biological information and functionality. Next we show how this framework can be used to explain and model more complex aspects of biological normativity, for example, how cross-talk between different signaling pathways can be avoided. Overall, we describe an integrated theoretical framework for the emergence of normative functions and, consequently, for the way information is transduced across several interconnected organizational levels in an autonomous system, and we demonstrate how this can be applied in real biological phenomena. Our aim is to open the way towards realistic tools for the modeling of information and normativity in autonomous biological agents.", "title": "" }, { "docid": "485cda7203863d2ff0b2070ca61b1126", "text": "Interestingly, understanding natural language that you really wait for now is coming. It's significant to wait for the representative and beneficial books to read. Every book that is provided in better way and utterance will be expected by many peoples. Even you are a good reader or not, feeling to read this book will always appear when you find it. But, when you feel hard to find it as yours, what to do? Borrow to your friends and don't know when to give back it to her or him.", "title": "" }, { "docid": "399436510d316e8afea01036ee06e235", "text": "We present a deep layered architecture that generalizes classical convolutional neural networks (ConvNets). The architecture, called SimNets, is driven by two operators, one being a similarity function whose family contains the convolution operator used in ConvNets, and the other is a new soft max-min-mean operator called MEX that realizes classical operators like ReLU and max pooling, but has additional capabilities that make SimNets a powerful generalization of ConvNets. Three interesting properties emerge from the architecture: (i) the basic input to hidden layer to output machinery contains as special cases kernel machines with the Exponential and Generalized Gaussian kernels, the output units being ”neurons in feature space” (ii) in its general form, the basic machinery has a higher abstraction level than kernel machines, and (iii) initializing networks using unsupervised learning is natural. 
Experiments demonstrate the capability of achieving state of the art accuracy with networks that are an order of magnitude smaller than comparable ConvNets.", "title": "" }, { "docid": "13088399108afc4e641944add45214c0", "text": "Inasmuch as science is observational or perceptual in nature, the goal of providing a scientific model and mechanism for the evolution of complex systems ultimately requires a supporting theory of reality of which perception itself is the model (or theory-to-universe mapping). Where information is the abstract currency of perception, such a theory must incorporate the theory of information while extending the information concept to incorporate reflexive self-processing in order to achieve an intrinsic (self-contained) description of reality. This extension is associated with a limiting formulation of model theory identifying mental and physical reality, resulting in a reflexively self-generating, self-modeling theory of reality identical to its universe on the syntactic level. By the nature of its derivation, this theory, the Cognitive Theoretic Model of the Universe or CTMU, can be regarded as a supertautological reality-theoretic extension of logic. Uniting the theory of reality with an advanced form of computational language theory, the CTMU describes reality as a Self-Configuring Self-Processing Language or SCSPL, a reflexive intrinsic language characterized not only by self-reference and recursive self-definition, but full self-configuration and selfexecution (reflexive read-write functionality). SCSPL reality embodies a dual-aspect monism consisting of infocognition, self-transducing information residing in self-recognizing SCSPL elements called syntactic operators. The CTMU identifies itself with the structure of these operators and thus with the distributive syntax of its self-modeling SCSPL universe, including the reflexive grammar by which the universe refines itself from unbound telesis or UBT, a primordial realm of infocognitive potential free of informational constraint. Under the guidance of a limiting (intrinsic) form of anthropic principle called the Telic Principle, SCSPL evolves by telic recursion, jointly configuring syntax and state while maximizing a generalized selfselection parameter and adjusting on the fly to freely-changing internal conditions. SCSPL relates space, time and object by means of conspansive duality and conspansion, an SCSPL-grammatical process featuring an alternation between dual phases of existence associated with design and actualization and related to the familiar wave-particle duality of quantum mechanics. By distributing the design phase of reality over the actualization phase, conspansive spacetime also provides a distributed mechanism for Intelligent Design, adjoining to the restrictive principle of natural selection a basic means of generating information and complexity. Addressing physical evolution on not only the biological but cosmic level, the CTMU addresses the most evident deficiencies and paradoxes associated with conventional discrete and continuum models of reality, including temporal directionality and accelerating cosmic expansion, while preserving virtually all of the major benefits of current scientific and mathematical paradigms. 
 2002 Christopher Michael Langan <clangan@ctmu.net> View the most recent version of this paper at: www.ctmu.net", "title": "" }, { "docid": "7265c5e3f64b0a19592e7b475649433c", "text": "A power transformer outage has a dramatic financial consequence not only for electric power systems utilities but also for interconnected customers. The service reliability of this important asset largely depends upon the condition of the oil-paper insulation. Therefore, by keeping the qualities of oil-paper insulation system in pristine condition, the maintenance planners can reduce the decline rate of internal faults. Accurate diagnostic methods for analyzing the condition of transformers are therefore essential. Currently, there are various electrical and physicochemical diagnostic techniques available for insulation condition monitoring of power transformers. This paper is aimed at the description, analysis and interpretation of modern physicochemical diagnostics techniques for assessing insulation condition in aged transformers. Since fields and laboratory experiences have shown that transformer oil contains about 70% of diagnostic information, the physicochemical analyses of oil samples can therefore be extremely useful in monitoring the condition of power transformers.", "title": "" }, { "docid": "f3348f2323a5a97980551f00367703d1", "text": "Bacterial samples had been isolated from clinically detected diseased juvenile Pangasius, collected from Mymensingh, Bangladesh. Primarily, the isolates were found as Gram-negative, motile, oxidase-positive, fermentative, and O/129 resistant Aeromonas bacteria. The species was exposed as Aeromonas hydrophila from esculin hydrolysis test. Ten isolates of A. hydrophila were identified from eye lesions, kidney, and liver of the infected fishes. Further characterization of A. hydrophila was accomplished using API-20E and antibiotic sensitivity test. Isolates were highly resistant to amoxyclav among ten different antibiotics. All isolates were found as immensely pathogenic to healthy fishes while intraperitoneal injection. Histopathologically, necrotic hematopoietic tissues with pyknotic nuclei, mild hemorrhage, and wide vacuolation in kidney, liver, and muscle were principally noticed due to Aeromonad infection. So far, this is the first full note on characterizing A. hydrophila from diseased farmed Pangasius in Bangladesh. The present findings will provide further direction to develop theranostic strategies of A. hydrophila infection.", "title": "" }, { "docid": "82bb1d74e1e2d4b7b412b2a921f5eaad", "text": "This paper addresses the topic of community crime prevention. As in many other areas of public policy, there are widely divergent approaches that might be taken to crime prevention focused on local neighbourhoods. In what follows four major forms of prevention relevant to the Australian context will be discussed, with particular emphasis being placed on an approach to crime prevention which enhances the ability of a community to bring together, in an integrative way, divergent groups which can easily become isolated from each other as a result of contemporary economic and urban forces.", "title": "" }, { "docid": "e106df98a3d0240ed3e10840697bfc74", "text": "Online question and answer (Q&A) services are facing key challenges to motivate domain experts to provide quick and high-quality answers. Recent systems seek to engage real-world experts by allowing them to set a price on their answers. 
This leads to a \"targeted\" Q&A model where users to ask questions to a target expert by paying the price. In this paper, we perform a case study on two emerging targeted Q&A systems Fenda (China) and Whale (US) to understand how monetary incentives affect user behavior. By analyzing a large dataset of 220K questions (worth 1 million USD), we find that payments indeed enable quick answers from experts, but also drive certain users to game the system for profits. In addition, this model requires users (experts) to proactively adjust their price to make profits. People who are unwilling to lower their prices are likely to hurt their income and engagement over time.", "title": "" }, { "docid": "e79db51ac85ceafba66dddd5c038fbdf", "text": "Machine learning based anti-phishing techniques are based on various features extracted from different sources. These features differentiate a phishing website from a legitimate one. Features are taken from various sources like URL, page content, search engine, digital certificate, website traffic, etc, of a website to detect it as a phishing or non-phishing. The websites are declared as phishing sites if the heuristic design of the websites matches with the predefined rules. The accuracy of the anti-phishing solution depends on features set, training data and machine learning algorithm. This paper presents a comprehensive analysis of Phishing attacks, their exploitation, some of the recent machine learning based approaches for phishing detection and their comparative study. It provides a better understanding of the phishing problem, current solution space in machine learning domain, and scope of future research to deal with Phishing attacks efficiently using machine learning based approaches.", "title": "" }, { "docid": "9920660432c2a2cf1f83ed6b8412b433", "text": "We propose a new approach for metric learning by framing it as learning a sparse combination of locally discriminative metrics that are inexpensive to generate from the training data. This flexible framework allows us to naturally derive formulations for global, multi-task and local metric learning. The resulting algorithms have several advantages over existing methods in the literature: a much smaller number of parameters to be estimated and a principled way to generalize learned metrics to new testing data points. To analyze the approach theoretically, we derive a generalization bound that justifies the sparse combination. Empirically, we evaluate our algorithms on several datasets against state-of-theart metric learning methods. The results are consistent with our theoretical findings and demonstrate the superiority of our approach in terms of classification performance and scalability.", "title": "" }, { "docid": "5f483cfb3949e8feb109533344aa32be", "text": "Shadows and highlights represent a challenge to the computer vision researchers due to a variance in the brightness on the surfaces of the objects under consideration. This paper presents a new colour detection and segmentation algorithm for road signs in which the effect of shadows and highlights are neglected to get better colour segmentation results. Images are taken by a digital camera mounted in a car. The RGB images are converted into HSV colour space and the shadow-highlight invariant method is applied to extract the colours of the road signs under shadow and highlight conditions. 
The method is tested on hundreds of outdoor images under such light conditions, and it shows high robustness; more than 95% of correct segmentation is achieved", "title": "" }, { "docid": "d4c24280350ac92bf84e76654741f09d", "text": "Following damage to specific sectors of the prefrontal cortex, humans develop a defect in real-life decision making, in spite of otherwise normal intellectual performance. The patients so affected may even realize the consequences of their actions but fail to act accordingly, thus appearing oblivious to the future. The neural basis of this defect has resisted explanation. Here we identify a physiological correlate for the defect and discuss its possible significance. We measured the skin conductance responses (SCRs) of 7 patients with prefrontal damage, and 12 normal controls, during the performance of a novel task, a card game that simulates real-life decision making in the way it factors uncertainty, rewards, and penalties. Both patients and controls generated SCRs after selecting cards that were followed by penalties or by reward. However, after a number of trials, controls also began to generate SCRs prior to their selection of a card, while they pondered from which deck to choose, but no patients showed such anticipatory SCRs. The absence of anticipatory SCRs in patients with prefrontal damage is a correlate of their insensitivity to future outcomes. It is compatible with the idea that these patients fail to activate biasing signals that would serve as value markers in the distinction between choices with good or bad future outcomes; that these signals also participate in the enhancement of attention and working memory relative to representations pertinent to the decision process; and that the signals hail from the bioregulatory machinery that sustains somatic homeostasis and can be expressed in emotion and feeling.", "title": "" }, { "docid": "47432aed7a46f1591597208dd25e8425", "text": "Successful breastfeeding is dependent upon an infant's ability to correctly latch onto a mother's breast. If an infant is born with oral soft tissue abnormalities such as tongue-tie or lip-tie, breastfeeding may become challenging or impossible. During the oral evaluation of an infant presenting with breastfeeding problems, one area that is often overlooked and undiagnosed and, thus, untreated is the attachment of the upper lip to the maxillary gingival tissue. Historically, this tissue has been described as the superior labial frenum, median labial frenum, or maxillary labial frenum. These terms all refer to a segment of the mucous membrane in the midline of the upper lip containing loose connective tissue that inserts into the maxillary arch's loose, unattached gingival or tight, attached gingival tissue. There is no muscle contained within this tissue. In severe instances, this tissue may extend into the area behind the upper central incisors and incisive papilla. The author has defined and identified the restrictions of mobility of this tissue as a lip-tie, which reflects the clinical attachment of the upper lip to the maxillary arch. This article discusses the diagnosis and classifications of the lip-tie, as it affects an infant's latch onto the mother's breast. 
As more and more women choose to breastfeed, lip-ties must be considered as an impediment to breastfeeding, recognizing that they can affect a successful, painless latch and milk transfer.", "title": "" }, { "docid": "b94429b8f1a8bf06a4efe8305ecf430d", "text": "Schizophrenia is a complex psychiatric disorder with a characteristic disease course and heterogeneous etiology. While substance use disorders and a family history of psychosis have individually been identified as risk factors for schizophrenia, it is less well understood if and how these factors are related. To address this deficiency, we examined the relationship between substance use disorders and family history of psychosis in a sample of 1219 unrelated patients with schizophrenia. The lifetime rate of substance use disorders in this sample was 50%, and 30% had a family history of psychosis. Latent class mixture modeling identified three distinct patient subgroups: (1) individuals with low probability of substance use disorders; (2) patients with drug and alcohol abuse, but no symptoms of dependence; and (3) patients with substance dependence. Substance use was related to being male, to a more severe disease course, and more acute symptoms at assessment, but not to an earlier age of onset of schizophrenia or a specific pattern of positive and negative symptoms. Furthermore, substance use in schizophrenia was not related to a family history of psychosis. The results suggest that substance use in schizophrenia is an independent risk factor for disease severity and onset.", "title": "" } ]
scidocsrr
2c9ae20b16303935ecaf50657834668c
Bringing semantic structures to user intent detection in online medical queries
[ { "docid": "16ccacd0f59bd5e307efccb9f15ac678", "text": "This document presents the results from Inst. of Computing Tech., CAS in the ACLSIGHAN-sponsored First International Chinese Word Segmentation Bakeoff. The authors introduce the unified HHMM-based frame of our Chinese lexical analyzer ICTCLAS and explain the operation of the six tracks. Then provide the evaluation results and give more analysis. Evaluation on ICTCLAS shows that its performance is competitive. Compared with other system, ICTCLAS has ranked top both in CTB and PK closed track. In PK open track, it ranks second position. ICTCLAS BIG5 version was transformed from GB version only in two days; however, it achieved well in two BIG5 closed tracks. Through the first bakeoff, we could learn more about the development in Chinese word segmentation and become more confident on our HHMM-based approach. At the same time, we really find our problems during the evaluation. The bakeoff is interesting and helpful.", "title": "" }, { "docid": "a1758acf5b65d054dd8a354cedc8e412", "text": "Given a health-related question (such as \"I have a bad stomach ache. What should I do?\"), a medical self-diagnosis Android inquires further information from the user, diagnoses the disease, and ultimately recommend best solutions. One practical challenge to build such an Android is to ask correct questions and obtain most relevant information, in order to correctly pinpoint the most likely causes of health conditions. In this paper, we tackle this challenge, named \"relevant symptom question generation\": Given a limited set of patient described symptoms in the initial question (e.g., \"stomach ache\"), what are the most critical symptoms to further ask the patient, in order to correctly diagnose their potential problems? We propose an augmented long short-term memory (LSTM) framework, where the network architecture can naturally incorporate the inputs from embedding vectors of patient described symptoms and an initial disease hypothesis given by a predictive model. Then the proposed framework generates the most important symptom questions. The generation process essentially models the conditional probability to observe a new and undisclosed symptom, given a set of symptoms from a patient as well as an initial disease hypothesis. Experimental results show that the proposed model obtains improvements over alternative methods by over 30% (both precision and mean ordinal distance).", "title": "" } ]
[ { "docid": "d2c0b035e46d146849e5f153d63aa36e", "text": "This paper describes a mobile manipulator that uses its wheels for manipulation as well as locomotion. This robot, named themobipulator , looks like a small car with four independently powered wheels, none of them steered. It is designed to manipulate paper and other objects on the surface of a desk. The wheels are used for locomotion or for manipulation, switching functions dynamically as the task demands. So far we have preliminary demonstrations of a variety of motions, and performance data for the task of moving a sheet of paper in a square while maintaining constant orientation.", "title": "" }, { "docid": "112f059dad1099f7dd407042ae3b3c8c", "text": "In this paper, we present a Question Answering system based on redundancy and a Passage Retrieval method that is specifically oriented to Question Answering. We suppose that in a large enough document collection the answer to a given question may appear in several different forms. Therefore, it is possible to find one or more sentences that contain the answer and that also include tokens from the original question. The Passage Retrieval engine is almost language-independent since it is based on n-gram structures. Question classification and answer extraction modules are based on shallow patterns.", "title": "" }, { "docid": "742596f0ab5bddd930eb4081ce8097b3", "text": "We show how third-party web trackers can deanonymize users of cryptocurrencies. We present two distinct but complementary attacks. On most shopping websites, third party trackers receive information about user purchases for purposes of advertising and analytics. We show that, if the user pays using a cryptocurrency, trackers typically possess enough information about the purchase to uniquely identify the transaction on the blockchain, link it to the user’s cookie, and further to the user’s real identity. Our second attack shows that if the tracker is able to link two purchases of the same user to the blockchain in this manner, it can identify the user’s entire cluster of addresses and transactions on the blockchain, even if the user employs blockchain anonymity techniques such as CoinJoin. The attacks are passive and hence can be retroactively applied to past purchases. We discuss several mitigations, but none are perfect.", "title": "" }, { "docid": "898897564f0cf3672cc729669bb8a445", "text": "Knit, woven, and nonwoven fabrics offer a diverse range of stretch and strain limiting mechanical properties that can be leveraged to produce tailored, whole-body deformation mechanics of soft robotic systems. This work presents new insights and methods for combining heterogeneous fabric material layers to create soft fabric-based actuators. This work demonstrates that a range of multi-degree-of-freedom motions can be generated by varying fabrics and their layered arrangements when a thin airtight bladder is inserted between them and inflated. Specifically, we present bending and straightening fabric-based actuators that are simple to manufacture, lightweight, require low operating pressures, display a high torque-to-weight ratio, and occupy a low volume in their unpressurized state. Their utility is demonstrated through their integration into a glove that actively assists hand opening and closing.", "title": "" }, { "docid": "4295f4366f757064db71af17709e4dd6", "text": "This work presents a rapidly deployable system for automated precision weeding with minimal human labeling time. 
This overcomes a limiting factor in robotic precision weeding related to the use of vision-based classification systems trained for species that may not be relevant to specific farms. We present a novel approach to overcome this problem by employing unsupervised weed scouting, weed-group labeling, and finally, weed classification that is trained on the labeled scouting data. This work demonstrates a novel labeling approach designed to maximize labeling accuracy whilst needing to label as few images as possible. The labeling approach is able to provide the best classification results of any of the examined exemplar-based labeling approaches whilst needing to label over seven times fewer images than full data labeling.", "title": "" }, { "docid": "538406cd49ca1add375e287354908740", "text": "A broader approach to research in human development is proposed that focuses on the progressive accommodation, throughout the life span, between the growing human organism and the changing environments in which it actually lives and grows. The latter include not only the immediate settings containing the developing person but also the larger social contexts, both formal and informal, in which these settings are embedded. In terms of method, the approach emphasizes the use of rigorously designed experiments, both naturalistic and contrived, beginning in the early stages of the research process. The changing relation between person and environment is conceived in systems terms. These systems properties are set forth in a series of propositions, each illustrated by concrete research examples. This article delineates certain scientific limitations in prevailing approaches to research on human development and suggests broader perspectives in theory, method, and substance. The point of departure for this undertaking is the view that, especially in recent decades, research in human development has pursued a divided course, with each direction tangential to genuine scientific progress. To corrupt a contemporary metaphor, we risk being caught between a rock and a soft place. The rock is rigor, and the soft place relevance. As I have argued elsewhere (Bronfenbrenner, 1974; Note 1), the emphasis on rigor has led to experiments that are elegantly designed but often limited in scope. This limitation derives from the fact that many of these experiments involve situations that are unfamiliar, artificial, and short-lived and that call for unusual behaviors that are difficult to generalize to other settings. From this perspective, it can be said that much of contemporary developmental psychology is the science of the strange behavior of children in strange situations with strange adults for the briefest possible periods of time. Partially in reaction to such shortcomings, other workers have stressed the need for social relevance in research, but often with indifference to or open rejection of rigor. In its more extreme manifestations, this trend has taken the form of excluding the scientists themselves from the research process. 
For example, one major foundation has recently stated as its new policy that, henceforth, grants for research will be awarded only to persons who are themselves the victims of social injustice. Other, less radical expressions of this trend involve reliance on existential approaches in which \"experience\" takes the place of observation and analysis is foregone in favor of a more personalized and direct \"understanding\" gained through intimate involvement in the field situation. More common, and more scientifically defensible, is an emphasis on naturalistic observation, but with the stipulation that it be unguided by any hypotheses formulated in advance and uncontaminated by structured experimental designs imposed prior to", "title": "" }, { "docid": "d622cf283f27a32b2846a304c0359c5f", "text": "Reliable verification of image quality of retinal screening images is a prerequisite for the development of automatic screening systems for diabetic retinopathy. A system is presented that can automatically determine whether the quality of a retinal screening image is sufficient for automatic analysis. The system is based on the assumption that an image of sufficient quality should contain particular image structures according to a certain pre-defined distribution. We cluster filterbank response vectors to obtain a compact representation of the image structures found within an image. Using this compact representation together with raw histograms of the R, G, and B color planes, a statistical classifier is trained to distinguish normal from low quality images. The presented system does not require any previous segmentation of the image in contrast with previous work. The system was evaluated on a large, representative set of 1000 images obtained in a screening program. The proposed method, using different feature sets and classifiers, was compared with the ratings of a second human observer. The best system, based on a Support Vector Machine, has performance close to optimal with an area under the ROC curve of 0.9968.", "title": "" }, { "docid": "302098f316b57ab2ac56dd0effb5cfe0", "text": "Functional Magnetic Resonance Imaging (fMRI) is a powerful non-invasive tool for localizing and analyzing brain activity. This study focuses on one very important aspect of the functional properties of human brain, specifically the estimation of the level of parallelism when performing complex cognitive tasks. Using fMRI as the main modality, the human brain activity is investigated through a purely data-driven signal processing and dimensionality analysis approach. Specifically, the fMRI signal is treated as a multi-dimensional data space and its intrinsic ‘complexity’ is studied via dataset fractal analysis and blind-source separation (BSS) methods. One simulated and two real fMRI datasets are used in combination with Independent Component Analysis (ICA) and fractal analysis for estimating the intrinsic (true) dimensionality, in order to provide data-driven experimental evidence on the number of independent brain processes that run in parallel when visual or visuomotor tasks are performed. 
Although this number is can not be defined as a strict threshold but rather as a continuous range, when a specific activation level is defined, a corresponding number of parallel processes or the casual equivalent of ‘cpu cores’ can be detected in normal human brain activity.", "title": "" }, { "docid": "511991822f427c3f62a4c091594e89e3", "text": "Reinforcement learning has recently gained popularity due to its many successful applications in various fields. In this project reinforcement learning is implemented in a simple warehouse situation where robots have to learn to interact with each other while performing specific tasks. The aim is to study whether reinforcement learning can be used to train multiple agents. Two different methods have been used to achieve this aim, Q-learning and deep Q-learning. Due to practical constraints, this paper cannot provide a comprehensive review of real life robot interactions. Both methods are tested on single-agent and multi-agent models in Python computer simulations. The results show that the deep Q-learning model performed better in the multiagent simulations than the Q-learning model and it was proven that agents can learn to perform their tasks to some degree. Although, the outcome of this project cannot yet be considered sufficient for moving the simulation into reallife, it was concluded that reinforcement learning and deep learning methods can be seen as suitable for modelling warehouse robots and their interactions.", "title": "" }, { "docid": "7e10aa210d6985d757a21b8b6c49ae53", "text": "Haptic devices for computers and video-game consoles aim to reproduce touch and to engage the user with `force feedback'. Although physical touch is often associated with proximity and intimacy, technologies of touch can reproduce such sensations over a distance, allowing intricate and detailed operations to be conducted through a network such as the Internet. The `virtual handshake' between Boston and London in 2002 is given as an example. This paper is therefore a critical investigation into some technologies of touch, leading to observations about the sociospatial framework in which this technological touching takes place. Haptic devices have now become routinely included with videogame consoles, and have started to be used in computer-aided design and manufacture, medical simulation, and even the cybersex industry. The implications of these new technologies are enormous, as they remould the human ^ computer interface from being primarily audiovisual to being more truly multisensory, and thereby enhance the sense of `presence' or immersion. But the main thrust of this paper is the development of ideas of presence over a large distance, and how this is enhanced by the sense of touch. By using the results of empirical research, including interviews with key figures in haptics research and engineering and personal experience of some of the haptic technologies available, I build up a picture of how `presence', `copresence', and `immersion', themselves paradoxically intangible properties, are guiding the design, marketing, and application of haptic devices, and the engendering and engineering of a set of feelings of interacting with virtual objects, across a range of distances. DOI:10.1068/d394t", "title": "" }, { "docid": "7bcfeba71527a097594fb34786372ca0", "text": "Feature extraction approach in medical magnetic resonance imaging (MRI) is very important in order to perform diagnostic image analysis [1]. 
Edge detection is one of the ways to extract more information from magnetic resonance images. Edge detection reduces the amount of data and filters out useless information, while protecting the important structural properties in an image [2]. In this paper, we compare the Sobel and Canny edge detection methods. In order to compare them, one slice of an MRI image is tested with both methods. Both edge detection operators are implemented with convolution masks: the Sobel method uses 3x3 masks, while Canny uses an adjustable mask. These masks determine the quality of the detected edges. Edge areas represent a strong intensity contrast, appearing darker or brighter. Keywords: MRI, Edge detection, Canny method", "title": "" }, { "docid": "7633393bdc807165f2042f0e9e3c7407", "text": "We present our system for the WNUT 2017 Named Entity Recognition challenge on Twitter data. We describe two modifications of a basic neural network architecture for sequence tagging. First, we show how we exploit additional labeled data, where the Named Entity tags differ from the target task. Then, we propose a way to incorporate sentence level features. Our system uses both methods and ranked second for entity level annotations, achieving an F1-score of 40.78, and second for surface form annotations, achieving an F1-score of 39.33.", "title": "" }, { "docid": "064aba7f2bd824408bd94167da5d7b3a", "text": "Online comments submitted by readers of news articles can provide valuable feedback and critique, personal views and perspectives, and opportunities for discussion. The varying quality of these comments necessitates that publishers remove the low quality ones, but there is also a growing awareness that by identifying and highlighting high quality contributions this can promote the general quality of the community. In this paper we take a user-centered design approach towards developing a system, CommentIQ, which supports comment moderators in interactively identifying high quality comments using a combination of comment analytic scores as well as visualizations and flexible UI components. We evaluated this system with professional comment moderators working at local and national news outlets and provide insights into the utility and appropriateness of features for journalistic tasks, as well as how the system may enable or transform journalistic practices around online comments.", "title": "" }, { "docid": "64e4345f56508c4a77b45ecc8aab99ff", "text": "OBJECTIVES\nImmunological dysregulation is now recognised as a major pathogenic event in sepsis. Stimulation of immune response and immuno-modulation are emerging approaches for the treatment of this disease. Defining the underlying immunological alterations in sepsis is important for the design of future therapies with immuno-modulatory drugs.\n\n\nMETHODS\nClinical studies evaluating the immunological response in adult patients with Sepsis and published in PubMed were reviewed to identify features of immunological dysfunction. 
For this study we used key words related with innate and adaptive immunity.\n\n\nRESULTS\nTen major features of immunological dysfunction (FID) were identified involving quantitative and qualitative alterations of [antigen presentation](FID1), [T and B lymphocytes] (FID2), [natural killer cells] (FID3), [relative increase in T regulatory cells] (FID4), [increased expression of PD-1 and PD-ligand1](FID5), [low levels of immunoglobulins](FID6), [low circulating counts of neutrophils and/or increased immature forms in non survivors](FID7), [hyper-cytokinemia] (FID8), [complement consumption] (FID9), [defective bacterial killing by neutrophil extracellular traps](FID10).\n\n\nCONCLUSIONS\nThis review article identified ten major features associated with immunosuppression and immunological dysregulation in sepsis. Assessment of these features could help in utilizing precision medicine for the treatment of sepsis with immuno-modulatory drugs.", "title": "" }, { "docid": "3bea5eeea1e3b74917ea25c98b169289", "text": "Dissociation as a clinical psychiatric condition has been defined primarily in terms of the fragmentation and splitting of the mind, and perception of the self and the body. Its clinical manifestations include altered perceptions and behavior, including derealization, depersonalization, distortions of perception of time, space, and body, and conversion hysteria. Using examples of animal models, and the clinical features of the whiplash syndrome, we have developed a model of dissociation linked to the phenomenon of freeze/immobility. Also employing current concepts of the psychobiology of posttraumatic stress disorder (PTSD), we propose a model of PTSD linked to cyclical autonomic dysfunction, triggered and maintained by the laboratory model of kindling, and perpetuated by increasingly profound dorsal vagal tone and endorphinergic reward systems. These physiologic events in turn contribute to the clinical state of dissociation. The resulting autonomic dysregulation is presented as the substrate for a diverse group of chronic diseases of unknown origin.", "title": "" }, { "docid": "4c7de823cd4efc59413a7d4b119f80c5", "text": "Current domain-independent, classical planners require symbolic models of the problem domain and instance as input, resulting in a knowledge acquisition bottleneck. Meanwhile, although deep learning has achieved significant success in many fields, the knowledge is encoded in a subsymbolic representation which is incompatible with symbolic systems such as planners. We propose LatPlan, an unsupervised architecture combining deep learning and classical planning. Given only an unlabeled set of image pairs showing a subset of transitions allowed in the environment (training inputs), and a pair of images representing the initial and the goal states (planning inputs), LatPlan finds a plan to the goal state in a symbolic latent space and returns a visualized plan execution. The contribution of this paper is twofold: (1) State Autoencoder, which finds a propositional state representation of the environment using a Variational Autoencoder. It generates a discrete latent vector from the images, based on which a PDDL model can be constructed and then solved by an off-the-shelf planner. (2) Action Autoencoder / Discriminator, a neural architecture which jointly finds the action symbols and the implicit action models (preconditions/effects), and provides a successor function for the implicit graph search. 
We evaluate LatPlan using image-based versions of 3 planning domains: 8-puzzle, Towers of Hanoi and LightsOut. Note This is an extended manuscript of the paper accepted in AAAI-18. The contents of AAAI-18 submission itself is significantly extended from what has been published in Arxiv, KEPS-17, NeSy-17 or Cognitum-17 workshops. Over half of the paper describing (2) is new. Additionally, this manuscript contains the contents in the supplemental material of AAAI-18 submission. These implementation/experimental details are moved to the Appendix. Note to the ML / deep learning researchers This article combines the Machine Learning systems and the classical, logic-based symbolic systems. Some readers may not be familiar with NNs and related fields like you are, thus we include very basic description of the architectures and the training methods.", "title": "" }, { "docid": "52a95a4521f9a240d3b1893a909de7e4", "text": "We explore the use of song lyrics for automatic indexing of music. Using lyrics mined from the Web, we apply a standard text processing technique to characterize their semantic content. We then determine artist similarity in this space. We found lyrics can be used to discover natural genre clusters. Experiments on a publicly available set of 399 artists showed that determining artist similarity using lyrics is better than random, but inferior to a state-of-the-art acoustic similarity technique. However the approaches made different errors, suggesting they could be profitably combined", "title": "" }, { "docid": "f234f04e1adaba8a64fd4d7fcd29282f", "text": "In this paper, we introduce two different transforming steering wheel systems that can be utilized to augment user experience for future partially autonomous and fully autonomous vehicles. The first one is a robotic steering wheel that can mechanically transform by using its actuators to move the various components into different positions. The second system is a LED steering wheel that can visually transform by using LEDs embedded along the rim of wheel to change colors. Both steering wheel systems contain onboard microcontrollers developed to interface with our driving simulator. The main function of these two systems is to provide emergency warnings to drivers in a variety of safety critical scenarios, although the design space that we propose for these steering wheel systems also includes the use as interactive user interfaces. To evaluate the effectiveness of the emergency alerts, we conducted a driving simulator study examining the performance of participants (N=56) after an abrupt loss of autonomous vehicle control. Drivers who experienced the robotic steering wheel performed significantly better than those who experienced the LED steering wheel. The results of this study suggest that alerts utilizing mechanical movement are more effective than purely visual warnings.", "title": "" }, { "docid": "4f827fa8a868da051e92d03a9f5f7c75", "text": "Ever increasing volumes of biosolids (treated sewage sludge) are being produced by municipal wastewater facilities. This is a consequence of the continued expansion of urban areas, which in turn require the commissioning of new treatment plants or upgrades to existing facilities. Biosolids contain nutrients and energy which can be used in agriculture or waste-to-energy processes. Biosolids have been disposed of in landfills, but there is an increasing pressure from regulators to phase out landfilling. 
This article performs a critical review on options for the management of biosolids with a focus on pyrolysis and the application of the solid fraction of pyrolysis (biochar) into soil.", "title": "" }, { "docid": "a421e716d4e47b03f773d8b05fe9c808", "text": "Determining the “origin of a file” in a file system is often required during digital investigations. While the problem of “origin of a file” appears intractable in isolation, it often becomes simpler if one considers the environmental context, viz., the presence of browser history, cache logs, cookies and so on. Metadata can help bridge this contextual gap. Majority of the current tools, with their search-and-query interface, while enabling extraction of metadata stops short of leading the investigator to the “associations” that metadata potentially point to, thereby enabling an approach to solving the “origin of a file” problem. In this paper, we develop a method to identify the origin of files downloaded from the Internet using metadata based associations. Metadata based associations are derived though metadata value matches on the digital artifacts and the artifacts thus associated, are grouped together automatically. These associations can reveal certain higher-order relationships across different sources such as file systems and log files. We define four relationships between files on file systems and log records in log files which we use to determine the origin of a particular file. The files in question are tracked from the user file system under examination to the different browser logs generated during a user’s online activity to their points of origin in the Internet.", "title": "" } ]
scidocsrr
648f1f59a84a40c009d1bf9eb36c47e4
Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering
[ { "docid": "c0dbb410ebd6c84bd97b5f5e767186b3", "text": "A new hypothesis about the role of focused attention is proposed. The feature-integration theory of attention suggests that attention must be directed serially to each stimulus in a display whenever conjunctions of more than one separable feature are needed to characterize or distinguish the possible objects presented. A number of predictions were tested in a variety of paradigms including visual search, texture segregation, identification and localization, and using both separable dimensions (shape and color) and local elements or parts of figures (lines, curves, etc. in letters) as the features to be integrated into complex wholes. The results were in general consistent with the hypothesis. They offer a new set of criteria for distinguishing separable from integral features and a new rationale for predicting which tasks will show attention limits and which will not.", "title": "" } ]
[ { "docid": "9aa458acf63b94e40afbc8bb68049082", "text": "We tested the accuracy of thermal imaging as a lie detection tool in airport screening. Fifty-one passengers in an international airport departure hall told the truth or lied about their forthcoming trip in an interview. Their skin temperature was recorded via a thermal imaging camera. Liars' skin temperature rose significantly during the interview, whereas truth tellers' skin temperature remained constant. On the basis of these different patterns, 64% of truth tellers and 69% of liars were classified correctly. The interviewers made veracity judgements independently from the thermal recordings. The interviewers outperformed the thermal recordings and classified 72% of truth tellers and 77% of liars correctly. Accuracy rates based on the combination of thermal imaging scores and interviewers' judgements were the same as accuracy rates based on interviewers' judgements alone. Implications of the findings for the suitability of thermal imaging as a lie detection tool in airports are discussed.", "title": "" }, { "docid": "ccf8e1f627af3fe1327a4fa73ac12125", "text": "One of the most common needs in manufacturing plants is rejecting products not coincident with the standards as anomalies. Accurate and automatic anomaly detection improves product reliability and reduces inspection cost. Probabilistic models have been employed to detect test samples with lower likelihoods as anomalies in unsupervised manner. Recently, a probabilistic model called deep generative model (DGM) has been proposed for end-to-end modeling of natural images and already achieved a certain success. However, anomaly detection of machine components with complicated structures is still challenging because they produce a wide variety of normal image patches with low likelihoods. For overcoming this difficulty, we propose unregularized score for the DGM. As its name implies, the unregularized score is the anomaly score of the DGM without the regularization terms. The unregularized score is robust to the inherent complexity of a sample and has a smaller risk of rejecting a sample appearing less frequently but being coincident with the standards.", "title": "" }, { "docid": "86f82b9006c4e34192b79a03e71dde87", "text": "Erectile dysfunction (ED) is defined as the consistent inability to obtain or maintain an erection for satisfactory sexual relations. An estimated 20-30 million men suffer from some degree of sexual dysfunction. The past 20 years of research on erectile physiology have increased our understanding of the biochemical factors and intracellular mechanisms responsible for corpus cavernosal smooth muscle contraction and relaxation, and revealed that ED is predominantly a disease of vascular origin. Since the advent of sildenafil (Viagra), there has been a resurgence of interest in ED, and an increase in patients presenting with this disease. A thorough knowledge of the physiology of erection is essential for future pharmacological innovations in the field of male ED.", "title": "" }, { "docid": "b40e5cf2b979c51f87c0e517f8578fae", "text": "The osteopathic treatment of the fascia involves several techniques, each aimed at allowing the various layers of the connective system to slide over each other, improving the responses of the afferents in case of dysfunction. 
However, before becoming acquainted with a method, one must be aware of the structure and function of the tissue that needs treating, in order to not only better understand the manual approach, but also make a more conscious choice of the therapeutic technique to employ, in order to adjust the treatment to the specific needs of the patient. This paper examines the current literature regarding the function and structure of the fascial system and its foundation, that is, the fibroblasts. These connective cells have many properties, including the ability to contract and to communicate with one another. They play a key role in the transmission of the tension produced by the muscles and in the management of the interstitial fluids. They are a source of nociceptive and proprioceptive information as well, which is useful for proper functioning of the body system. Therefore, the fibroblasts are an invaluable instrument, essential to the understanding of the therapeutic effects of osteopathic treatment. Scientific research should make greater efforts to better understand their functioning and relationships.", "title": "" }, { "docid": "5b51fb07c0c8c9317ee2c81c54ba4c60", "text": "Aim The aim of this paper is to explore the role of values-based service for sustainable business. The two basic questions addressed are: What is ‘values-based service’? How can values create value for customers and other stakeholders? Design/ methodology/ approach This paper is based on extensive empirical studies focusing on the role of values at the corporate, country and store levels in the retail company IKEA and a comparison of the results with data from Starbucks, H&M and Body Shop. The theoretical point of departure is a business model based on the service-dominant logic (SDL) on the one hand and control through values focusing on social and environmental values forming the basis for a sustainable business. Findings Based on a comparative, inductive empirical analysis, five principles for a sustainable values-based service business were identified: (1) Strong company values drive customer value, (2) CSR as a strategy for sustainable service business, (3) Values-based service experience for co-creating value with customers, (4) Values-based service brand and communication for values resonance and (5) Values-based service leadership for living the values. A company built on an entrepreneurial business model often has the original entrepreneur’s values and leadership style as a model for future generations of leaders. However, the challenge for subsequent leaders is to develop these values and communicate what they mean today. Orginality/ value We suggest a new framework for managing values-based service to create a sustainable business based on values resonance.", "title": "" }, { "docid": "00ea9078f610b14ed0ed00ed6d0455a7", "text": "Boosting is a general method for improving the performance of learning algorithms. A recently proposed boosting algorithm, Ada Boost, has been applied with great success to several benchmark machine learning problems using mainly decision trees as base classifiers. In this article we investigate whether Ada Boost also works as well with neural networks, and we discuss the advantages and drawbacks of different versions of the Ada Boost algorithm. In particular, we compare training methods based on sampling the training set and weighting the cost function. 
The results suggest that random resampling of the training data is not the main explanation of the success of the improvements brought by Ada Boost. This is in contrast to bagging, which directly aims at reducing variance and for which random resampling is essential to obtain the reduction in generalization error. Our system achieves about 1.4 error on a data set of on-line handwritten digits from more than 200 writers. A boosted multilayer network achieved 1.5 error on the UCI letters and 8.1 error on the UCI satellite data set, which is significantly better than boosted decision trees.", "title": "" }, { "docid": "0ccc233ea8225de88882883d678793c8", "text": "Sustaining of Moore's Law over the next decade will require not only continued scaling of the physical dimensions of transistors but also performance improvement and aggressive reduction in power consumption. Heterojunction Tunnel FET (TFET) has emerged as promising transistor candidate for supply voltage scaling down to sub-0.5V due to the possibility of sub-kT/q switching without compromising on-current (ION). Recently, n-type III-V HTFET with reasonable on-current and sub-kT/q switching at supply voltage of 0.5V have been experimentally demonstrated. However, steep switching performance of III-V HTFET till date has been limited to range of drain current (IDS) spanning over less than a decade. In this work, we will present progress on complimentary Tunnel FETs and analyze primary roadblocks in the path towards achieving steep switching performance in III-V HTFET.", "title": "" }, { "docid": "cdcdbb6dca02bdafdf9f5d636acb8b3d", "text": "BACKGROUND\nExpertise has been extensively studied in several sports over recent years. The specificities of how excellence is achieved in Association Football, a sport practiced worldwide, are being repeatedly investigated by many researchers through a variety of approaches and scientific disciplines.\n\n\nOBJECTIVE\nThe aim of this review was to identify and synthesise the most significant literature addressing talent identification and development in football. We identified the most frequently researched topics and characterised their methodologies.\n\n\nMETHODS\nA systematic review of Web of Science™ Core Collection and Scopus databases was performed according to PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses) guidelines. The following keywords were used: \"football\" and \"soccer\". Each word was associated with the terms \"talent\", \"expert*\", \"elite\", \"elite athlete\", \"identification\", \"career transition\" or \"career progression\". The selection was for the original articles in English containing relevant data about talent development/identification on male footballers.\n\n\nRESULTS\nThe search returned 2944 records. After screening against set criteria, a total of 70 manuscripts were fully reviewed. The quality of the evidence reviewed was generally excellent. The most common topics of analysis were (1) task constraints: (a) specificity and volume of practice; (2) performers' constraints: (a) psychological factors; (b) technical and tactical skills; (c) anthropometric and physiological factors; (3) environmental constraints: (a) relative age effect; (b) socio-cultural influences; and (4) multidimensional analysis. Results indicate that the most successful players present technical, tactical, anthropometric, physiological and psychological advantages that change non-linearly with age, maturational status and playing positions. 
These findings should be carefully considered by those involved in the identification and development of football players.\n\n\nCONCLUSION\nThis review highlights the need for coaches and scouts to consider the players' technical and tactical skills combined with their anthropometric and physiological characteristics scaled to age. Moreover, research addressing the psychological and environmental aspects that influence talent identification and development in football is currently lacking. The limitations detected in the reviewed studies suggest that future research should include the best performers and adopt a longitudinal and multidimensional perspective.", "title": "" }, { "docid": "901fbd46cdd4403c8398cb21e1c75ba1", "text": "Hidden Markov Model (HMM) based applications are common in various areas, but the incorporation of HMM's for anomaly detection is still in its infancy. This paper aims at classifying the TCP network traffic as an attack or normal using HMM. The paper's main objective is to build an anomaly detection system, a predictive model capable of discriminating between normal and abnormal behavior of network traffic. In the training phase, special attention is given to the initialization and model selection issues, which makes the training phase particularly effective. For training HMM, 12.195% features out of the total features (5 features out of 41 features) present in the KDD Cup 1999 data set are used. Result of tests on the KDD Cup 1999 data set shows that the proposed system is able to classify network traffic in proportion to the number of features used for training HMM. We are extending our work on a larger data set for building an anomaly detection system.", "title": "" }, { "docid": "124c73eb861c0b2fb64d0084b3961859", "text": "Treemaps are an important and commonly-used approach to hierarchy visualization, but an important limitation of treemaps is the difficulty of discerning the structure of a hierarchy. This paper presents cascaded treemaps, a new approach to treemap presentation that is based in cascaded rectangles instead of the traditional nested rectangles. Cascading uses less space to present the same containment relationship, and the space savings enable a depth effect and natural padding between siblings in complex hierarchies. In addition, we discuss two general limitations of existing treemap layout algorithms: disparities between node weight and relative node size that are introduced by layout algorithms ignoring the space dedicated to presenting internal nodes, and a lack of stability when generating views of different levels of treemaps as a part of supporting interactive zooming. We finally present a two-stage layout process that addresses both concerns, computing a stable structure for the treemap and then using that structure to consider the presentation of internal nodes when arranging the treemap. All of this work is presented in the context of two large real-world hierarchies, the Java package hierarchy and the eBay auction hierarchy.", "title": "" }, { "docid": "2beeca8dc7b19299c7201594599d3992", "text": "The aims of the present study were to assess the effectiveness of skeletal anchorage for intrusion of maxillary posterior teeth, to correct open bite malocclusion, and to evaluate the usage of titanium miniplates for orthodontic anchorage. Anterior open bite is one of the most difficult malocclusions to treat orthodontically. 
Currently, surgical impaction of the maxillary posterior segment is considered to be the most effective treatment option in adult patients. Various studies have reported the use of implants as anchorage units at different sites of midfacial bones for orthodontic tooth movement. The zygomatic buttress area could be a valuable anchorage site to achieve intrusion of maxillary posterior teeth. Ten patients, 17 to 23 years old and characterized with an anterior open bite and excessive maxillary posterior growth, were included in this preliminary study. Titanium miniplates were fixed bilaterally to the zygomatic buttress area, and a force was applied bilaterally with nine mm Ni-Ti coil springs between the vertical extension of the miniplate and the first molar buccal tube. The results showed that, with the help of skeletal anchorage, maxillary posterior teeth were intruded effectively. As compared with an osteotomy, this minimally invasive surgical procedure eased treatment and reduced treatment time and did not require headgear wear or anterior box elastics for anterior open bite correction. In conclusion, the zygomatic area was found to be a useful anchorage site for intrusion of the molars in a short period of time.", "title": "" }, { "docid": "08edbcf4f974895cfa22d80ff32d48da", "text": "This paper describes a Non-invasive measurement of blood glucose of diabetic based on infrared spectroscopy. We measured the spectrum of human finger by using the Fourier transform infrared spectroscopy (FT-IR) of attenuated total reflection (ATR). In this paper, We would like to report the accuracy of the calibration models when we measured the blood glucose of diabetic.", "title": "" }, { "docid": "4fc67f5a4616db0906b943d7f13c856d", "text": "Overview. A blockchain is best understood in the model of state-machine replication [8], where a service maintains some state and clients invoke operations that transform the state and generate outputs. A blockchain emulates a “trusted” computing service through a distributed protocol, run by nodes connected over the Internet. The service represents or creates an asset, in which all nodes have some stake. The nodes share the common goal of running the service but do not necessarily trust each other for more. In a “permissionless” blockchain such as the one underlying the Bitcoin cryptocurrency, anyone can operate a node and participate through spending CPU cycles and demonstrating a “proof-of-work.” On the other hand, blockchains in the “permissioned” model control who participates in validation and in the protocol; these nodes typically have established identities and form a consortium. A report of Swanson compares the two models [9].", "title": "" }, { "docid": "c2baa873bc2850b14b3868cdd164019f", "text": "It is expensive to obtain labeled real-world visual data for use in training of supervised algorithms. Therefore, it is valuable to leverage existing databases of labeled data. However, the data in the source databases is often obtained under conditions that differ from those in the new task. Transfer learning provides techniques for transferring learned knowledge from a source domain to a target domain by finding a mapping between them. In this paper, we discuss a method for projecting both source and target data to a generalized subspace where each target sample can be represented by some combination of source samples. By employing a low-rank constraint during this transfer, the structure of source and target domains are preserved. This approach has three benefits. 
First, good alignment between the domains is ensured through the use of only relevant data in some subspace of the source domain in reconstructing the data in the target domain. Second, the discriminative power of the source domain is naturally passed on to the target domain. Third, noisy information will be filtered out during knowledge transfer. Extensive experiments on synthetic data, and important computer vision problems such as face recognition application and visual domain adaptation for object recognition demonstrate the superiority of the proposed approach over the existing, well-established methods.", "title": "" }, { "docid": "f715f471118b169502941797d17ceac6", "text": "Software is a knowledge intensive product, which can only evolve if there is effective and efficient information exchange between developers. Complying to coding conventions improves information exchange by improving the readability of source code. However, without some form of enforcement, compliance to coding conventions is limited. We look at the problem of information exchange in code and propose gamification as a way to motivate developers to invest in compliance. Our concept consists of a technical prototype and its integration into a Scrum environment. By means of two experiments with agile software teams and subsequent surveys, we show that gamification can effectively improve adherence to coding conventions.", "title": "" }, { "docid": "f451ca49b2bca088632ad055d78fbf2a", "text": "Intrabody communications (IBC) is a novel communication technique which uses the human body itself as the signal propagation medium. This communication method is categorized as a physical layer of IEEE 802.15.6 or Wireless Body Area Network (WBAN) standard. It is significant to investigate the IBC systems to improve the transceiver design characteristics such as data rate and power consumption. In this paper, we propose a new IBC transmitter implementing pulse position modulation (PPM) scheme based on impulse radio. A FPGA is employed to implement the architecture of a carrier-free PPM transmission. Results demonstrate the data rate of 1.56 Mb/s which is suitable for the galvanic coupling IBC method. The PPM transmitter power consumption is 2.0 mW with 3.3 V supply voltage. 
Having energy efficiency as low as 1.28 nJ/bit provides an enhanced solution for portable biomedical applications based on body area networks.", "title": "" }, { "docid": "2e0b3d2b61e7cccf725202f73275dffb", "text": "Chapter outline: Introduction; Scope and purpose of the chapter; Sustainability, globalization and organic agriculture; Dimensions of sustainability; Different meanings of globalization and sustainability; Sustainability and organic agriculture; The ethics and justice of ecological justice; Ecological justice as an ethical concept; The justice of ecological justice; Summing up; Challenges for organic agriculture; Commodification of commons; How to address externalities; Growing distances; Putting ecological justice into organic practice; The way of certified organic agriculture; The way of non-certified organic agriculture; Organic agriculture as an alternative example; Conclusions", "title": "" }, { "docid": "39385365a8515f69fecf6f7cceac7a54", "text": "SOA is rapidly emerging as the premier integration and architectural approach in contemporary complex, heterogeneous computing environments. SOA is not simply about deploying software: it also requires that organizations evaluate their business models, come up with service-oriented analysis and design techniques, deployment and support plans, and carefully evaluate partner/customer/supplier relationships. Since SOA is based on open standards and is frequently realized using Web services, developing meaningful Web service and business process specifications is an important requirement for SOA applications that leverage Web services. Designers and developers cannot be expected to oversee a complex service-oriented development project without relying on a sound design and development methodology. This paper provides an overview of the methods and techniques used in service-oriented design and development. 
Aim of this paper is to examine a service development methodology from the point of view of both service producers and requesters and review the range of elements in this methodology that are available to them.", "title": "" }, { "docid": "5fb0931dafbb024663f2d68faca2f552", "text": "The instrumentation and control (I&C) systems in nuclear power plants (NPPs) collect signals from sensors measuring plant parameters, integrate and evaluate sensor information, monitor plant performance, and generate signals to control plant devices for a safe operation of NPPs. Although the application of digital technology in industrial control systems (ICS) started a few decades ago, I&C systems in NPPs have utilized analog technology longer than any other industries. The reason for this stems from the fact that NPPs require strong assurance for safety and reliability. In recent years, however, digital I&C systems have been developed and installed in new and operating NPPs. This application of digital computers, and communication system and network technologies in NPP I&C systems accompanies cyber security concerns, similar to other critical infrastructures based on digital technologies. The Stuxnet case in 2010 evoked enormous concern regarding cyber security in NPPs. Thus, performing appropriate cyber security risk assessment for the digital I&C systems of NPPs, and applying security measures to the systems, has become more important nowadays. In general, approaches to assure cyber security in NPPs may be compatible with those for ICS and/or supervisory control and data acquisition (SCADA) systems in many aspects. Cyber security requirements and the risk assessment methodologies for ICS and SCADA systems are adopted from those for information technology (IT) systems. Many standards and guidance documents have been published for these areas [1~10]. Among them NIST SP 800-30 [4], NIST SP 800-37 [5], and NIST 800-39 [6] describe the risk assessment methods, NIST SP 800-53 [7] and NIST SP 800-53A [8] address security controls for IT systems. NIST SP 800-82 [10] describes the differences between IT systems and ICS and provides guidance for securing ICS, including SCADA systems, distributed control systems (DCS), and other systems performing control functions. As NIST SP 800-82 noted the differences between IT The applications of computers and communication system and network technologies in nuclear power plants have expanded recently. This application of digital technologies to the instrumentation and control systems of nuclear power plants brings with it the cyber security concerns similar to other critical infrastructures. Cyber security risk assessments for digital instrumentation and control systems have become more crucial in the development of new systems and in the operation of existing systems. Although the instrumentation and control systems of nuclear power plants are similar to industrial control systems, the former have specifications that differ from the latter in terms of architecture and function, in order to satisfy nuclear safety requirements, which need different methods for the application of cyber security risk assessment. In this paper, the characteristics of nuclear power plant instrumentation and control systems are described, and the considerations needed when conducting cyber security risk assessments in accordance with the lifecycle process of instrumentation and control systems are discussed. 
For cyber security risk assessments of instrumentation and control systems, the activities and considerations necessary for assessments during the system design phase or component design and equipment supply phase are presented in the following 6 steps: 1) System Identification and Cyber Security Modeling, 2) Asset and Impact Analysis, 3) Threat Analysis, 4) Vulnerability Analysis, 5) Security Control Design, and 6) Penetration test. The results from an application of the method to a digital reactor protection system are described.", "title": "" }, { "docid": "c034cb6e72bc023a60b54d0f8316045a", "text": "This thesis presents the design, implementation, and validation of a system that enables a micro air vehicle to autonomously explore and map unstructured and unknown indoor environments. Such a vehicle would be of considerable use in many real-world applications such as search and rescue, civil engineering inspection, and a host of military tasks where it is dangerous or difficult to send people. While mapping and exploration capabilities are common for ground vehicles today, air vehicles seeking to achieve these capabilities face unique challenges. While there has been recent progress toward sensing, control, and navigation suites for GPS-denied flight, there have been few demonstrations of stable, goal-directed flight in real environments. The main focus of this research is the development of real-time state estimation techniques that allow our quadrotor helicopter to fly autonomously in indoor, GPS-denied environments. Accomplishing this feat required the development of a large integrated system that brought together many components into a cohesive package. As such, the primary contribution is the development of the complete working system. I show experimental results that illustrate the MAV’s ability to navigate accurately in unknown environments, and demonstrate that our algorithms enable the MAV to operate autonomously in a variety of indoor environments. Thesis Supervisor: Nicholas Roy Title: Associate Professor of Aeronautics and Astronautics", "title": "" } ]
scidocsrr
accae1648894564c95b0f580b5146c54
Interpretable Multimodal Retrieval for Fashion Products
[ { "docid": "9d672a1d45bfd078c16915b7f5d949b0", "text": "To design a useful recommender system, it is important to understand how products relate to each other. For example, while a user is browsing mobile phones, it might make sense to recommend other phones, but once they buy a phone, we might instead want to recommend batteries, cases, or chargers. In economics, these two types of recommendations are referred to as substitutes and complements: substitutes are products that can be purchased instead of each other, while complements are products that can be purchased in addition to each other. Such relationships are essential as they help us to identify items that are relevant to a user's search.\n Our goal in this paper is to learn the semantics of substitutes and complements from the text of online reviews. We treat this as a supervised learning problem, trained using networks of products derived from browsing and co-purchasing logs. Methodologically, we build topic models that are trained to automatically discover topics from product reviews that are successful at predicting and explaining such relationships. Experimentally, we evaluate our system on the Amazon product catalog, a large dataset consisting of 9 million products, 237 million links, and 144 million reviews.", "title": "" }, { "docid": "1d8cd516cec4ef74d72fa283059bf269", "text": "Current high-quality object detection approaches use the same scheme: salience-based object proposal methods followed by post-classification using deep convolutional features. This spurred recent research in improving object proposal methods [18, 32, 15, 11, 2]. However, domain agnostic proposal generation has the principal drawback that the proposals come unranked or with very weak ranking, making it hard to trade-off quality for running time. Also, it raises the more fundamental question of whether high-quality proposal generation requires careful engineering or can be derived just from data alone. We demonstrate that learning-based proposal methods can effectively match the performance of hand-engineered methods while allowing for very efficient runtime-quality trade-offs. Using our new multi-scale convolutional MultiBox (MSC-MultiBox) approach, we substantially advance the state-of-the-art on the ILSVRC 2014 detection challenge data set, with 0.5 mAP for a single model and 0.52 mAP for an ensemble of two models. MSC-Multibox significantly improves the proposal quality over its predecessor Multibox [4] method: AP increases from 0.42 to 0.53 for the ILSVRC detection challenge. Finally, we demonstrate improved bounding-box recall compared to Multiscale Combinatorial Grouping [18] with less proposals on the Microsoft-COCO [14] data set.", "title": "" }, { "docid": "225204d66c371372debb3bb2a37c795b", "text": "We present two novel methods for face verification. Our first method - “attribute” classifiers - uses binary classifiers trained to recognize the presence or absence of describable aspects of visual appearance (e.g., gender, race, and age). Our second method - “simile” classifiers - removes the manual labeling required for attribute classification and instead learns the similarity of faces, or regions of faces, to specific reference people. Neither method requires costly, often brittle, alignment between image pairs; yet, both methods produce compact visual descriptions, and work on real-world images. 
Furthermore, both the attribute and simile classifiers improve on the current state-of-the-art for the LFW data set, reducing the error rates compared to the current best by 23.92% and 26.34%, respectively, and 31.68% when combined. For further testing across pose, illumination, and expression, we introduce a new data set - termed PubFig - of real-world images of public figures (celebrities and politicians) acquired from the internet. This data set is both larger (60,000 images) and deeper (300 images per individual) than existing data sets of its kind. Finally, we present an evaluation of human performance.", "title": "" } ]
[ { "docid": "df4fbaf83a761235c5d77654973b5eb1", "text": "We add to the discussion of how to assess the creativity of programs which generate artefacts such as poems, theorems, paintings, melodies, etc. To do so, we first review some existing frameworks for assessing artefact generation programs. Then, drawing on our experience of building both a mathematical discovery system and an automated painter, we argue that it is not appropriate to base the assessment of a system on its output alone, and that the way it produces artefacts also needs to be taken into account. We suggest a simple framework within which the behaviour of a program can be categorised and described which may add to the perception of creativity in the system.", "title": "" }, { "docid": "08af9ad2017a2ae8e2fc739f818e89fd", "text": "It is proved that cloud computing has many benefits, and cloud services with flexible licensing model and cost saving features bring opportunities to enterprise's IT operations. Conventional ERP system, an important component of enterprises, is impacted by on-demand cloud services also. Integrating conventional ERP system with cloud services becomes a trend because it brings new processing capabilities without introducing significant changes to existing system. But previous studies about integrating ERP and cloud usually focused on SaaS (Software as a Service). PaaS (Platform as a Service) and IaaS (Infrastructure as a Service) are not discussed much. People sometimes even confuse cloud service with SaaS. Therefore, the objective of this paper is to present a framework that can describe respective implications of integrating ERP with three types of cloud services: SaaS, PaaS and IaaS. Qualitative methods such as systematic literature review and interviews are adopted to execute the research, so data collected through different sources can complement each other in order to support presenting the framework of integrating conventional ERP system and cloud services. The integration at SaaS level is for achieving immediate business value and productivity enhancement. At PaaS level the objective of integration is to enhance software development life cycle management. And the main integrating intent at IaaS level is to enable scalability and reliability of hardware resources without changing existing IT infrastructure. Furthermore, challenges and opportunities for integrating ERP with different cloud services are studied and motivation is derived from analyzing. Finally the key points are arranged in the presented framework of integration between conventional ERP systems and cloud services.", "title": "" }, { "docid": "a666d94704812b3a0c9dff72d1808b30", "text": "Security metrics for software products provide quantitative measurement for the degree of trustworthiness for software systems. This paper proposes a new approach to define software security metrics based on vulnerabilities included in the software systems and their impacts on software quality. We use the Common Vulnerabilities and Exposures (CVE), an industry standard for vulnerability and exposure names, and the Common Vulnerability Scoring System (CVSS), a vulnerability scoring system designed to provide an open and standardized method for rating software vulnerabilities, in our metric definition and calculation. 
Examples are provided in the paper, which show that our definition of security metrics is consistent with the common practice and real-world experience about software quality in trustworthiness.", "title": "" }, { "docid": "66ad5e67a06504b1062316c3e3bbc5cf", "text": "We investigate the community structure of physics subfields in the citation network of all Physical Review publications between 1893 and August 2007. We focus on well-cited publications (those receiving more than 100 citations), and apply modularity maximization to uncover major communities that correspond to clearly identifiable subfields of physics. While most of the links between communities connect those with obvious intellectual overlap, there sometimes exist unexpected connections between disparate fields due to the development of a widely applicable theoretical technique or by cross fertilization between theory and experiment. We also examine communities decade by decade and also uncover a small number of significant links between communities that are widely separated in time. © 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "bfd23678afff2ac4cd4650cf46195590", "text": "The Islamic State of Iraq and ash-Sham (ISIS) continues to use social media as an essential element of its campaign to motivate support. On Twitter, ISIS' unique ability to leverage unaffiliated sympathizers that simply retweet propaganda has been identified as a primary mechanism in their success in motivating both recruitment and \"lone wolf\" attacks. The present work explores a large community of Twitter users whose activity supports ISIS propaganda diffusion in varying degrees. Within this ISIS supporting community, we observe a diverse range of actor types, including fighters, propagandists, recruiters, religious scholars, and unaffiliated sympathizers. The interaction between these users offers unique insight into the people and narratives critical to ISIS' sustainment. In their entirety, we refer to this diverse set of users as an online extremist community or OEC. We present Iterative Vertex Clustering and Classification (IVCC), a scalable analytic approach for OEC detection in annotated heterogeneous networks, and provide an illustrative case study of an online community of over 22,000 Twitter users whose online behavior directly advocates support for ISIS or contibutes to the group's propaganda dissemination through retweets.", "title": "" }, { "docid": "b0e04a8cc73642c65b51739cb66aba84", "text": "This paper addresses the harmonic stability caused by the interactions among the wideband control of power converters and passive components in an ac power-electronics-based power system. The impedance-based analytical approach is employed and expanded to a meshed and balanced three-phase network which is dominated by multiple current- and voltage-controlled inverters with LCL- and LC-filters. A method of deriving the impedance ratios for the different inverters is proposed by means of the nodal admittance matrix. Thus, the contribution of each inverter to the harmonic stability of the power system can be readily predicted through Nyquist diagrams. Time-domain simulations and experimental tests on a three-inverter-based power system are presented. 
The results validate the effectiveness of the theoretical approach.", "title": "" }, { "docid": "db78d1774913ac42d7eb31b9db726cc8", "text": "This work addresses the development and application of a novel approach, called sparser relative bundle adjustment (SRBA), which exploits the inherent flexibility of the relative bundle adjustment (RBA) framework to devise a continuum of strategies, ranging from RBA with linear graphs to classic bundle adjustment (BA) in global coordinates, where submapping with local maps emerges as a natural intermediate solution. This method leads to graphs that can be optimized in bounded time even at loop closures, regardless of the loop length. Furthermore, it is shown that the pattern in which relative coordinate variables are defined among keyframes has a significant impact on the graph optimization problem. By using the proposed scheme, optimization can be done more efficiently than in standard RBA, allowing the optimization of larger local maps for any given maximum computational cost. The main algorithms involved in the graph management, along with their complexity analyses, are presented to prove their bounded-time nature. One key advance of the present work is the demonstration that, under mild assumptions, the spanning trees for every single keyframe in the map can be incrementally built by a constant-time algorithm, even for arbitrary graph topologies. We validate our proposal within the scope of visual stereo simultaneous localization and mapping (SLAM) by developing a complete system that includes a front-end that seamlessly integrates several state-of-the-art computer vision techniques such as ORB features and bag-ofwords, along with a decision scheme for keyframe insertion and a SRBA-based back-end that operates as graph optimizer. Finally, a set of experiments in both indoor and outdoor conditions is presented to test the capabilities of this approach. Open-source implementations of the SRBA back-end and the stereo front-end have been released online.", "title": "" }, { "docid": "86ef6a2a5c4f32c466bd3595a828bafb", "text": "Rectus femoris muscle proximal injuries are not rare conditions. The proximal rectus femoris tendinous anatomy is complex and may be affected by traumatic, microtraumatic, or nontraumatic disorders. A good knowledge of the proximal rectus femoris anatomy allows a better understanding of injury and disorder patterns. A new sonographic lateral approach was recently described to assess the indirect head of the proximal rectus femoris, hence allowing for a complete sonographic assessment of the proximal rectus femoris tendons. This article will review sonographic features of direct, indirect, and conjoined rectus femoris tendon disorders.", "title": "" }, { "docid": "12c3f5a20fd197e96cd03fa2ff03a81a", "text": "Topic Detection and Tracking (TDT) is an important research topic in data mining and information retrieval and has been explored for many years. Most of the studies have approached the problem from the event tracking point of view. We argue that the definition of stories as events is not reflecting the full picture. In this work we propose a story tracking method built on crowd-tagging in social media, where news articles are labeled with hashtags in real-time. The social tags act as rich meta-data for news articles, with the advantage that, if carefully employed, they can capture emerging concepts and address concept drift in a story. 
We present an approach for employing social tags for the purpose of story detection and tracking and show initial empirical results. We compare our method to classic keyword query retrieval and discuss an example of story tracking over time.", "title": "" }, { "docid": "bbeb6f28ae02876dcce8a4cf205b6194", "text": "We propose the design of a programming language for quantum computing. Traditionally, quantum algorithms are frequently expressed at the hardware level, for instance in terms of the quantum circuit model or quantum Turing machines. These approaches do not encourage structured programming or abstractions such as data types. In this paper, we describe the syntax and semantics of a simple quantum programming language with high-level features such as loops, recursive procedures, and structured data types. The language is functional in nature, statically typed, free of run-time errors, and it has an interesting denotational semantics in terms of complete partial orders of superoperators.", "title": "" }, { "docid": "d19503f965e637089d9fa200329f1349", "text": "Almost a half century ago, regular endurance exercise was shown to improve the capacity of skeletal muscle to oxidize substrates to produce ATP for muscle work. Since then, adaptations in skeletal muscle mRNA level were shown to happen with a single bout of exercise. Protein changes occur within days if daily endurance exercise continues. Some of the mRNA and protein changes cause increases in mitochondrial concentrations. One mitochondrial adaptation that occurs is an increase in fatty acid oxidation at a given absolute, submaximal workload. Mechanisms have been described as to how endurance training increases mitochondria. Importantly, Pgc-1α is a master regulator of mitochondrial biogenesis by increasing many mitochondrial proteins. However, not all adaptations to endurance training are associated with increased mitochondrial concentrations. Recent evidence suggests that the energetic demands of muscle contraction are by themselves stronger controllers of body weight and glucose control than is muscle mitochondrial content. Endurance exercise has also been shown to regulate the processes of mitochondrial fusion and fission. Mitophagy removes damaged mitochondria, a process that maintains mitochondrial quality. Skeletal muscle fibers are composed of different phenotypes, which are based on concentrations of mitochondria and various myosin heavy chain protein isoforms. Endurance training at physiological levels increases type IIa fiber type with increased mitochondria and type IIa myosin heavy chain. Endurance training also improves capacity of skeletal muscle blood flow. Endurance athletes possess enlarged arteries, which may also exhibit decreased wall thickness. VEGF is required for endurance training-induced increases in capillary-muscle fiber ratio and capillary density.", "title": "" }, { "docid": "f783fa4cfa6eb85fdf4943ae9916d5cf", "text": "There are difficulties in presenting nontextual or dynamic information to blind or visually impaired users through computers. This article examines the potential of haptic and auditory trajectory playback as a method of teaching shapes and gestures to visually impaired people. Two studies are described which test the success of teaching simple shapes. The first study examines haptic trajectory playback alone, played through a force-feedback device, and compares performance of visually impaired users with sighted users. 
It demonstrates that the task is significantly harder for visually impaired users. The second study builds on these results, combining force-feedback with audio to teach visually impaired users to recreate shapes. The results suggest that users performed significantly better when presented with multimodal haptic and audio playback of the shape, rather than haptic only. Finally, an initial test of these ideas in an application context is described, with sighted participants describing drawings to visually impaired participants through touch and sound. This study demonstrates in what situations trajectory playback can prove a useful role in a collaborative setting.", "title": "" }, { "docid": "da70744d008c2d0f76d6214e2172f1f8", "text": "Advanced mobile technology continues to shape professional environments. Smart cell phones, pocket computers and laptop computers reduce the need of users to remain close to a wired information system infrastructure and allow for task performance in many different contexts. Among the consequences are changes in technology requirements, such as the need to limit weight and size of the devices. In the current paper, we focus on the factors that users find important in mobile devices. Based on a content analysis of online user reviews that was followed by structural equation modeling, we found four factors to be significantly related with overall user evaluation, namely functionality, portability, performance, and usability. Besides the practical relevance for technology developers and managers, our research results contribute to the discussion about the extent to which previously established theories of technology adoption and use are applicable to mobile technology. We also discuss the methodological suitability of online user reviews for the assessment of user requirements, and the complementarity of automated and non-automated forms of content analysis.", "title": "" }, { "docid": "2ce36ce9de500ba2367b1af83ac3e816", "text": "We examine whether the information content of the earnings report, as captured by the earnings response coefficient (ERC), increases when investors’ uncertainty about the manager’s reporting objectives decreases, as predicted in Fischer and Verrecchia (2000). We use the 2006 mandatory compensation disclosures as an instrument to capture a decrease in investors’ uncertainty about managers’ incentives and reporting objectives. Employing a difference-in-differences design and exploiting the staggered adoption of the new rules, we find a statistically and economically significant increase in ERC for treated firms relative to control firms, largely driven by profit firms. Cross-sectional tests suggest that the effect is more pronounced in subsets of firms most affected by the new rules. Our findings represent the first empirical evidence of a role of compensation disclosures in enhancing the information content of financial reports. JEL Classification: G38, G30, G34, M41", "title": "" }, { "docid": "4fe79eacc0c7213e9075b7d31864aa4c", "text": "This study is one of the few attempts to investigate students’ acceptance of an Internet-based learning medium (ILM). By integrating a motivational perspective into the technology acceptance model, our model captured both extrinsic (perceived usefulness and ease of use) and intrinsic (perceived enjoyment) motivators for explaining students’ intention to use the new learning medium. Data collected from 544 undergraduate students were examined through the LISREL VIII framework. 
The results showed that both perceived usefulness and perceived enjoyment significantly and directly impacted their intention to use ILM. Surprisingly, perceive ease of use did not posit a significant impact on student attitude or intention towards ILM usage. Implications of this study are important for both researchers and practitioners. # 2004 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "47b5e127b64cf1842841afcdb67d6d84", "text": "This work describes the aerodynamic characteristic for aircraft wing model with and without bird feather like winglet. The aerofoil used to construct the whole structure is NACA 653-218 Rectangular wing and this aerofoil has been used to compare the result with previous research using winglet. The model of the rectangular wing with bird feather like winglet has been fabricated using polystyrene before design using CATIA P3 V5R13 software and finally fabricated in wood. The experimental analysis for the aerodynamic characteristic for rectangular wing without winglet, wing with horizontal winglet and wing with 60 degree inclination winglet for Reynolds number 1.66×10, 2.08×10 and 2.50×10 have been carried out in open loop low speed wind tunnel at the Aerodynamics laboratory in Universiti Putra Malaysia. The experimental result shows 25-30 % reduction in drag coefficient and 10-20 % increase in lift coefficient by using bird feather like winglet for angle of attack of 8 degree. Keywords—Aerofoil, Wind tunnel, Winglet, Drag Coefficient.", "title": "" }, { "docid": "bddb4a82140d3f2de06361164445abda", "text": "Executive Overview Recent advances in the field of neuroscience can significantly add to our understanding of leadership and its development. Specifically, we are interested in what neuroscience can tell us about inspirational leadership. Based on our findings, we discuss how future research in leadership can be combined with neuroscience, as well as potential neurofeedback interventions for the purpose of leadership development. We also consider ethical implications and applications to management-related areas beyond leadership.", "title": "" }, { "docid": "197f5af02ea53b1dd32167780c4126ed", "text": "A new technique for summarization is presented here for summarizing articles known as text summarization using neural network and rhetorical structure theory. A neural network is trained to learn the relevant characteristics of sentences by using back propagation technique to train the neural network which will be used in the summary of the article. After training neural network is then modified to feature fusion and pruning the relevant characteristics apparent in summary sentences. Finally, the modified neural network is used to summarize articles and combining it with the rhetorical structure theory to form final summary of an article.", "title": "" }, { "docid": "f9b56de3658ef90b611c78bdb787d85b", "text": "Time series prediction techniques have been used in many real-world applications such as financial market prediction, electric utility load forecasting , weather and environmental state prediction, and reliability forecasting. The underlying system models and time series data generating processes are generally complex for these applications and the models for these systems are usually not known a priori. Accurate and unbiased estimation of the time series data produced by these systems cannot always be achieved using well known linear techniques, and thus the estimation process requires more advanced time series prediction algorithms. 
This paper provides a survey of time series prediction applications using a novel machine learning approach: support vector machines (SVM). The underlying motivation for using SVMs is the ability of this methodology to accurately forecast time series data when the underlying system processes are typically nonlinear, non-stationary and not defined a-priori. SVMs have also been proven to outperform other non-linear techniques including neural-network based non-linear prediction techniques such as multi-layer perceptrons.The ultimate goal is to provide the reader with insight into the applications using SVM for time series prediction, to give a brief tutorial on SVMs for time series prediction, to outline some of the advantages and challenges in using SVMs for time series prediction, and to provide a source for the reader to locate books, technical journals, and other online SVM research resources.", "title": "" }, { "docid": "9bd06a8a8c490cd8b686169d1a984a14", "text": "This review of research explores characteristics associated with massive open online courses (MOOCs). Three key characteristics are revealed: varied definitions of openness, barriers to persistence, and a distinct structure that takes the form as one of two pedagogical approaches. The concept of openness shifts among different MOOCs, models, researchers, and facilitators. The high dropout rates show that the barriers to learning are a significant challenge. Research has focused on engagement, motivation, and presence to mitigate risks of learner isolation. The pedagogical structure of the connectivist MOOC model (cMOOC) incorporates a social, distributed, networked approach and significant learner autonomy that is geared towards adult lifelong learners interested in personal or professional development. This connectivist approach relates to situated and social learning theories such as social constructivism (Kop, 2011). By contrast, the design of the Stanford Artificial Intelligence (AI) model (xMOOC) uses conventional directed instruction in the context of formal postsecondary educational institutions. This traditional pedagogical approach is categorized as cognitive-behaviorist (Rodriguez, 2012). These two distinct MOOC models attract different audiences, use different learning approaches, and employ different teaching methods. The purpose of this review is to synthesize the research describing the phenomenon of MOOCs in informal and postsecondary online learning. Massive open online courses (MOOCs) are a relatively new phenomenon sweeping higher education. By definition, MOOCs take place online. They could be affiliated with a university, but not necessarily. They are larger than typical college classes, sometimes much larger. They are open, which has multiple meanings evident in this research. While the literature is growing on this topic, it is yet limited. Scholars are taking notice of the literature around MOOCs in all its forms from conceptual to technical. Conference proceedings and magazine articles make up the majority of literature on MOOCs (Liyanagunawardena, Adams, & Williams, 2013). In order to better understand the characteristics associated with MOOCs, this review of literature focuses solely on original research published in scholarly journals. This emphasis on peer-reviewed research is an essential first step to form a more critical and comprehensive perspective by tempering the media hype. 
While most of the early scholarly research examines aspects of the cMOOC model, much of the hype and controversy surrounds the scaling innovation of the xMOOC model in postsecondary learning contexts. Naidu (2013) calls out the massive open online repetitions of failed pedagogy (MOORFAPs) and forecasts a transformation to massive open online learning opportunities (MOOLOs). Informed educators will be better equipped to make evidence-based decisions, foster the positive growth of this innovation, and adapt it for their own unique contexts. This research synthesis is framed by a within- and between-study literature analysis (Onwuegbuzie, Leech, & Collins, 2012) and situated within the context of online teaching and learning.", "title": "" } ]
scidocsrr
5ef9eb94477cd6fe0b2cd8a854d03dc1
Vehicle logo recognition in traffic images using HOG features and SVM
[ { "docid": "1c8b8d8322e403fae0d2f361bc00c969", "text": "We explore several image processing methods to automatically identify the make of a vehicle based focused on the manufacturer’s iconic logo. Our findings reveal that large variations in brightness, vehicle features in the foreground, and specular reflections render the scale-invariant feature transform (SIFT) approach practically useless. Methods such as Fourier shape descriptors and inner structure mean square error analysis are able to achieve more reliable results.", "title": "" } ]
[ { "docid": "a1112151de31a27a02965e9e48c26b8c", "text": "The primary goal of this paper is to define and study the interactive information complexity of functions. Let f(x,y) be a function, and suppose Alice is given x and Bob is given y. Informally, the interactive information complexity IC(f) of f is the least amount of information Alice and Bob need to reveal to each other to compute f. Previously, information complexity has been defined with respect to a prior distribution on the input pairs (x,y). Our first goal is to give a definition that is independent of the prior distribution. We show that several possible definitions are essentially equivalent.\n We establish some basic properties of the interactive information complexity IC(f). In particular, we show that IC(f) is equal to the amortized (randomized) communication complexity of f. We also show a direct sum theorem for IC(f) and give the first general connection between information complexity and (non-amortized) communication complexity. This connection implies that a non-trivial exchange of information is required when solving problems that have non-trivial communication complexity.\n We explore the information complexity of two specific problems - Equality and Disjointness. We show that only a constant amount of information needs to be exchanged when solving Equality with no errors, while solving Disjointness with a constant error probability requires the parties to reveal a linear amount of information to each other.", "title": "" }, { "docid": "26d6ffbc4ee2e0f5e3e6699fd33bdc5f", "text": "We present a method for efficient learning of control policies for multiple related robotic motor skills. Our approach consists of two stages, joint training and specialization training. During the joint training stage, a neural network policy is trained with minimal information to disambiguate the motor skills. This forces the policy to learn a common representation of the different tasks. Then, during the specialization training stage we selectively split the weights of the policy based on a per-weight metric that measures the disagreement among the multiple tasks. By splitting part of the control policy, it can be further trained to specialize to each task. To update the control policy during learning, we use Trust Region Policy Optimization with Generalized Advantage Function (TRPOGAE). We propose a modification to the gradient update stage of TRPO to better accommodate multi-task learning scenarios. We evaluate our approach on three continuous motor skill learning problems in simulation: 1) a locomotion task where three single legged robots with considerable difference in shape and size are trained to hop forward, 2) a manipulation task where three robot manipulators with different sizes and joint types are trained to reach different locations in 3D space, and 3) locomotion of a two-legged robot, whose range of motion of one leg is constrained in different ways. We compare our training method to three baselines. The first baseline uses only jointtraining for the policy, the second trains independent policies for each task, and the last randomly selects weights to split. We show that our approach learns more efficiently than each of the baseline methods.", "title": "" }, { "docid": "9e3de4720dade2bb73d78502d7cccc8b", "text": "Skeletonization is a way to reduce dimensionality of digital objects. 
Here, we present an algorithm that computes the curve skeleton of a surface-like object in a 3D image, i.e., an object that in one of the three dimensions is at most twovoxel thick. A surface-like object consists of surfaces and curves crossing each other. Its curve skeleton is a 1D set centred within the surface-like object and with preserved topological properties. It can be useful to achieve a qualitative shape representation of the object with reduced dimensionality. The basic idea behind our algorithm is to detect the curves and the junctions between different surfaces and prevent their removal as they retain the most significant shape representation. 2002 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "a63aee3bb6f93567e68535e6ee94cd79", "text": "While users trust the selections of their social friends in recommendation systems, the preferences of friends do not necessarily match. In this study, we introduce a deep learning approach to learn both about user preferences and the social influence of friends when generating recommendations. In our model we design a deep learning architecture by stacking multiple marginalized Denoising Autoencoders. We define a joint objective function to enforce the latent representation of social relationships in the Autoencoder's hidden layer to be as close as possible to the users' latent representation when factorizing the user-item matrix. We formulate a joint objective function as a minimization problem to learn both user preferences and friends' social influence and we present an optimization algorithm to solve the joint minimization problem. Our experiments on four benchmark datasets show that the proposed approach achieves high recommendation accuracy, compared to other state-of-the-art methods.", "title": "" }, { "docid": "1dc4a8f02dfe105220db5daae06c2229", "text": "Photosynthesis begins with light harvesting, where specialized pigment-protein complexes transform sunlight into electronic excitations delivered to reaction centres to initiate charge separation. There is evidence that quantum coherence between electronic excited states plays a role in energy transfer. In this review, we discuss how quantum coherence manifests in photosynthetic light harvesting and its implications. We begin by examining the concept of an exciton, an excited electronic state delocalized over several spatially separated molecules, which is the most widely available signature of quantum coherence in light harvesting. We then discuss recent results concerning the possibility that quantum coherence between electronically excited states of donors and acceptors may give rise to a quantum coherent evolution of excitations, modifying the traditional incoherent picture of energy transfer. Key to this (partially) coherent energy transfer appears to be the structure of the environment, in particular the participation of non-equilibrium vibrational modes. We discuss the open questions and controversies regarding quantum coherent energy transfer and how these can be addressed using new experimental techniques.", "title": "" }, { "docid": "b9aa1b23ee957f61337e731611a6301a", "text": "We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. 
As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFatNet opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 4-bit gradients to get 47% top-1 accuracy on ImageNet validation set.1 The DoReFa-Net AlexNet model is released publicly.", "title": "" }, { "docid": "1e59d0a96b5b652a9a1f9bec77aac29e", "text": "BACKGROUND\n2015 was the target year for malaria goals set by the World Health Assembly and other international institutions to reduce malaria incidence and mortality. A review of progress indicates that malaria programme financing and coverage have been transformed since the beginning of the millennium, and have contributed to substantial reductions in the burden of disease.\n\n\nFINDINGS\nInvestments in malaria programmes increased by more than 2.5 times between 2005 and 2014 from US$ 960 million to US$ 2.5 billion, allowing an expansion in malaria prevention, diagnostic testing and treatment programmes. In 2015 more than half of the population of sub-Saharan Africa slept under insecticide-treated mosquito nets, compared to just 2 % in 2000. Increased availability of rapid diagnostic tests and antimalarial medicines has allowed many more people to access timely and appropriate treatment. Malaria incidence rates have decreased by 37 % globally and mortality rates by 60 % since 2000. It is estimated that 70 % of the reductions in numbers of cases in sub-Saharan Africa can be attributed to malaria interventions.\n\n\nCONCLUSIONS\nReductions in malaria incidence and mortality rates have been made in every WHO region and almost every country. However, decreases in malaria case incidence and mortality rates were slowest in countries that had the largest numbers of malaria cases and deaths in 2000; reductions in incidence need to be greatly accelerated in these countries to achieve future malaria targets. Progress is made challenging because malaria is concentrated in countries and areas with the least resourced health systems and the least ability to pay for system improvements. Malaria interventions are nevertheless highly cost-effective and have not only led to significant reductions in the incidence of the disease but are estimated to have saved about US$ 900 million in malaria case management costs to public providers in sub-Saharan Africa between 2000 and 2014. Investments in malaria programmes can not only reduce malaria morbidity and mortality, thereby contributing to the health targets of the Sustainable Development Goals, but they can also transform the well-being and livelihood of some of the poorest communities across the globe.", "title": "" }, { "docid": "b7b1153067a784a681f2c6d0105acb2a", "text": "Investigations of the human connectome have elucidated core features of adult structural networks, particularly the crucial role of hub-regions. However, little is known regarding network organisation of the healthy elderly connectome, a crucial prelude to the systematic study of neurodegenerative disorders. 
Here, whole-brain probabilistic tractography was performed on high-angular diffusion-weighted images acquired from 115 healthy elderly subjects (age 76-94 years; 65 females). Structural networks were reconstructed between 512 cortical and subcortical brain regions. We sought to investigate the architectural features of hub-regions, as well as left-right asymmetries, and sexual dimorphisms. We observed that the topology of hub-regions is consistent with a young adult population, and previously published adult connectomic data. More importantly, the architectural features of hub connections reflect their ongoing vital role in network communication. We also found substantial sexual dimorphisms, with females exhibiting stronger inter-hemispheric connections between cingulate and prefrontal cortices. Lastly, we demonstrate intriguing left-lateralized subnetworks consistent with the neural circuitry specialised for language and executive functions, whilst rightward subnetworks were dominant in visual and visuospatial streams. These findings provide insights into healthy brain ageing and provide a benchmark for the study of neurodegenerative disorders such as Alzheimer's disease (AD) and frontotemporal dementia (FTD).", "title": "" }, { "docid": "c80f1e893d723b82569d93f78659275a", "text": "Under the framework of PU(Positive data and Unlabeled data), this paper originally proposes a three-step algorithm. First, CoTraining is employed for filtering out the likely positive data from the unlabeled dataset U. Second, affinity propagation (AP) approach attempts to pick out the strong positive from likely positive set which is produced in first step. Those data picked out can be supplied to positive dataset P. Finally, a linear One-Class SVM will learn from both the purified U as negative and the expanded P as positive. Because of the algorithm's characteristic of automatic expanding positive dataset, the proposed algorithm especially performs well in situations where given positive dataset P is insufficient. A comprehensive experiment had proved that our algorithm is preferable to the existing ones.", "title": "" }, { "docid": "49e574e30b35811205e55c582eccc284", "text": "Intracerebral hemorrhage (ICH) is a devastating disease with high rates of mortality and morbidity. The major risk factors for ICH include chronic arterial hypertension and oral anticoagulation. After the initial hemorrhage, hematoma expansion and perihematoma edema result in secondary brain damage and worsened outcome. A rapid onset of focal neurological deficit with clinical signs of increased intracranial pressure is strongly suggestive of a diagnosis of ICH, although cranial imaging is required to differentiate it from ischemic stroke. ICH is a medical emergency and initial management should focus on urgent stabilization of cardiorespiratory variables and treatment of intracranial complications. More than 90% of patients present with acute hypertension, and there is some evidence that acute arterial blood pressure reduction is safe and associated with slowed hematoma growth and reduced risk of early neurological deterioration. However, early optimism that outcome might be improved by the early administration of recombinant factor VIIa (rFVIIa) has not been substantiated by a large phase III study. ICH is the most feared complication of warfarin anticoagulation, and the need to arrest intracranial bleeding outweighs all other considerations. 
Treatment options for warfarin reversal include vitamin K, fresh frozen plasma, prothrombin complex concentrates, and rFVIIa. There is no evidence to guide the specific management of antiplatelet therapy-related ICH. With the exceptions of placement of a ventricular drain in patients with hydrocephalus and evacuation of a large posterior fossa hematoma, the timing and nature of other neurosurgical interventions is also controversial. There is substantial evidence that management of patients with ICH in a specialist neurointensive care unit, where treatment is directed toward monitoring and managing cardiorespiratory variables and intracranial pressure, is associated with improved outcomes. Attention must be given to fluid and glycemic management, minimizing the risk of ventilator-acquired pneumonia, fever control, provision of enteral nutrition, and thromboembolic prophylaxis. There is an increasing awareness that aggressive management in the acute phase can translate into improved outcomes after ICH.", "title": "" }, { "docid": "e0f6878845e02e966908311e6818dbe9", "text": "Smart Home is one of emerging application domains of The Internet of things which following the computer and Internet. Although home automation technologies have been commercially available already, they are basically designed for signal-family smart homes with a high cost, and along with the constant growth of digital appliances in smart home, we merge smart home into smart-home-oriented Cloud to release the stress on the smart home system which mostly installs application software on their local computers. In this paper, we present a framework for Cloud-based smart home for enabling home automation, household mobility and interconnection which easy extensible and fit for future demands. Through subscribing services of the Cloud, smart home consumers can easily enjoy smart home services without purchasing computers which owns strong power and huge storage. We focus on the overall Smart Home framework, the features and architecture of the components of Smart Home, the interaction and cooperation between them in detail.", "title": "" }, { "docid": "e440ad1afbbfbf5845724fd301051d92", "text": "The paper considers the conceptual approach for organization of the vertical hierarchical links between the scalable distributed computing paradigms: Cloud Computing, Fog Computing and Dew Computing. In this paper, the Dew Computing is described and recognized as a new structural layer in the existing distributed computing hierarchy. In the existing computing hierarchy, the Dew computing is positioned as the ground level for the Cloud and Fog computing paradigms. Vertical, complementary, hierarchical division from Cloud to Dew Computing satisfies the needs of high- and low-end computing demands in everyday life and work. These new computing paradigms lower the cost and improve the performance, particularly for concepts and applications such as the Internet of Things (IoT) and the Internet of Everything (IoE). In addition, the Dew computing paradigm will require new programming models that will efficiently reduce the complexity and improve the productivity and usability of scalable distributed computing, following the principles of High-Productivity computing.", "title": "" }, { "docid": "1c16d6b5072283cfc9301f6ae509ede1", "text": "This paper introduces a model of collective creativity that explains how the locus of creative problem solving shifts, at times, from the individual to the interactions of a collective. 
The model is grounded in observations, interviews, informal conversations, and archival data gathered in intensive field studies of work in professional service firms. The evidence suggests that although some creative solutions can be seen as the products of individual insight, others should be regarded as the products of a momentary collective process. Such collective creativity reflects a qualitative shift in the nature of the creative process, as the comprehension of a problematic situation and the generation of creative solutions draw from—and reframe—the past experiences of participants in ways that lead to new and valuable insights. This research investigates the origins of such moments, and builds a model of collective creativity that identifies the precipitating roles played by four types of social interaction: help seeking, help giving, reflective reframing, and reinforcing. Implications of this research include shifting the emphasis in research and management of creativity from identifying and managing creative individuals to understanding the social context and developing interactive approaches to creativity, and from a focus on relatively constant contextual variables to the alignment of fluctuating variables and their precipitation of momentary phenomena.", "title": "" }, { "docid": "5cce041182c2aa0fbd72fa2fe8070595", "text": "Convolutional neural networks are modern models that are very efficient in many classification tasks. They were originally created for image processing purposes. Then some trials were performed to use them in different domains like natural language processing. The artificial intelligence systems (like humanoid robots) are very often based on embedded systems with constraints on memory, power consumption etc. Therefore convolutional neural network because of its memory capacity should be reduced to be mapped to given hardware. In this paper, results are presented of compressing the efficient convolutional neural networks for sentiment analysis. The main steps are quantization and pruning processes. The method responsible for mapping compressed network to FPGA and results of this implementation are presented. The described simulations showed that 5-bit width is enough to have no drop in accuracy from floating point version of the network. Additionally, significant memory footprint reduction was achieved (from 85% up to 93%).", "title": "" }, { "docid": "11c7ceb4d63be002154cf162f635687c", "text": "Inter-network interference is a significant source of difficulty for wireless body area networks. Movement, proximity and the lack of central coordination all contribute to this problem. We compare the interference power of multiple Body Area Network (BAN) devices when a group of people move randomly within an office area. We find that the path loss trend is dominated by local variations in the signal, and not free-space path loss exponent.", "title": "" }, { "docid": "a95ca56f64150700cd899a5b0ee1c4b8", "text": "Due to the pervasiveness of digital technologies in all aspects of human lives, it is increasingly unlikely that a digital device is involved as goal, medium or simply ’witness’ of a criminal event. Forensic investigations include recovery, analysis and presentation of information stored in digital devices and related to computer crimes. 
These activities often involve the adoption of a wide range of imaging and analysis tools and the application of different techniques on different devices, with the consequence that the reconstruction and presentation activities result complicated. This work presents a method, based on Semantic Web technologies, that helps digital investigators to correlate and present information acquired from forensic data, with the aim to get a more valuable reconstruction of events or actions in order to reach case conclusions.", "title": "" }, { "docid": "4d7b93ee9c6036c5915dd1166c9ae2f8", "text": "In this paper, we present a developed NS-3 based emulation platform for evaluating and optimizing the performance of the LTE networks. The developed emulation platform is designed to provide real-time measurements. Thus it eliminates the need for the high cost spent on real equipment. The developed platform consists of three main parts, which are video server, video client(s), and NS-3 based simulation environment for LTE network. Using the developed platform, the server streams video clips to the existing clients going through the LTE simulated network. We utilize this setup to evaluate multiple cases such as mobility and handover. Moreover, we use it for evaluating multiple streaming protocols such as UDP, RTP, and Dynamic Adaptive Streaming over HTTP (DASH). Keywords-DASH, Emulation, LTE, NS-3, Real-time, RTP, UDP.", "title": "" }, { "docid": "60e85f090d31b8834849ce93d98f8e2e", "text": "Many methods have been developed for monitoring network traffic, both using visualization and statistics. Most of these methods focus on the detection of suspicious or malicious activities. But what they often fail to do refine and exercise measures that contribute to the characterization of such activities and their sources, once they are detected. In particular, many tools exist that detect network scans or visualize them at a high level, but not very many tools exist that are capable of categorizing and analyzing network scans. This paper presents a means of facilitating the process of characterization by using visualization and statistics techniques to analyze the patterns found in the timing of network scans through a method of continuous improvement in measures that serve to separate the components of interest in the characterization so the user can control separately for the effects of attack tool employed, performance characteristics of the attack platform, and the effects of network routing in the arrival patterns of hostile probes. The end result is a system that allows large numbers of network scans to be rapidly compared and subsequently identified.", "title": "" } ]
scidocsrr
77e0dc19dd23dcaa94a73efbafbc8ecc
Design and Analysis of the Droop Control Method for Parallel Inverters Considering the Impact of the Complex Impedance on the Power Sharing
[ { "docid": "3216434dce13125f4d49c2e6890fd36a", "text": "In this paper, a power control strategy is proposed for a low-voltage microgrid, where the mainly resistive line impedance, the unequal impedance among distributed generation (DG) units, and the microgrid load locations make the conventional frequency and voltage droop method unpractical. The proposed power control strategy contains a virtual inductor at the interfacing inverter output and an accurate power control and sharing algorithm with consideration of both impedance voltage drop effect and DG local load effect. Specifically, the virtual inductance can effectively prevent the coupling between the real and reactive powers by introducing a predominantly inductive impedance even in a low-voltage network with resistive line impedances. On the other hand, based on the predominantly inductive impedance, the proposed accurate reactive power sharing algorithm functions by estimating the impedance voltage drops and significantly improves the reactive power control and sharing accuracy. Finally, considering the different locations of loads in a multibus microgrid, the reactive power control accuracy is further improved by employing an online estimated reactive power offset to compensate the effects of DG local load power demands. The proposed power control strategy has been tested in simulation and experimentally on a low-voltage microgrid prototype.", "title": "" } ]
[ { "docid": "804cee969d47d912d8bdc40f3a3eeb32", "text": "The problem of matching a forensic sketch to a gallery of mug shot images is addressed in this paper. Previous research in sketch matching only offered solutions to matching highly accurate sketches that were drawn while looking at the subject (viewed sketches). Forensic sketches differ from viewed sketches in that they are drawn by a police sketch artist using the description of the subject provided by an eyewitness. To identify forensic sketches, we present a framework called local feature-based discriminant analysis (LFDA). In LFDA, we individually represent both sketches and photos using SIFT feature descriptors and multiscale local binary patterns (MLBP). Multiple discriminant projections are then used on partitioned vectors of the feature-based representation for minimum distance matching. We apply this method to match a data set of 159 forensic sketches against a mug shot gallery containing 10,159 images. Compared to a leading commercial face recognition system, LFDA offers substantial improvements in matching forensic sketches to the corresponding face images. We were able to further improve the matching performance using race and gender information to reduce the target gallery size. Additional experiments demonstrate that the proposed framework leads to state-of-the-art accuracys when matching viewed sketches.", "title": "" }, { "docid": "1fd83e5db732a1169aef1e1aae71fe54", "text": "In the present paper, we analyze the past, present and future of medicinal plants, both as potential antimicrobial crude drugs as well as a source for natural compounds that act as new anti-infection agents. In the past few decades, the search for new anti-infection agents has occupied many research groups in the field of ethnopharmacology. When we reviewed the number of articles published on the antimicrobial activity of medicinal plants in PubMed during the period between 1966 and 1994, we found 115; however, in the following decade between 1995 and 2004, this number more than doubled to 307. In the studies themselves one finds a wide range of criteria. Many focus on determining the antimicrobial activity of plant extracts found in folk medicine, essential oils or isolated compounds such as alkaloids, flavonoids, sesquiterpene lactones, diterpenes, triterpenes or naphtoquinones, among others. Some of these compounds were isolated or obtained by bio-guided isolation after previously detecting antimicrobial activity on the part of the plant. A second block of studies focuses on the natural flora of a specific region or country; the third relevant group of papers is made up of specific studies of the activity of a plant or principle against a concrete pathological microorganism. Some general considerations must be established for the study of the antimicrobial activity of plant extracts, essential oils and the compounds isolated from them. Of utmost relevance is the definition of common parameters, such as plant material, techniques employed, growth medium and microorganisms tested.", "title": "" }, { "docid": "acbb920f48119857f598388a39cdebb6", "text": "Quantitative analyses in landscape ecology have traditionally been dominated by the patch-mosaic concept in which landscapes are modeled as a mosaic of discrete patches. This model is useful for analyzing categorical data but cannot sufficiently account for the spatial heterogeneity present in continuous landscapes. 
Sub-pixel remote sensing classifications offer a potential data source for capturing continuous spatial heterogeneity but lack discrete land cover classes and therefore cannot be analyzed using standard landscape metric tools. This research introduces the threshold gradient method to allow transformation of continuous sub-pixel classifications into a series of discrete maps based on land cover proportion (i.e., intensity) that can be analyzed using landscape metric tools. Sub-pixel data are reclassified at multiple thresholds along a land cover continuum and landscape metrics are computed for each map. Metrics are plotted in response to intensity and these ‘scalograms’ are mathematically modeled using curve fitting techniques to allow determination of critical land cover thresholds (e.g., inflection points) where considerable landscape changes are occurring. Results show that critical land cover intensities vary between metrics, and the approach can generate increased ecological information not available with other landscape characterization methods.", "title": "" }, { "docid": "3936d7cf086384ac24afec31f49235bc", "text": "Purpose: To compare the Percentage of Consonants Correct (PCC) index of children with and without hearing loss, and to verify whether the time using hearing aids, the time in therapy, and the time spent until hearing loss was diagnosed influence the performance of deaf children. Methods: Participants were 30 children, 15 with hearing impairment and 15 with normal hearing, paired by gender and age. The PCC index was calculated in three different tasks: picture naming, imitation and spontaneous speech. The phonology tasks of the ABFW – Teste de Linguagem Infantil were used in the evaluation. Results: Differences were found between groups in all tasks, and normally hearing children had better results. PCC indexes presented by children with hearing loss characterized a moderately severe phonological disorder. Children enrolled in therapy for a longer period had better PCC indexes, and the longer they had been using hearing aids, the better their performances on the imitation task. Conclusion: Children with hearing loss have lower PCC indexes when compared to normally hearing children. The average performance and imitation are influenced by time in therapy and time using hearing aids.", "title": "" }, { "docid": "abe0205896b0edb31e1a527456b33184", "text": "MouseLight is a spatially-aware standalone mobile projector with the form factor of a mouse that can be used in combination with digital pens on paper. By interacting with the projector and the pen bimanually, users can visualize and modify the virtually augmented contents on top of the paper, and seamlessly transition between virtual and physical information. We present a high fidelity hardware prototype of the system and demonstrate a set of novel interactions specifically tailored to the unique properties of MouseLight. MouseLight differentiates itself from related systems such as PenLight in two aspects. First, MouseLight presents a rich set of bimanual interactions inspired by the ToolGlass interaction metaphor, but applied to physical paper. Secondly, our system explores novel displaced interactions, that take advantage of the independent input and output that is spatially aware of the underneath paper. These properties enable users to issue remote commands such as copy and paste or search. 
We also report on a preliminary evaluation of the system which produced encouraging observations and feedback.", "title": "" }, { "docid": "c2a32d79289299ef255ab53af02b7c6a", "text": "Deep latent variable models have been shown to facilitate the response generation for open-domain dialog systems. However, these latent variables are highly randomized, leading to uncontrollable generated responses. In this paper, we propose a framework allowing conditional response generation based on specific attributes. These attributes can be either manually assigned or automatically detected. Moreover, the dialog states for both speakers are modeled separately in order to reflect personal features. We validate this framework on two different scenarios, where the attribute refers to genericness and sentiment states respectively. The experiment result testified the potential of our model, where meaningful responses can be generated in accordance with the specified attributes.", "title": "" }, { "docid": "84b4228c5fdeb8df274bf2d60651b3ac", "text": "THE multiplayer game (MPG) market is segmented into a handful of readily identifiable genres, the most popular being first-person shooters, realtime strategy games, and role-playing games. First-person shooters (FPS) such as Quake [11], Half-Life [17], and Unreal Tournament [9] are fast-paced conflicts between up to thirty heavily armed players. Players in realtime strategy (RTS) games like Command & Conquer [19], StarCraft [8], and Age of Empires [18] or role-playing game (RPG) such as Diablo II [7] command tens or hundreds of units in battle against up to seven other players. Persistent virtual worlds such as Ultima Online [2], Everquest [12], and Lineage [14] encompass hundreds of thousands of players at a time (typically served by multiple servers). Cheating has always been a problem in computer games, and when prizes are involved can become a contractual issue for the game service provider. Here we examine a cheat where players lie about their network latency (and therefore the amount of time they have to react to their opponents) to see into the future and stay", "title": "" }, { "docid": "46e8318e76a1b2e539d7eafd65617993", "text": "A super wideband printed modified bow-tie antenna loaded with rounded-T shaped slots fed through a microstrip balun is proposed for microwave and millimeter-wave band imaging applications. The modified slot-loaded bow-tie pattern increases the electrical length of the bow-tie antenna reducing the lower band to 3.1 GHz. In addition, over the investigated frequency band up to 40 GHz, the proposed modified bow-tie pattern considerably flattens the input impedance response of the bow-tie resulting in a smooth impedance matching performance enhancing the reflection coefficient (S11) characteristics. The introduction of the modified ground plane printed underneath the bow-tie, on the other hand, yields to directional far-field radiation patterns with considerably enhanced gain performance. 
The S11 and E-plane/H-plane far-field radiation pattern measurements have been carried out and it is demonstrated that the fabricated bow-tie antenna operates across a measured frequency band of 3.1-40 GHz with an average broadband gain of 7.1 dBi.", "title": "" }, { "docid": "a3cb91fb614f3f772a277b3d125c4088", "text": "Exploring the inherent technical challenges in realizing the potential of Big Data.", "title": "" }, { "docid": "932d97b9a24f80c3b45782804e74881e", "text": "Deep learning (DL) has achieved remarkable progress over the past decade and been widely applied to many safety-critical applications. However, the robustness of DL systems recently receives great concerns, such as adversarial examples against computer vision systems, which could potentially result in severe consequences. Adopting testing techniques could help to evaluate the robustness of a DL system and therefore detect vulnerabilities at an early stage. The main challenge of testing such systems is that its runtime state space is too large: if we view each neuron as a runtime state for DL, then a DL system often contains massive states, rendering testing each state almost impossible. For traditional software, combinatorial testing (CT) is an effective testing technique to reduce the testing space while obtaining relatively high defect detection abilities. In this paper, we perform an exploratory study of CT on DL systems. We adapt the concept in CT and propose a set of coverage criteria for DL systems, as well as a CT coverage guided test generation technique. Our evaluation demonstrates that CT provides a promising avenue for testing DL systems. We further pose several open questions and interesting directions for combinatorial testing of DL systems.", "title": "" }, { "docid": "a40d3b98ab50a5cd924be09ab1f1cc40", "text": "Feeling comfortable reading and understanding financial statements is critical to the success of healthcare executives and physicians involved in management. Businesses use three primary financial statements: a balance sheet represents the equation, Assets = Liabilities + Equity; an income statement represents the equation, Revenues - Expenses = Net Income; a statement of cash flows reports all sources and uses of cash during the represented period. The balance sheet expresses financial indicators at one particular moment in time, whereas the income statement and the statement of cash flows show activity that occurred over a stretch of time. Additional information is disclosed in attached footnotes and other supplementary materials. There are two ways to prepare financial statements. Cash-basis accounting recognizes revenue when it is received and expenses when they are paid. Accrual-basis accounting recognizes revenue when it is earned and expenses when they are incurred. Although cash-basis is acceptable, periodically using the accrual method reveals important information about receivables and liabilities that could otherwise remain hidden. Become more engaged with your financial statements by spending time reading them, tracking key performance indicators, and asking accountants and financial advisors questions. This will help you better understand your business and build a successful future.", "title": "" }, { "docid": "5acad83ce99c6403ef20bfa62672eafd", "text": "A large class of sequential decision-making problems under uncertainty can be modeled as Markov and Semi-Markov Decision Problems, when their underlying probability structure has a Markov chain. 
They may be solved by using classical dynamic programming methods. However, dynamic programming methods suffer from the curse of dimensionality and break down rapidly in face of large state spaces. In addition, dynamic programming methods require the exact computation of the so-called transition probabilities, which are often hard to obtain and are hence said to suffer from the curse of modeling as well. In recent years, a simulation-based method, called reinforcement learning, has emerged in the literature. It can, to a great extent, alleviate stochastic dynamic programming of its curses by generating near-optimal solutions to problems having large state-spaces and complex transition mechanisms. In this paper, a simulation-based algorithm that solves Markov and Semi-Markov decision problems is presented, along with its convergence analysis. The algorithm involves a step-size based transformation on two time scales. Its convergence analysis is based on a recent result on asynchronous convergence of iterates on two time scales. We present numerical results from the new algorithm on a classical preventive maintenance case study of a reasonable size, where results on the optimal policy are also available. In addition, we present a tutorial that explains the framework of reinforcement learning in the context of semi-Markov decision problems for long-run average cost.", "title": "" }, { "docid": "b6c94af660b76a66154a973a4cfbe03f", "text": "Deep neural networks have evolved remarkably over the past few years and they are currently the fundamental tools of many intelligent systems. At the same time, the computational complexity and resource consumption of these networks continue to increase. This poses a significant challenge to the deployment of such networks, especially in real-time applications or on resource-limited devices. Thus, network acceleration has become a hot topic within the deep learning community. As for hardware implementation of deep neural networks, a batch of accelerators based on a field-programmable gate array (FPGA) or an application-specific integrated circuit (ASIC) have been proposed in recent years. In this paper, we provide a comprehensive survey of recent advances in network acceleration, compression, and accelerator design from both algorithm and hardware points of view. Specifically, we provide a thorough analysis of each of the following topics: network pruning, low-rank approximation, network quantization, teacher–student networks, compact network design, and hardware accelerators. Finally, we introduce and discuss a few possible future directions.", "title": "" }, { "docid": "03764875c88a1480264050b0b0a16437", "text": "Social media anomaly detection is of critical importance to prevent malicious activities such as bullying, terrorist attack planning, and fraud information dissemination. With the recent popularity of social media, new types of anomalous behaviors arise, causing concerns from various parties. While a large amount of work have been dedicated to traditional anomaly detection problems, we observe a surge of research interests in the new realm of social media anomaly detection. In this paper, we present a survey on existing approaches to address this problem. We focus on the new type of anomalous phenomena in the social media and review the recent developed techniques to detect those special types of anomalies. We provide a general overview of the problem domain, common formulations, existing methodologies and potential directions. 
With this work, we hope to call out the attention from the research community on this challenging problem and open up new directions that we can contribute in the future.", "title": "" }, { "docid": "9006f257d25a9ba4dd2ae07eccccb0c2", "text": "Using memoization and various other optimization techniques, the number of dissections of the n × n square into n polyominoes of size n is computed for n ≤ 8. On this task our method outperforms Donald Knuth’s Algorithm X with Dancing Links. The number of jigsaw sudoku puzzle solutions is computed for n ≤ 7. For every jigsaw sudoku puzzle polyomino cover with n ≤ 6 the size of its smallest critical sets is determined. Furthermore it is shown that for every n ≥ 4 there exists a polyomino cover that does not allow for any sudoku puzzle solution. We give a closed formula for the number of possible ways to fill the border of an n × n square with numbers while obeying Latin square constraints. We define a cannibal as a nonempty hyperpolyomino that disconnects its exterior from its interior, where the interior is exactly the size of the hyperpolyomino itself, and we present the smallest found cannibals in two and three dimensions.", "title": "" }, { "docid": "bf83b9fef9b4558538b2207ba57b4779", "text": "This paper presents preliminary results for the design, development and evaluation of a hand rehabilitation glove fabricated using soft robotic technology. Soft actuators comprised of elastomeric materials with integrated channels that function as pneumatic networks (PneuNets), are designed and geometrically analyzed to produce bending motions that can safely conform with the human finger motion. Bending curvature and force response of these actuators are investigated using geometrical analysis and a finite element model (FEM) prior to fabrication. The fabrication procedure of the chosen actuator is described followed by a series of experiments that mechanically characterize the actuators. The experimental data is compared to results obtained from FEM simulations showing good agreement. Finally, an open-palm glove design and the integration of the actuators to it are described, followed by a qualitative evaluation study.", "title": "" }, { "docid": "8caaea6ffb668c019977809773a6d8c5", "text": "In the past several years, a number of different language modeling improvements over simple trigram models have been found, including caching, higher-order n-grams, skipping, interpolated Kneser–Ney smoothing, and clustering. We present explorations of variations on, or of the limits of, each of these techniques, including showing that sentence mixture models may have more potential. While all of these techniques have been studied separately, they have rarely been studied in combination. We compare a combination of all techniques together to a Katz smoothed trigram model with no count cutoffs. We achieve perplexity reductions between 38 and 50% (1 bit of entropy), depending on training data size, as well as a word error rate reduction of 8 .9%. Our perplexity reductions are perhaps the highest reported compared to a fair baseline. c © 2001 Academic Press", "title": "" }, { "docid": "bd92af2495300beb16e8832a80e9fc25", "text": "Increasingly, business analytics is seen to provide the possibilities for businesses to effectively support strategic decision-making, thereby to become a source of strategic business value. 
However, little research exists regarding the mechanism through which business analytics supports strategic decision-making and ultimately organisational performance. This paper draws upon literature on IT affordances and strategic decision-making to (1) understand the decision-making affordances provided by business analytics, and (2) develop a research model linking business analytics, data-driven culture, decision-making affordances, strategic decision-making, and organisational performance. The model is empirically tested using structural equation modelling based on 296 survey responses collected from UK businesses. The study produces four main findings: (1) business analytics has a positive effect on decision-making affordances both directly and indirectly through the mediation of a data-driven culture; (2) decision-making affordances significantly influence strategic decision comprehensiveness positively and intuitive decision-making negatively; (3) data-driven culture has a significant and positive effect on strategic decision comprehensiveness; and (4) strategic decision comprehensiveness has a positive effect on organisational performance but a negative effect on intuitive decision-making.", "title": "" }, { "docid": "46927d9d8eff9727154b0aa5ac5e4d9a", "text": "Intelligent human agents exist in a cooperative social environment that facilitates learning. They learn not only by trial and error, but also through cooperation by sharing instantaneous information, episodic experience, and learned knowledge. The key investigations of this paper are: Given the same number of reinforcement learning agents, will cooperative agents outperform independent agents who do not communicate during learning? And what is the price for such cooperation? Using independent agents as a benchmark, cooperative agents are studied in the following ways: sharing sensation, sharing episodes, and sharing learned policies. This paper shows that (a) additional sensation from another agent is beneficial if it can be used efficiently, (b) sharing learned policies or episodes among agents speeds up learning at the cost of communication, and (c) for joint tasks, agents engaging in partnership can significantly outperform independent agents, although they may learn slowly in the beginning. These tradeoffs are not just limited to multi-agent reinforcement learning.", "title": "" }, { "docid": "cfee5bd5aaee1e8ea40ce6ce88746902", "text": "A CPW-fed planar monopole antenna for triple band operation is presented. The antenna consists of an elliptical radiating patch with a curved ground plane with embedded slots. When two narrow slots are introduced on a wideband elliptical monopole antenna (2.2-7 GHz), two bands are rejected without affecting the antenna properties at the rest of the operating frequencies. By properly choosing the length and location of the slots, a triple band antenna design is achieved. Impedance and radiation characteristics of the antenna are studied and results indicate that it is suitable for the 2.5-2.69 GHz, 3.4-3.69 GHz, and 5.25-5.85 GHz WiMAX applications and also the 2.4-2.484 GHz, 5.15-5.35 GHz, and 5.725-5.825 GHz WLAN applications. The antenna exhibits omnidirectional radiation coverage with its gain significantly reduced at the notched frequency bands.", "title": "" } ]
scidocsrr
3d2adcb8dc3f5229f54f072cfddfa500
Chunking based Malayalam paraphrase identification using unfolding recursive autoencoders
[ { "docid": "e273298153872073e463662b5d6d8931", "text": "The lack of readily-available large corpora of aligned monolingual sentence pairs is a major obstacle to the development of Statistical Machine Translation-based paraphrase models. In this paper, we describe the use of annotated datasets and Support Vector Machines to induce larger monolingual paraphrase corpora from a comparable corpus of news clusters found on the World Wide Web. Features include: morphological variants; WordNet synonyms and hypernyms; loglikelihood-based word pairings dynamically obtained from baseline sentence alignments; and formal string features such as word-based edit distance. Use of this technique dramatically reduces the Alignment Error Rate of the extracted corpora over heuristic methods based on position of the sentences in the text.", "title": "" }, { "docid": "98a6dce997d7d8e93648d68c8be867a6", "text": "The first work of this kind in a monolingual setting successfully generates two and threeword phrases with predetermined syntactic structures by decoupling the task into three phases: synthesis, decomposition, and search [4]. During the synthesis phase, a vector is constructed from some input text. This vector is decomposed into multiple output vectors that are then matched to words in the vocabulary using a nearest-neighbor search.", "title": "" }, { "docid": "87f0a390580c452d77fcfc7040352832", "text": "• J. Wieting, M. Bansal, K. Gimpel, K. Livescu, and D. Roth. 2015. From paraphrase database to compositional paraphrase model and back. TACL. • K. S. Tai, R. Socher, and C. D. Manning. 2015. Improved semantic representations from treestructured long short-term memory networks. ACL. • W. Yin and H. Schutze. 2015. Convolutional neural network for paraphrase identification. NAACL. The product also streams internet radio and comes with a 30-day free trial for realnetworks' rhapsody music subscription. The device plays internet radio streams and comes with a 30-day trial of realnetworks rhapsody music service. Given two sentences, measure their similarity:", "title": "" } ]
[ { "docid": "7954b2262edac9a45cf3e13fc50e9aa2", "text": "Recent research has focused heavily on the practicality and feasibility of alternative architectures for supporting continuous auditing. In this paper, we explore the alternative architectures for continuous auditing that have been proposed in both the research and practice environments. We blend a focus on the practical realities of the current technological options and ERP structures with the emerging theory and research on continuous assurance models. The focus is on identifying the strengths and weaknesses of each architectural form as a basis for forming a research agenda that could allow researchers to contribute to the future evolution of both ERP system designs and auditor implementation strategies. There are substantial implications and insights that should be of interest to both researchers and practitioners interested in exploring continuous audit feasibility, capability, and organizational impact.", "title": "" }, { "docid": "3eaaab00b9b9f7441cda99d659ad7abd", "text": "Limited angle problem is a challenging issue in x-ray computed tomography (CT) field. Iterative reconstruction methods that utilize the additional prior can suppress artifacts and improve image quality, but unfortunately require increased computation time. An interesting way is to restrain the artifacts in the images reconstructed from the practical filtered back projection (FBP) method. Frikel and Quinto have proved that the streak artifacts in FBP results could be characterized. It indicates that the artifacts created by FBP method have specific and similar characteristics in a stationary limited-angle scanning configuration. Based on this understanding, this work aims at developing a method to extract and suppress specific artifacts of FBP reconstructions for limited-angle tomography. A data-driven learning-based method is proposed based on a deep convolutional neural network. An end-to-end mapping between the FBP and artifact-free images is learned and the implicit features involving artifacts will be extracted and suppressed via nonlinear mapping. The qualitative and quantitative evaluations of experimental results indicate that the proposed method show a † Corresponding authors: Liang Li, liliang@tsinghua.edu.cn Bin Yan, ybspace@hotmail.com Contacts for other authors: Hanmung Zhang: z.hanming@hotmail.com Kai Qiao: 15517181502@163.com Linyuan Wang: wanglinyuanwly@163.com Lei Li: leehotline@aliyun.com Guoen Hu: 13838265028@126.com stable and prospective performance on artifacts reduction and detail recovery for limited angle tomography. The presented strategy provides a simple and efficient approach for improving image quality of the reconstruction results from limited projection data.", "title": "" }, { "docid": "be989252cdad4886613f53c7831454cb", "text": "Stress and cortisol are known to impair memory retrieval of well-consolidated declarative material. The effects of cortisol on memory retrieval may in particular be due to glucocorticoid (GC) receptors in the hippocampus and prefrontal cortex (PFC). Therefore, effects of stress and cortisol should be observable on both hippocampal-dependent declarative memory retrieval and PFC-dependent working memory (WM). In the present study, it was tested whether psychosocial stress would impair both WM and memory retrieval in 20 young healthy men. In addition, the association between cortisol levels and cognitive performance was assessed. 
It was found that stress impaired WM at high loads, but not at low loads in a Sternberg paradigm. High cortisol levels at the time of testing were associated with slow WM performance at high loads, and with impaired recall of moderately emotional, but not of highly emotional paragraphs. Furthermore, performance at high WM loads was associated with memory retrieval. These data extend previous results of pharmacological studies in finding WM impairments after acute stress at high workloads and cortisol-related retrieval impairments.", "title": "" }, { "docid": "0e9367417aa348de3d396a0d292ad1c3", "text": "Wheelchair control requires multiple degrees of freedom and fast intention detection, which makes electroencephalography (EEG)-based wheelchair control a big challenge. In our previous study, we have achieved direction (turning left and right) and speed (acceleration and deceleration) control of a wheelchair using a hybrid brain–computer interface (BCI) combining motor imagery and P300 potentials. In this paper, we proposed hybrid EEG-EOG BCI, which combines motor imagery, P300 potentials, and eye blinking to implement forward, backward, and stop control of a wheelchair. By performing relevant activities, users (e.g., those with amyotrophic lateral sclerosis and locked-in syndrome) can navigate the wheelchair with seven steering behaviors. Experimental results on four healthy subjects not only demonstrate the efficiency and robustness of our brain-controlled wheelchair system but also indicate that all the four subjects could control the wheelchair spontaneously and efficiently without any other assistance (e.g., an automatic navigation system).", "title": "" }, { "docid": "562a86a07858a118fd5beef075247341", "text": "Despite the criticism concerning the value of TV content, research reveals several worthwhile aspects -- one of them is the opportunity to learn. In this article we explore the characteristics of interactive TV applications that facilitate education and interactive entertainment. In doing so we analyze research methods and empirical results from experimental and field studies. The findings suggest that interactive TV applications provide support for education and entertainment for children and young people, as well as continuous education for all. In particular, interactive TV is especially suitable for (1) informal learning and (2) for engaging and motivating its audience. We conclude with an agenda for future interactive TV research in entertainment and education.", "title": "" }, { "docid": "a91c43ef77f03672011d0353f00a1c5d", "text": "Presence, the experience of ‘being there’ in a mediated environment, has become closely associated with VR and other advanced media. Different types of presence are discussed, including physical presence, social presence, and co-presence. Fidelity-based approaches to presence research emphasize the fact that as media become increasingly interactive, perceptually realistic, and immersive, the experience of presence becomes more convincing. In addition, the ecological-cultural approach is described, pointing out the importance of the possibility of action in mediated environments, as well as the role that a common cultural framework plays in engendering a sense of presence. 
In particular for multi-user or collaborative virtual environments (CVEs), processes of negotiation and community creation need to be supported by the CVE design to enable communication and the creation of a social context within the CVE.", "title": "" }, { "docid": "2c6fd73e6ec0ebc0ae257676c712d024", "text": "This paper addresses the problem of spatiotemporal localization of actions in videos. Compared to leading approaches, which all learn to localize based on carefully annotated boxes on training video frames, we adhere to a weakly-supervised solution that only requires a video class label. We introduce an actor-supervised architecture that exploits the inherent compositionality of actions in terms of actor transformations, to localize actions. We make two contributions. First, we propose actor proposals derived from a detector for human and non-human actors intended for images, which is linked over time by Siamese similarity matching to account for actor deformations. Second, we propose an actor-based attention mechanism that enables the localization of the actions from action class labels and actor proposals and is end-to-end trainable. Experiments on three human and non-human action datasets show actor supervision is state-of-the-art for weakly-supervised action localization and is even competitive to some fullysupervised alternatives.", "title": "" }, { "docid": "10b6b29254236c600040d27498f40feb", "text": "Large-scale clustering has been widely used in many applications, and has received much attention. Most existing clustering methods suffer from both expensive computation and memory costs when applied to large-scale datasets. In this paper, we propose a novel clustering method, dubbed compressed k-means (CKM), for fast large-scale clustering. Specifically, high-dimensional data are compressed into short binary codes, which are well suited for fast clustering. CKM enjoys two key benefits: 1) storage can be significantly reduced by representing data points as binary codes; 2) distance computation is very efficient using Hamming metric between binary codes. We propose to jointly learn binary codes and clusters within one framework. Extensive experimental results on four large-scale datasets, including two million-scale datasets demonstrate that CKM outperforms the state-of-theart large-scale clustering methods in terms of both computation and memory cost, while achieving comparable clustering accuracy.", "title": "" }, { "docid": "948b157586c75674e75bd50b96162861", "text": "We propose a database design methodology for NoSQL systems. The approach is based on NoAM (NoSQL Abstract Model), a novel abstrac t d ta model for NoSQL databases, which exploits the commonalities of various N SQL systems and is used to specify a system-independent representatio n of the application data. This intermediate representation can be then implemented in target NoSQL databases, taking into account their specific features. Ov rall, the methodology aims at supporting scalability, performance, and consisten cy, as needed by next-generation web applications.", "title": "" }, { "docid": "c574570eb7366fcf0c15fb0fa833365c", "text": "Many time-critical applications require predictable performance and tasks in these applications have deadlines to be met. In this paper, we propose an efficient algorithm for nonpreemptive scheduling of dynamically arriving real-time tasks (aperiodic tasks) in multiprocessor systems. 
A real-time task is characterized by its deadline, resource requirements, and worst case computation time on p processors, where p is the degree of parallelization of the task. We use this parallelism in tasks to meet their deadlines and, thus, obtain better schedulability compared to nonparallelizable task scheduling algorithms. To study the effectiveness of the proposed scheduling algorithm, we have conducted extensive simulation studies and compared its performance with the myopic [8] scheduling algorithm. The simulation studies show that the schedulability of the proposed algorithm is always higher than that of the myopic algorithm for a wide variety of task parameters. Index Terms —Multiprocessor, real-time systems, dynamic scheduling, parallelizable tasks, resource constraints. —————————— ✦ ——————————", "title": "" }, { "docid": "d956c805ee88d1b0ca33ce3f0f838441", "text": "The task of relation classification in the biomedical domain is complex due to the presence of samples obtained from heterogeneous sources such as research articles, discharge summaries, or electronic health records. It is also a constraint for classifiers which employ manual feature engineering. In this paper, we propose a convolutional recurrent neural network (CRNN) architecture that combines RNNs and CNNs in sequence to solve this problem. The rationale behind our approach is that CNNs can effectively identify coarse-grained local features in a sentence, while RNNs are more suited for long-term dependencies. We compare our CRNN model with several baselines on two biomedical datasets, namely the i2b22010 clinical relation extraction challenge dataset, and the SemEval-2013 DDI extraction dataset. We also evaluate an attentive pooling technique and report its performance in comparison with the conventional max pooling method. Our results indicate that the proposed model achieves state-of-the-art performance on both datasets.1", "title": "" }, { "docid": "e974bb34f97d2afb9739f27ec9375376", "text": "In portable, 3-D, or ultra-fast ultrasound (US) imaging systems, there is an increasing demand to reconstruct high quality images from limited number of data. However, the existing solutions require either hardware changes or computationally expansive algorithms. To overcome these limitations, here we propose a novel deep learning approach that interpolates the missing RF data by utilizing the sparsity of the RF data in the Fourier domain. Extensive experimental results from sub-sampled RF data from a real US system confirmed that the proposed method can effectively reduce the data rate without sacrificing the image quality.", "title": "" }, { "docid": "72485a3c94c2dfa5121e91f2a3fc0f4a", "text": "Four experiments support the hypothesis that syntactically relevant information about verbs is encoded in the lexicon in semantic event templates. A verb's event template represents the participants in an event described by the verb and the relations among the participants. The experiments show that lexical decision times are longer for verbs with more complex templates than verbs with less complex templates and that, for both transitive and intransitive sentences, sentences containing verbs with more complex templates take longer to process. 
In contrast, sentence processing times did not depend on the probabilities with which the verbs appear in transitive versus intransitive constructions in a large corpus of naturally produced sentences.", "title": "" }, { "docid": "3a106eb1d70a5a867d13a7a976f8c49a", "text": "Many large organizations are adopting agile software development as part of their continuous push towards higher flexibility and shorter lead times, yet few reports on large-scale agile transformations are available in the literature. In this paper we report how Ericsson introduced agile in a new R&D product development program developing a XaaS platform and a related set of services, while simultaneously scaling it up aggressively. The overarching goal for the R&D organization, distributed to five sites at two continents, was to achieve continuous feature delivery. This single case study is based on 45 semi-structured interviews during visits at four sites, and five observation sessions at three sites. We describe how the organization experimented with different set-ups for their tens of agile teams aiming for rapid end-to-end development: from component-based virtual teams to totally cross-functional, cross-component, cross-site teams. Moreover, we discuss the challenges the organization faced and how they mitigated them on their journey towards continuous and rapid software engineering. We present four lessons learned for large-scale agile transformations: 1) consider using an experimental approach to transformation, 2) consider implementing the transformation step-wise in complex large-scale settings, 3) team inter-changeability can be limited in a complex large-scale product — specialization might be needed, and 4) not using a common agile framework for the whole organization, in combination with insufficient common trainings and coaching may lead to a lack of common direction in the agile implementation. Further in-depth case studies on large-scale agile transformations, on customizing agile to large-scale settings, as well as on the use of scaling frameworks are needed.", "title": "" }, { "docid": "a289829cb63b56280a1e06f69c6670a9", "text": "This article presents an overview of the ability model of emotional intelligence and includes a discussion about how and why the concept became useful in both educational and workplace settings. We review the four underlying emotional abilities comprising emotional intelligence and the assessment tools that that have been developed to measure the construct. A primary goal is to provide a review of the research describing the correlates of emotional intelligence. We describe what is known about how emotionally intelligent people function both intraand interpersonally and in both academic and workplace settings. The facts point in one direction: The job offer you have in hand is perfect – great salary, ideal location, and tremendous growth opportunities. Yet, there is something that makes you feel uneasy about resigning from your current position and moving on. What will you do? Ignore the feeling and choose what appears to be the logical path, or go with your gut and risk disappointing your family? Or, might you consider both your thoughts and feelings about the job in order to make the decision? Solving problems and making wise decisions using both thoughts and feelings or logic and intuition is a part of what we refer to as emotional intelligence (Mayer & Salovey, 1997; Salovey & Mayer, 1990). 
Linking emotions and intelligence was relatively novel when first introduced in a theoretical model about twenty years ago (Salovey & Mayer, 1990; but see Gardner, 1983/1993). Among the many questions posed by both researchers and laypersons alike were: Is emotional intelligence an innate, nonmalleable mental ability? Can it be acquired with instruction and training? Is it a new intelligence or just the repackaging of existing constructs? How can it be measured reliably and validly? What does the existence of an emotional intelligence mean in everyday life? In what ways does emotional intelligence affect mental health, relationships, daily decisions, and academic and workplace performance? In this article, we provide an overview of the theory of emotional intelligence, including a brief discussion about how and why the concept has been used in both educational and workplace settings. Because the field is now replete with articles, books, and training manuals on the topic – and because the definitions, claims, and measures of emotional intelligence have become extremely diverse – we also clarify definitional and measurement issues. A final goal is to provide an up-to-date review of the research describing what the lives of emotionally intelligent people ‘look like’ personally, socially, academically, and in the workplace. What is Emotional Intelligence? Initial conception of emotional intelligence Emotional intelligence was described formally by Salovey and Mayer (1990). They defined it as ‘the ability to monitor one’s own and others’ feelings and emotions, to discriminate among them and to use this information to guide one’s thinking and actions’ (p. 189). They also provided an initial empirical demonstration of how an aspect of emotional intelligence could be measured as a mental ability (Mayer, DiPaolo, & Salovey, 1990). In both articles, emotional intelligence was presented as a way to conceptualize the relation between cognition and affect. Historically, ‘emotion’ and ‘intelligence’ were viewed as being in opposition to one another (Lloyd, 1979). How could one be intelligent about the emotional aspects of life when emotions derail individuals from achieving their goals (e.g., Young, 1943)? The theory of emotional intelligence suggested the opposite: emotions make cognitive processes adaptive and individuals can think rationally about emotions. Emotional intelligence is an outgrowth of two areas of psychological research that emerged over forty years ago. The first area, cognition and affect, involved how cognitive and emotional processes interact to enhance thinking (Bower, 1981; Isen, Shalker, Clark, & Karp, 1978; Zajonc, 1980). Emotions like anger, happiness, and fear, as well as mood states, preferences, and bodily states, influence how people think, make decisions, and perform different tasks (Forgas & Moylan, 1987; Mayer & Bremer, 1985; Salovey & Birnbaum, 1989). The second was an evolution in models of intelligence itself. Rather than viewing intelligence strictly as how well one engaged in analytic tasks associated with memory, reasoning, judgment, and abstract thought, theorists and investigators began considering intelligence as a broader array of mental abilities (e.g., Cantor & Kihlstrom, 1987; Gardner, 1983/1993; Sternberg, 1985). 
Sternberg (1985), for example, urged educators and scientists to place an emphasis on creative abilities and practical knowledge that could be acquired through careful navigation of one’s everyday environment. Gardner’s (1983) ‘personal intelligences,’ including the capacities involved in accessing one’s own feeling life (intrapersonal intelligence) and the ability to monitor others’ emotions and mood (interpersonal intelligence), provided a compatible backdrop for considering emotional intelligence as a viable construct. Popularization of emotional intelligence The term ‘emotional intelligence’ was mostly unfamiliar to researchers and the general public until Goleman (1995) wrote the best-selling trade book, Emotional Intelligence: Why it can Matter More than IQ. The book quickly caught the eye of the media, public, and researchers. In it, Goleman described how scientists had discovered a connection between emotional competencies and prosocial behavior; he also declared that emotional intelligence was both an answer to the violence plaguing our schools and ‘as powerful and at times more powerful than IQ’ in predicting success in life (Goleman, 1995; p. 34). Both in the 1995 book and in a later book focusing on workplace applications of emotional intelligence (Goleman, 1998), Goleman described the construct as an array of positive attributes including political awareness, self-confidence, conscientiousness, and achievement motives rather than focusing only on an intelligence that could help individuals solve problems effectively (Brackett & Geher, 2006). Goleman’s views on emotional intelligence, in part because they were articulated for/to the general public, extended
Proponents of this approach use self-report instruments as opposed to performance assessments to measure emotional intelligence (i.e., instead of asking people to demonstrate how they perceive an emotional expression accurately, self-report measures ask people to judge and report how good they are at perceiving others’ emotions accurately). There has been a debate about the ideal method to measure emotional intelligence. On the surface, self-report (or self-judgment) scales are desirable: they are less costly, easier to administer, and take considerably less time to complete than performance tests (Brackett, Rivers, Shiffman, Lerner, & Salovey, 2006). However, it is well known that self-report measures are problematic because respondents can provide socially desirable responses rather than truthful ones, or respondents may not actually know how good they are at emotion-based tasks – to whom do they compare themselves (e.g., DeNisi & Shaw, 1977; Paulhus, Lysy, & Yik, 1998)? As they apply to emotional intelligence, selfreport measures are related weakly to performance assessments and lack discriminant validity from existing measures of personality (Brackett & Mayer, 2003; Brackett et al., 2006). In a meta-analysis of 13 studies that compared performance tests (e.g., Mayer, Salovey, & Caruso, 2002) and self-report scales (e.g., EQ-i; Bar-On, 1997), Van Rooy, Viswesvaran, and Pluta (2005) reported that performance tests were relatively distinct from self-report measures (r = 0.14). Even when a self-report measure is designed to map onto performance tests, correlations are very low (Brackett et al., 2006a). Finally, self-report measures of emotional intelligence are more susceptible to faking than performance tests (Day & Carroll, 2008). For the reasons described in this section, we assert that the ability-based definition and performance-based measure", "title": "" }, { "docid": "ac8a620e752144e3f4e20c16efb56ebc", "text": "or as ventricular fibrillation, the circulation must be restored promptly; otherwise anoxia will result in irreversible damage. There are two techniques that may be used to meet the emergency: one is to open the chest and massage the heart directly and the other is to accomplish the same end by a new method of closed-chest cardiac massage. The latter method is described in this communication. The closed-chest alternating current defibrillator ' that", "title": "" }, { "docid": "d01b8d59f5e710bcf75978d1f7dcdfa3", "text": "Over the last few decades, the use of electroencephalography (EEG) signals for motor imagery based brain-computer interface (MI-BCI) has gained widespread attention. Deep learning have also gained widespread attention and used in various application such as natural language processing, computer vision and speech processing. However, deep learning has been rarely used for MI EEG signal classification. In this paper, we present a deep learning approach for classification of MI-BCI that uses adaptive method to determine the threshold. The widely used common spatial pattern (CSP) method is used to extract the variance based CSP features, which is then fed to the deep neural network for classification. Use of deep neural network (DNN) has been extensively explored for MI-BCI classification and the best framework obtained is presented. The effectiveness of the proposed framework has been evaluated using dataset IVa of the BCI Competition III. It is found that the proposed framework outperforms all other competing methods in terms of reducing the maximum error. 
The framework can be used for developing BCI systems using wearable devices as it is computationally less expensive and more reliable compared to the best competing methods.", "title": "" }, { "docid": "61b7275a150b34cf9a0585bdedd22106", "text": "The objective of knowledge graph embedding is to encode both entities and relations of knowledge graphs into continuous low-dimensional vector spaces. Previously, most works focused on symbolic representation of knowledge graph with structure information, which can not handle new entities or entities with few facts well. In this paper, we propose a novel deep architecture to utilize both structural and textual information of entities. Specifically, we introduce three neural models to encode the valuable information from text description of entity, among which an attentive model can select related information as needed. Then, a gating mechanism is applied to integrate representations of structure and text into a unified architecture. Experiments show that our models outperform baseline by margin on link prediction and triplet classification tasks. Source codes of this paper will be available on Github.", "title": "" }, { "docid": "7a3f69a9da7fc754f6de7e5720147857", "text": "We compare four high-profile waterfall security-engineering processes (CLASP, Microsoft SDL, Cigital Touchpoints and Common Criteria) with the available preconditions within agile processes. Then, using a survey study, agile security activities are identified and evaluated by practitioners from large companies, e.g. software and telecommunication companies. Those activities are compared and a specific security engineering process is suggested for an agile process setting that can provide high benefit with low integration cost.", "title": "" }, { "docid": "e7adf9c63fd7a3814b0c565c3a4c14a3", "text": "A capsule is a group of neurons whose outputs represent different properties of the same entity. Each layer in a capsule network contains many capsules. We describe a version of capsules in which each capsule has a logistic unit to represent the presence of an entity and a 4x4 matrix which could learn to represent the relationship between that entity and the viewer (the pose). A capsule in one layer votes for the pose matrix of many different capsules in the layer above by multiplying its own pose matrix by trainable viewpoint-invariant transformation matrices that could learn to represent part-whole relationships. Each of these votes is weighted by an assignment coefficient. These coefficients are iteratively updated for each image using the Expectation-Maximization algorithm such that the output of each capsule is routed to a capsule in the layer above that receives a cluster of similar votes. The transformation matrices are trained discriminatively by backpropagating through the unrolled iterations of EM between each pair of adjacent capsule layers. On the smallNORB benchmark, capsules reduce the number of test errors by 45% compared to the state-of-the-art. Capsules also show far more resistance to white box adversarial attacks than our baseline convolutional neural network.", "title": "" } ]
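The final passage above describes capsules whose 4x4 pose matrices vote for higher-level capsules through trainable transformation matrices, with the votes combined according to assignment coefficients refined by EM. The sketch below shows only the vote computation and a single weighted-average aggregation step on random placeholder tensors; the uniform assignment coefficients, the tensor sizes, and the omission of the activation units and the E-step are all simplifications, so this is not the published routing procedure.

```python
# Simplified vote aggregation between two capsule layers (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 8, 4                                   # lower/upper layer sizes
poses = rng.normal(size=(n_in, 4, 4))                # lower-layer pose matrices
transforms = rng.normal(size=(n_in, n_out, 4, 4))    # trainable W_ij (random here)

# Each lower capsule i casts a vote for each upper capsule j: V_ij = M_i @ W_ij
votes = np.einsum("iab,ijbc->ijac", poses, transforms)   # (n_in, n_out, 4, 4)

# Assignment coefficients r_ij; EM routing would refine these iteratively,
# here they are simply uniform.
r = np.full((n_in, n_out), 1.0 / n_out)

# M-step-like update: each upper capsule's pose is the r-weighted mean of votes.
weights = r / r.sum(axis=0, keepdims=True)                # normalize per upper capsule
upper_poses = np.einsum("ij,ijab->jab", weights, votes)   # (n_out, 4, 4)

print(upper_poses.shape)  # (4, 4, 4): one 4x4 pose per upper-layer capsule
```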
scidocsrr
1d21c7fbb3c16f8a9554dd590bba0cc9
Towards Monocular Digital Elevation Model (DEM) Estimation by Convolutional Neural Networks - Application on Synthetic Aperture Radar Images
[ { "docid": "bd5f2e7a0966a277c5c26b42c0fcf16f", "text": "Obstacle Detection is a central problem for any robotic system, and critical for autonomous systems that travel at high speeds in unpredictable environment. This is often achieved through scene depth estimation, by various means. When fast motion is considered, the detection range must be longer enough to allow for safe avoidance and path planning. Current solutions often make assumption on the motion of the vehicle that limit their applicability, or work at very limited ranges due to intrinsic constraints. We propose a novel appearance-based Object Detection system that is able to detect obstacles at very long range and at a very high speed (~ 300Hz), without making assumptions on the type of motion. We achieve these results using a Deep Neural Network approach trained on real and synthetic images and trading some depth accuracy for fast, robust and consistent operation. We show how photo-realistic synthetic images are able to solve the problem of training set dimension and variety typical of machine learning approaches, and how our system is robust to massive blurring of test images.", "title": "" }, { "docid": "3a50df4f64df3c65fbac1727ebe7725a", "text": "Modern autonomous mobile robots require a strong understanding of their surroundings in order to safely operate in cluttered and dynamic environments. Monocular depth estimation offers a geometry-independent paradigm to detect free, navigable space with minimum space, and power consumption. These represent highly desirable features, especially for microaerial vehicles. In order to guarantee robust operation in real-world scenarios, the estimator is required to generalize well in diverse environments. Most of the existent depth estimators do not consider generalization, and only benchmark their performance on publicly available datasets after specific fine tuning. Generalization can be achieved by training on several heterogeneous datasets, but their collection and labeling is costly. In this letter, we propose a deep neural network for scene depth estimation that is trained on synthetic datasets, which allow inexpensive generation of ground truth data. We show how this approach is able to generalize well across different scenarios. In addition, we show how the addition of long short-term memory layers in the network helps to alleviate, in sequential image streams, some of the intrinsic limitations of monocular vision, such as global scale estimation, with low computational overhead. We demonstrate that the network is able to generalize well with respect to different real-world environments without any fine tuning, achieving comparable performance to state-of-the-art methods on the KITTI dataset.", "title": "" } ]
[ { "docid": "41b7b8638fa1d3042873ca70f9c338f1", "text": "The LC50 (78, 85 ppm) and LC90 (88, 135 ppm) of Anagalis arvensis and Calendula micrantha respectively against Biomphalaria alexandrina were higher than those of the non-target snails, Physa acuta, Planorbis planorbis, Helisoma duryi and Melanoides tuberculata. In contrast, the LC50 of Niclosamide (0.11 ppm) and Copper sulphate (CuSO4) (0.42 ppm) against B. alexandrina were lower than those of the non-target snails. The mortalities percentage among non-target snails ranged between 0.0 & 20% when sublethal concentrations of CuSO4 against B. alexandrina mixed with those of C. micrantha and between 0.0 & 40% when mixed with A. arvensis. Mortalities ranged between 0.0 & 50% when Niclosamide was mixed with each of A. arvensis and C. micrantha. A. arvensis induced 100% mortality on Oreochromis niloticus after 48 hrs exposure and after 24 hrs for Gambusia affinis. C. micrantha was non-toxic to the fish. The survival rate of O. niloticus and G. affinis after 48 hrs exposure to 0.11 ppm of Niclosamide were 83.3% & 100% respectively. These rates were 91.7% & 93.3% respectively when each of the two fish species was exposed to 0.42 ppm of CuSO4. Mixture of sub-lethal concentrations of A. arvensis against B. alexandrina and those of Niclosamide or CuSO4 at ratios 10:40 & 25:25 induced 66.6% mortalities on O. niloticus and 83.3% at 40:10. These mixtures caused 100% mortalities on G. affinis at all ratios. A. arvensis CuSO4 mixtures at 10:40 induced 83.3% & 40% mortalities on O. niloticus and G. affinis respectively and 100% mortalities on both fish species at ratios 25:25 & 40:10. A mixture of sub-lethal concentrations of C. micrantha against B. alexandrina and of Niclosamide or CuSO4 caused mortalities of O. niloticus between 0.0 & 33.3% and between 5% & 35% of G. affinis. The residue of Cu in O. niloticus were 4.69, 19.06 & 25.37 mg/1kgm fish after 24, 48 & 72 hrs exposure to LC0 of CuSO4 against B. alexandrina respectively.", "title": "" }, { "docid": "036c7fc778fdacfd2595c65dc7af9f78", "text": "A new transformerless buck–boost converterwith simple structure is proposed in this study. Comparedwith the traditional buck–boost converter, the proposedbuck–boost converter’s voltage gain is squared times of theformer’s and its output voltage polarity is positive. Theseadvantages enable it to work in a wider range of positiveoutput. The two power switches of the proposed buck–boost converter operate synchronously. In the continuousconduction mode (CCM), two inductors are magnetized andtwo capacitors are discharged during the switch-on period,while two inductors are demagnetized and two capacitorsare charged during the switch-off period. The power electronicssimulator (PSIM) and the circuit experiments are providedto validate the effectiveness of the proposed buck–boostconverter.", "title": "" }, { "docid": "85cdebb26246db1d5a9e6094b0a0c2e6", "text": "The fast simulation of large networks of spiking neurons is a major task for the examination of biology-inspired vision systems. Networks of this type label features by synchronization of spikes and there is strong demand to simulate these e,ects in real world environments. As the calculations for one model neuron are complex, the digital simulation of large networks is not e>cient using existing simulation systems. Consequently, it is necessary to develop special simulation techniques. 
This article introduces a wide range of concepts for the di,erent parts of digital simulator systems for large vision networks and presents accelerators based on these foundations. c © 2002 Elsevier Science B.V. All rights", "title": "" }, { "docid": "b57cbb1f6eeb34946df47f2be390aaf8", "text": "The automatic detection of software vulnerabilities is an important research problem. However, existing solutions to this problem rely on human experts to define features and often miss many vulnerabilities (i.e., incurring high false negative rate). In this paper, we initiate the study of using deep learning-based vulnerability detection to relieve human experts from the tedious and subjective task of manually defining features. Since deep learning is motivated to deal with problems that are very different from the problem of vulnerability detection, we need some guiding principles for applying deep learning to vulnerability detection. In particular, we need to find representations of software programs that are suitable for deep learning. For this purpose, we propose using code gadgets to represent programs and then transform them into vectors, where a code gadget is a number of (not necessarily consecutive) lines of code that are semantically related to each other. This leads to the design and implementation of a deep learning-based vulnerability detection system, called Vulnerability Deep Pecker (VulDeePecker). In order to evaluate VulDeePecker, we present the first vulnerability dataset for deep learning approaches. Experimental results show that VulDeePecker can achieve much fewer false negatives (with reasonable false positives) than other approaches. We further apply VulDeePecker to 3 software products (namely Xen, Seamonkey, and Libav) and detect 4 vulnerabilities, which are not reported in the National Vulnerability Database but were “silently” patched by the vendors when releasing later versions of these products; in contrast, these vulnerabilities are almost entirely missed by the other vulnerability detection systems we experimented with.", "title": "" }, { "docid": "a2575a6a0516db2e47aab0388c5e9677", "text": "Isaac Miller and Mark Campbell Sibley School of Mechanical and Aerospace Engineering Dan Huttenlocher and Frank-Robert Kline Computer Science Department Aaron Nathan, Sergei Lupashin, and Jason Catlin School of Electrical and Computer Engineering Brian Schimpf School of Operations Research and Information Engineering Pete Moran, Noah Zych, Ephrahim Garcia, Mike Kurdziel, and Hikaru Fujishima Sibley School of Mechanical and Aerospace Engineering Cornell University Ithaca, New York 14853 e-mail: itm2@cornell.edu, mc288@cornell.edu, dph@cs.cornell.edu, amn32@cornell.edu, fk36@cornell.edu, pfm24@cornell.edu, ncz2@cornell.edu, bws22@cornell.edu, sv15@cornell.edu, eg84@cornell.edu, jac267@cornell.edu, msk244@cornell.edu, hf86@cornell.edu", "title": "" }, { "docid": "c49ffcb45cc0a7377d9cbdcf6dc07057", "text": "Dermoscopy is an in vivo method for the early diagnosis of malignant melanoma and the differential diagnosis of pigmented lesions of the skin. It has been shown to increase diagnostic accuracy over clinical visual inspection in the hands of experienced physicians. This article is a review of the principles of dermoscopy as well as recent technological developments.", "title": "" }, { "docid": "8b5066ed70312e40a1c5a50a94d32de9", "text": "Assessment plays an important role in the process of learning. Multiple choice questions (MCQs) are suitable candidates to fulfill this role. 
We present an approach for automatically generating MCQs based on the content presented in slides. It extracts named entities from slides, and queries a knowledge base to create different varieties of MCQs and appropriate answer options. Users can choose between different levels of difficulty for the generated questions and answer options. The approach can be easily extended to generate other varieties of MCQs. Results from a user study confirm the applicability and appropriateness of the approach.", "title": "" }, { "docid": "eeed4c3f13f50f269bcfd51d2157f5a6", "text": "DRAM energy is an important component to optimize in modern computing systems. One outstanding source of DRAM energy is the energy to fetch data stored on cells to the row buffer, which occurs during two DRAM operations, row activate and refresh. This work exploits previously proposed half page row access, modifying the wordline connections within a bank to halve the number of cells fetched to the row buffer, to save energy in both cases. To accomplish this, we first change the data wire connections in the sub-array to reduce the cost of row buffer overfetch in multi-core systems which yields a 12% energy savings on average and a slight performance improvement in quad-core systems. We also propose charge recycling refresh, which reuses charge left over from a prior half page refresh to refresh another half page. Our charge recycling scheme is capable of reducing both auto- and self-refresh energy, saving more than 15% of refresh energy at 85°C, and provides even shorter refresh cycle time. Finally, we propose a refresh scheduling scheme that can dynamically adjust the number of charge recycled half pages, which can save up to 30% of refresh energy at 85°C.", "title": "" }, { "docid": "16e1174454d62c69d831effce532bcad", "text": "We report on the quantitative determination of acetaminophen (paracetamol; NAPAP-d(0)) in human plasma and urine by GC-MS and GC-MS/MS in the electron-capture negative-ion chemical ionization (ECNICI) mode after derivatization with pentafluorobenzyl (PFB) bromide (PFB-Br). Commercially available tetradeuterated acetaminophen (NAPAP-d(4)) was used as the internal standard. NAPAP-d(0) and NAPAP-d(4) were extracted from 100-μL aliquots of plasma and urine with 300 μL ethyl acetate (EA) by vortexing (60s). After centrifugation the EA phase was collected, the solvent was removed under a stream of nitrogen gas, and the residue was reconstituted in acetonitrile (MeCN, 100 μL). PFB-Br (10 μL, 30 vol% in MeCN) and N,N-diisopropylethylamine (10 μL) were added and the mixture was incubated for 60 min at 30 °C. Then, solvents and reagents were removed under nitrogen and the residue was taken up with 1000 μL of toluene, from which 1-μL aliquots were injected in the splitless mode. GC-MS quantification was performed by selected-ion monitoring ions due to [M-PFB](-) and [M-PFB-H](-), m/z 150 and m/z 149 for NAPAP-d(0) and m/z 154 and m/z 153 for NAPAP-d(4), respectively. GC-MS/MS quantification was performed by selected-reaction monitoring the transition m/z 150 → m/z 107 and m/z 149 → m/z 134 for NAPAP-d(0) and m/z 154 → m/z 111 and m/z 153 → m/z 138 for NAPAP-d(4). The method was validated for human plasma (range, 0-130 μM NAPAP-d(0)) and urine (range, 0-1300 μM NAPAP-d(0)). Accuracy (recovery, %) ranged between 89 and 119%, and imprecision (RSD, %) was below 19% in these matrices and ranges. A close correlation (r>0.999) was found between the concentrations measured by GC-MS and GC-MS/MS. 
By this method, acetaminophen can be reliably quantified in small plasma and urine sample volumes (e.g., 10 μL). The analytical performance of the method makes it especially useful in pediatrics.", "title": "" }, { "docid": "1be6aecdc3200ed70ede2d5e96cb43be", "text": "In this paper we are exploring different models and methods for improving the performance of text independent speaker identification system for mobile devices. The major issues in speaker recognition for mobile devices are (i) presence of varying background environment, (ii) effect of speech coding introduced by the mobile device, and (iii) impairments due to wireless channel. In this paper, we are proposing multi-SNR multi-environment speaker models and speech enhancement (preprocessing) methods for improving the performance of speaker recognition system in mobile environment. For this study, we have simulated five different background environments (Car, Factory, High frequency, pink noise and white Gaussian noise) using NOISEX data. Speaker recognition studies are carried out on TIMIT, cellular, and microphone speech databases. Autoassociative neural network models are explored for developing these multi-SNR multi-environment speaker models. The results indicate that the proposed multi-SNR multi-environment speaker models and speech enhancement preprocessing methods have enhanced the speaker recognition performance in the presence of different noisy environments.", "title": "" }, { "docid": "4d5e0240108cab2a2391314c3c8a2150", "text": "The impending global energy crisis has opened up new opportunities for the automotive industry to meet the ever-increasing demand for cleaner and fuel-efficient vehicles. This has necessitated the development of drivetrains that are either fully or partially electrified in the form of electric and plug-in hybrid electric vehicles (EVs and HEVs), respectively, which are collectively addressed as plug-in EVs (PEVs). PEVs in general are equipped with larger on-board storage and power electronics for charging or discharging the battery, in comparison with HEVs. The extent to which PEVs are adopted significantly depends on the nature of the charging solution utilized. In this paper, a comprehensive topological survey of the currently available PEV charging solutions is presented. PEV chargers based on the nature of charging (conductive or inductive), stages of conversion (integrated single stage or two stages), power level (level 1, 2, or 3), and type of semiconductor devices utilized (silicon, silicon carbide, or gallium nitride) are thoroughly reviewed in this paper.", "title": "" }, { "docid": "caaca962473382e40a08f90240cc88b6", "text": "Lysergic acid diethylamide (LSD) was synthesized in 1938 and its psychoactive effects discovered in 1943. It was used during the 1950s and 1960s as an experimental drug in psychiatric research for producing so-called \"experimental psychosis\" by altering neurotransmitter system and in psychotherapeutic procedures (\"psycholytic\" and \"psychedelic\" therapy). From the mid 1960s, it became an illegal drug of abuse with widespread use that continues today. With the entry of new methods of research and better study oversight, scientific interest in LSD has resumed for brain research and experimental treatments. Due to the lack of any comprehensive review since the 1950s and the widely dispersed experimental literature, the present review focuses on all aspects of the pharmacology and psychopharmacology of LSD. 
A thorough search of the experimental literature regarding the pharmacology of LSD was performed and the extracted results are given in this review. (Psycho-) pharmacological research on LSD was extensive and produced nearly 10,000 scientific papers. The pharmacology of LSD is complex and its mechanisms of action are still not completely understood. LSD is physiologically well tolerated and psychological reactions can be controlled in a medically supervised setting, but complications may easily result from uncontrolled use by layman. Actually there is new interest in LSD as an experimental tool for elucidating neural mechanisms of (states of) consciousness and there are recently discovered treatment options with LSD in cluster headache and with the terminally ill.", "title": "" }, { "docid": "2b09fad4433a2046902cdf17cb865753", "text": "1 Lawrence CM, Lonsdale Eccles AA. Selective sweat gland removal with minimal skin excision in the treatment of axillary hyperhidrosis: a retrospective clinical and histological review of 15 patients. Br J Dermatol 2006; 155:115–18. 2 Bechara FG, Sand M, Sand D et al. Surgical treatment of axillary hyperhidrosis: a study comparing liposuction cannulas with a suction-curettage cannula. Ann Plast Surg 2006; 56:654–7. 3 Perng CK, Yeh FL, Ma H et al. Is the treatment of axillary osmidrosis with liposuction better than open surgery? Plast Reconstr Surg 2004; 114:93–7. 4 Bechara FG, Sand M, Sand D et al. Postoperative situation after axillary suction-curettage: an endoscopical view. J Plast Reconstr Aesthet Surg 2006; 59:304–6.", "title": "" }, { "docid": "c9224a342b5cef0838279f0b659ce137", "text": "DIGITAL transformation is a major challenge for many organizations. IT managers in particular not only wonder what the next digital trends in their industry will be, they also need to understand how today's IT organizations will change in light of digital transformation. I will first discuss some foundations of digital transformation and will then present 10 theses on how digital transformation will influence corporate IT.", "title": "" }, { "docid": "5876bb91b0cbe851b8af677c93c5e708", "text": "This paper proposes an effective end-to-end face detection and recognition framework based on deep convolutional neural networks for home service robots. We combine the state-of-the-art region proposal based deep detection network with the deep face embedding network into an end-to-end system, so that the detection and recognition networks can share the same deep convolutional layers, enabling significant reduction of computation through sharing convolutional features. The detection network is robust to large occlusion, and scale, pose, and lighting variations. The recognition network does not require explicit face alignment, which enables an effective training strategy to generate a unified network. A practical robot system is also developed based on the proposed framework, where the system automatically asks for a minimum level of human supervision when needed, and no complicated region-level face annotation is required. Experiments are conducted over WIDER and LFW benchmarks, as well as a personalized dataset collected from an office setting, which demonstrate state-of-the-art performance of our system.", "title": "" }, { "docid": "98b78340925729e580f888f9ab2d8453", "text": "This paper describes the Jensen-Shannon divergence (JSD) and Hilbert space embedding. 
With natural definitions making these considerations precise, one finds that the general Jensen-Shannon divergence related to the mixture is the minimum redundancy, which can be achieved by the observer. The set of distributions with the metric /spl radic/JSD can even be embedded isometrically into Hilbert space and the embedding can be identified.", "title": "" }, { "docid": "d96114c33e4eb4dedc245f8962a9d8ce", "text": "As a backbone of the Semantic Web, Ontologies provide a shared understanding of a domain of text. Ontologies, with their appearance, usage, and classification address for concrete ontology language which is important for the Semantic Web. They can be used to support a great variety of tasks in different domains such as knowledge representation, natural language processing, information retrieval, information exchange, collaborative systems, databases, knowledge management, database integration, digital libraries, information retrieval, or multi agent systems. Thus a fast and efficient ontology development is a requirement for the success of many knowledge based systems and for the Semantic Web itself. This paper provides discussion on existing ontology tools and methodologies and the state of the art of the field.", "title": "" }, { "docid": "0c28d531fd97c01ee0b73d4ad2633aaa", "text": "We consider problems of sequential robot manipulation (aka. combined task and motion planning) where the objective is primarily given in terms of a cost function over the final geometric state, rather than a symbolic goal description. In this case we should leverage optimization methods to inform search over potential action sequences. We propose to formulate the problem holistically as a 1storder logic extension of a mathematical program: a non-linear constrained program over the full world trajectory where the symbolic state-action sequence defines the (in-)equality constraints. We tackle the challenge of solving such programs by proposing three levels of approximation: The coarsest level introduces the concept of the effective end state kinematics, parametrically describing all possible end state configurations conditional to a given symbolic action sequence. Optimization on this level is fast and can inform symbolic search. The other two levels optimize over interaction keyframes and eventually over the full world trajectory across interactions. We demonstrate the approach on a problem of maximizing the height of a physically stable construction from an assortment of boards, cylinders and blocks.", "title": "" }, { "docid": "87dcbe160478f2eb2e12849f0ee2833a", "text": "Healing of hard and soft tissue is mediated by a complex array of intracellular and extracellular events that are regulated by signaling proteins, a process that is, at present, incompletely understood. What is certain, however, is that platelets play a prominent if not deciding role. Controlled animal studies of soft and hard tissues have suggested that the application of autogenous platelet-rich plasma can enhance wound healing. The clinical use of platelet-rich plasma for a wide variety of applications has been reported; however, many reports are anecdotal and few include controls to definitively determine the role of platelet-rich plasma. 
The authors describe platelet biology and its role in wound healing; the preparation, characterization, and use of platelet-rich plasma; and those applications in plastic surgery for which it may be useful.", "title": "" }, { "docid": "da61b8bd6c1951b109399629f47dad16", "text": "In this paper, we introduce an approach for distributed nonlinear control of multiple hovercraft-type underactuated vehicles with bounded and unidirectional inputs. First, a bounded nonlinear controller is given for stabilization and tracking of a single vehicle, using a cascade backstepping method. Then, this controller is combined with a distributed gradient-based control for multi-vehicle formation stabilization using formation potential functions previously constructed. The vehicles are used in the Caltech Multi-Vehicle Wireless Testbed (MVWT). We provide simulation and experimental results for stabilization and tracking of a single vehicle, and a simulation of stabilization of a six-vehicle formation, demonstrating that in all cases the control bounds and the control objective are satisfied.", "title": "" } ]
scidocsrr
89092911b2d3b10e5ba2e2db84a5491c
DLAU: A Scalable Deep Learning Accelerator Unit on FPGA
[ { "docid": "9497731525a996844714d5bdbca6ae03", "text": "Recently, machine learning is widely used in applications and cloud services. And as the emerging field of machine learning, deep learning shows excellent ability in solving complex learning problems. To give users better experience, high performance implementations of deep learning applications seem very important. As a common means to accelerate algorithms, FPGA has high performance, low power consumption, small size and other characteristics. So we use FPGA to design a deep learning accelerator, the accelerator focuses on the implementation of the prediction process, data access optimization and pipeline structure. Compared with Core 2 CPU 2.3GHz, our accelerator can achieve promising result.", "title": "" } ]
[ { "docid": "2f07d575b5cd1f780b9574afe7173a5f", "text": "This paper emphasizes total self-development of the individual for improved motivation and organization management. It builds on Maslow’s hierarchy of needs theory to examine motivational levels for four levels of engineering staff at a public construction agency. The researchers studied these engineering groups qualitatively through interviews and quantitatively using a questionnaire. Using a holistic approach, this study focused on 15 parameters from Maslow’s five essential needs—the physical, safety, social, esteem, and selfactualization levels. Considerable emphasis was placed on the development of Maslow’s principle of ‘‘self-actualization.’’ This difficult-to-grasp concept, as Maslow reported, is a prerequisite for enlightened management. The researchers analyzed engineers’ perceptions regarding the fulfillment of need parameters and measured their perception of the importance of those parameters. Among specific findings were that junior project engineers had higher scores on self-actualization than senior engineers. Findings showed that changes are desirable for satisfying meaningfulness of tasks, increasing self-sufficiency in doing the job, and improving individuality and sense of mission of employees. The application of the findings based on Maslow’s model directly benefits the agency studied and other organizations in development programs. It also helps in increasing morale.", "title": "" }, { "docid": "71fd5cd95c13df2f80a3659d1e010b14", "text": "Large-scale graph processing is gaining increasing attentions in many domains. Meanwhile, FPGA provides a power-efficient and highly parallel platform for many applications, and has been applied to custom computing in many domains. In this paper, we describe FPGP (FPGA Graph Processing), a streamlined vertex-centric graph processing framework on FPGA, based on the interval-shard structure. FPGP is adaptable to different graph algorithms and users do not need to change the whole implementation on the FPGA. In our implementation, an on-chip parallel graph processor is proposed to both maximize the off-chip bandwidth of graph data and fully utilize the parallelism of graph processing. Meanwhile, we analyze the performance of FPGP and show the scalability of FPGP when the bandwidth of data path increases. FPGP is more power-efficient than single machine systems and scalable to larger graphs compared with other FPGA-based graph systems.", "title": "" }, { "docid": "75a15ef2ce8dd6b4c58a36b9fd352d18", "text": "Business growth and technology advancements have resulted in growing amounts of enterprise data. To gain valuable business insight and competitive advantage, businesses demand the capability of performing real-time analytics on such data. This, however, involves expensive query operations that are very time consuming on traditional CPUs. Additionally, in traditional database management systems (DBMS), the CPU resources are dedicated to mission-critical transactional workloads. Offloading expensive analytics query operations to a co-processor can allow efficient execution of analytics workloads in parallel with transactional workloads.\n In this paper, we present a Field Programmable Gate Array (FPGA) based acceleration engine for database operations in analytics queries. The proposed solution provides a mechanism for a DBMS to seamlessly harness the FPGA compute power without requiring any changes in the application or the existing data layout. 
Using a software-programmed query control block, the accelerator can be tailored to execute different queries without reconfiguration. Our prototype is implemented in a PCIe-attached FPGA system and is integrated into a commercial DBMS platform. The results demonstrate up to 94% CPU savings on real customer data compared to the baseline software cost with up to an order of magnitude speedup in the offloaded computations and up to 6.2x improvement in end-to-end performance.", "title": "" }, { "docid": "068fb08facd6172de2586d19fe3f68f4", "text": "The problem of automatically classifying the gender of a blog author has important applications in many commercial domains. Existing systems mainly use features such as words, word classes, and POS (part-ofspeech) n-grams, for classification learning. In this paper, we propose two new techniques to improve the current result. The first technique introduces a new class of features which are variable length POS sequence patterns mined from the training data using a sequence pattern mining algorithm. The second technique is a new feature selection method which is based on an ensemble of several feature selection criteria and approaches. Empirical evaluation using a real-life blog data set shows that these two techniques improve the classification accuracy of the current state-ofthe-art methods significantly.", "title": "" }, { "docid": "d07da03cde15fe7276f857832ae637af", "text": "In recent years there is a growing interest in the study of sparse representation for signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by sparse linear combinations of these atoms. Recent activity in this field concentrated mainly on the study of pursuit algorithms that decompose signals with respect to a given dictionary. In this paper we propose a novel algorithm – the K-SVD algorithm – generalizing the K-Means clustering process, for adapting dictionaries in order to achieve sparse signal representations. We analyze this algorithm and demonstrate its results on both synthetic tests and in applications on real data.", "title": "" }, { "docid": "dca74df16e3a90726d51b3222483ac94", "text": "We are concerned with the issue of detecting outliers and change points from time series. In the area of data mining, there have been increased interest in these issues since outlier detection is related to fraud detection, rare event discovery, etc., while change-point detection is related to event/trend change detection, activity monitoring, etc. Although, in most previous work, outlier detection and change point detection have not been related explicitly, this paper presents a unifying framework for dealing with both of them. In this framework, a probabilistic model of time series is incrementally learned using an online discounting learning algorithm, which can track a drifting data source adaptively by forgetting out-of-date statistics gradually. A score for any given data is calculated in terms of its deviation from the learned model, with a higher score indicating a high possibility of being an outlier. By taking an average of the scores over a window of a fixed length and sliding the window, we may obtain a new time series consisting of moving-averaged scores. Change point detection is then reduced to the issue of detecting outliers in that time series. 
We compare the performance of our framework with those of conventional methods to demonstrate its validity through simulation and experimental applications to incidents detection in network security.", "title": "" }, { "docid": "eb0168c889f713683da3a1b3a07acc57", "text": "The belief that personality is fixed (an entity theory of personality) can give rise to negative reactions to social adversities. Three studies showed that when social adversity is common-at the transition to high school--an entity theory can affect overall stress, health, and achievement. Study 1 showed that an entity theory of personality, measured during the 1st month of 9th grade, predicted more negative immediate reactions to social adversity and, at the end of the year, greater stress, poorer health, and lower grades in school. Studies 2 and 3, both experiments, tested a brief intervention that taught a malleable (incremental) theory of personality--the belief that people can change. The incremental theory group showed less negative reactions to an immediate experience of social adversity and, 8 months later, reported lower overall stress and physical illness. They also achieved better academic performance over the year. Discussion centers on the power of targeted psychological interventions to effect far-reaching and long-term change by shifting interpretations of recurring adversities during developmental transitions.", "title": "" }, { "docid": "3ee6ad4099e8fe99042472207e6dac09", "text": "The millimeter-wave (mmWave) band offers the potential for high-bandwidth communication channels in cellular networks. It is not clear, however, whether both high data rates and coverage in terms of signal-to-noise-plus-interference ratio can be achieved in interference-limited mmWave cellular networks due to the differences in propagation conditions and antenna topologies. This article shows that dense mmWave networks can achieve both higher data rates and comparable coverage relative to conventional microwave networks. Sum rate gains can be achieved using more advanced beamforming techniques that allow multiuser transmission. The insights are derived using a new theoretical network model that incorporates key characteristics of mmWave networks.", "title": "" }, { "docid": "01659bb903dcffc36500c349cb7dbf88", "text": "To try to decrease the preference of the attribute values for information gain and information gain ratio, in the paper, the authors puts forward a improved algorithm of C4.5 decision tree on the selection classification attribute. The basic thought of the algorithm is as follows: Firstly, computing the information gain of selection classification attribute, and then get an attribute of the information gain which is higher than the average level; Secondly, computing separately the arithmetic average value of the information gain ratio and information gain of the attribute, and then select the biggest attribute of the average value and set up a branch decision; Finally, to use recursive method to build a decision tree. The experiment shows that this method is applicable and effective.", "title": "" }, { "docid": "e55b84112fdb179faa8affbf9fed8c72", "text": "A polynomial threshold function (PTF) of degree <i>d</i> is a boolean function of the form <i>f</i>=<i>sgn</i>(<i>p</i>), where <i>p</i> is a degree-<i>d</i> polynomial, and <i>sgn</i> is the sign function. 
The main result of the paper is an almost optimal bound on the probability that a random restriction of a PTF is not close to a constant function, where a boolean function g is called δ-close to constant if, for some v∈{1,−1}, we have g(x)=v for all but at most a δ fraction of inputs. We show for every PTF f of degree d ≥ 1, and parameters 0 < δ, r ≤ 1/16, that Pr_{ρ∼R_r}[f_ρ is not δ-close to constant] ≤ √r · (log r^{−1} · log δ^{−1})^{O(d^2)}, where ρ∼R_r is a random restriction leaving each variable, independently, free with probability r, and otherwise assigning it 1 or −1 uniformly at random. In fact, we show a more general result for random block restrictions: given an arbitrary partitioning of input variables into m blocks, a random block restriction picks a uniformly random block ℓ∈[m] and assigns 1 or −1, uniformly at random, to all variables outside the chosen block ℓ. We prove the Block Restriction Lemma saying that a PTF f of degree d becomes δ-close to constant when hit with a random block restriction, except with probability at most m^{−1/2} · (log m · log δ^{−1})^{O(d^2)}. As an application of our Restriction Lemma, we prove lower bounds against constant-depth circuits with PTF gates of any degree 1 ≤ d ≪ √(log n / loglog n), generalizing the recent bounds against constant-depth circuits with linear threshold gates (LTF gates) proved by Kane and Williams (STOC, 2016) and Chen, Santhanam, and Srinivasan (CCC, 2016). In particular, we show that there is an n-variate boolean function F_n ∈ P such that every depth-2 circuit with PTF gates of degree d ≥ 1 that computes F_n must have at least (n^{3/2+1/d}) · (log n)^{−O(d^2)} wires. For constant depths greater than 2, we also show average-case lower bounds for such circuits with a super-linear number of wires. These are the first super-linear bounds on the number of wires for circuits with PTF gates. We also give short proofs of the optimal-exponent average sensitivity bound for degree-d PTFs due to Kane (Computational Complexity, 2014), and the Littlewood-Offord type anticoncentration bound for degree-d multilinear polynomials due to Meka, Nguyen, and Vu (Theory of Computing, 2016). Finally, we give derandomized versions of our Block Restriction Lemma and Littlewood-Offord type anticoncentration bounds, using a pseudorandom generator for PTFs due to Meka and Zuckerman (SICOMP, 2013).", "title": "" }, { "docid": "c938996e79711cae64bdcc23d7e3944b", "text": "Decreased antimicrobial efficiency has become a global public health issue. The paucity of new antibacterial drugs is evident, and the arsenal against infectious diseases needs to be improved urgently. 
The selection of plants as a source of prototype compounds is appropriate, since plant species naturally produce a wide range of secondary metabolites that act as a chemical line of defense against microorganisms in the environment. Although traditional approaches to combat microbial infections remain effective, targeting microbial virulence rather than survival seems to be an exciting strategy, since the modulation of virulence factors might lead to a milder evolutionary pressure for the development of resistance. Additionally, anti-infective chemotherapies may be successfully achieved by combining antivirulence and conventional antimicrobials, extending the lifespan of these drugs. This review presents an updated discussion of natural compounds isolated from plants with chemically characterized structures and activity against the major bacterial virulence factors: quorum sensing, bacterial biofilms, bacterial motility, bacterial toxins, bacterial pigments, bacterial enzymes, and bacterial surfactants. Moreover, a critical analysis of the most promising virulence factors is presented, highlighting their potential as targets to attenuate bacterial virulence. The ongoing progress in the field of antivirulence therapy may therefore help to translate this promising concept into real intervention strategies in clinical areas.", "title": "" }, { "docid": "c84d5386390d3988d40076289e43a0d0", "text": "Business analytics systems are seen by many to be a growing source of value and competitive advantage for businesses. However, it is not clear if increasingly advanced analytical capabilities create opportunities for radical change in business or just represent an incremental improvement to existing systems. What are the key questions that researchers should be focusing on to improve our understanding of analytics? And are IS programs teaching students the right things to be successful in this environment? This panel aims to take stock of technological possibilities, practical experience and leading research to assess the current state and future direction of business analytics. In doing so, it will bring together senior researchers and industry representatives to share the leading challenges, opportunities and good practice that they see.", "title": "" }, { "docid": "04332930bc15dad8f4bf8efef0a53bc2", "text": "Although several measurements and analyses support the idea that the brain is energy-optimized, there is one disturbing, contradictory observation: In theory, computation limited by thermal noise can occur as cheaply as ~$2.9\\cdot 10^{-21}$ joules per bit (kTln2). Unfortunately, for a neuron the ostensible discrepancy from this minimum is startling - ignoring inhibition the discrepancy is $10^7$ times this amount and taking inhibition into account $>10^9$. Here we point out that what has been defined as neural computation is actually a combination of computation and neural communication: the communication costs, transmission from each excitatory postsynaptic activation to the S4-gating-charges of the fast Na+ channels of the initial segment (fNa's), dominate the joule-costs. Making this distinction between communication to the initial segment and computation at the initial segment (i.e., adding up of the activated fNa's) implies that the size of the average synaptic event reaching the fNa's is the size of the standard deviation of the thermal noise. 
Moreover, defining computation as the addition of activated fNa's, yields a biophysically plausible mechanism for approaching the desired minimum. This mechanism, requiring something like the electrical engineer's equalizer (not much more than the action potential generating conductances), only operates at threshold. This active filter modifies the last few synaptic excitations, providing barely enough energy to allow the last sub-threshold gating charge to transport. That is, the last, threshold-achieving S4-subunit activation requires an energy that matches the information being provided by the last few synaptic events, a ratio that is near kTln2 joules per bit.", "title": "" }, { "docid": "4d322609543deba6bea073652b6ff932", "text": "Development of accurate system models of immunity test setups might be extremely time consuming or even impossible. Here a new generalized approach to develop accurate component-based models of different system-level EMC test setups is proposed on the example of a BCI test setup. An equivalent circuit modelling of the components in LF range is combined with measurement-based macromodelling in HF range. The developed models show high accuracy up to 1 GHz. The issues of floating PCB configurations and incorporation of low frequency behaviour could be solved. Both frequency and time-domain simulations are possible. Arbitrary system configurations can be assembled quickly using the proposed component models. Any kind of system simulation like parametric variation and worst-case analysis can be performed with high accuracy.", "title": "" }, { "docid": "05227ab021e31353700c82eb2a3375bd", "text": "Human Computer Interaction is one of the pervasive application areas of computer science to develop with multimodal interaction for information sharings. The conversation agent acts as the major core area for developing interfaces between a system and user with applied AI for proper responses. In this paper, the interactive system plays a vital role in improving knowledge in the domain of health through the intelligent interface between machine and human with text and speech. The primary aim is to enrich the knowledge and help the user in the domain of health using conversation agent to offer immediate response with human companion feel.", "title": "" }, { "docid": "7908cc9a1cd6e6f48258a300db37d4a5", "text": "This report describes the algorithms implemented in a Matlab toolbox for change detection and data segmentation. Functions are provided for simulating changes, choosing design parameters and detecting abrupt changes in signals.", "title": "" }, { "docid": "c576c08aa746ea30a528e104932047a6", "text": "Despite tremendous progress achieved in temporal action localization, state-of-the-art methods still struggle to train accurate models when annotated data is scarce. In this paper, we introduce a novel active learning framework for temporal localization that aims to mitigate this data dependency issue. We equip our framework with active selection functions that can reuse knowledge from previously annotated datasets. We study the performance of two state-of-the-art active selection functions as well as two widely used active learning baselines. To validate the effectiveness of each one of these selection functions, we conduct simulated experiments on ActivityNet. We find that using previously acquired knowledge as a bootstrapping source is crucial for active learners aiming to localize actions. 
When equipped with the right selection function, our proposed framework exhibits significantly better performance than standard active learning strategies, such as uncertainty sampling. Finally, we employ our framework to augment the newly compiled Kinetics action dataset with ground-truth temporal annotations. As a result, we collect Kinetics-Localization, a novel large-scale dataset for temporal action localization, which contains more than 15K YouTube videos.", "title": "" }, { "docid": "1defbf845efc29a5a9bc780e17d11a92", "text": "The Resource Description Framework (RDF) is a graphbased data model promoted by the W3C as the standard for Semantic Web applications. Its associated query language is SPARQL. RDF graphs are often large and varied, produced in a variety of contexts, e.g., scientific applications, social or online media, government data etc. They are heterogeneous, i.e., resources described in an RDF graph may have very different sets of properties. An RDF resource may have: no types, one or several types (which may or may not be related to each other). RDF Schema (RDFS) information may optionally be attached to an RDF graph, to enhance the description of its resources. Such statements also entail that in an RDF graph, some data is implicit. According to the W3C RDF and SPARQL specification, the semantics of an RDF graph comprises both its explicit and implicit data; in particular, SPARQL query answers must be computed reflecting both the explicit and implicit data. These features make RDF graphs complex, both structurally and conceptually. It is intrinsically hard to get familiar with a new RDF dataset, especially if an RDF schema is sparse or not available at all. In this work, we study the problem of RDF summarization, that is: given an input RDF graph G, find an RDF graph SG which summarizes G as accurately as possible, while being possibly orders of magnitude smaller than the original graph. Such a summary can be used in a variety of contexts: to help an RDF application designer get acquainted with a new dataset, as a first-level user interface, or as a support for query optimization as typically used in semistructured graph data management [4] etc. Our approach is query-oriented, i.e., a summary should enable static analysis and help formulating and optimizing queries; for instance, querying a summary of a graph should reflect whether the query has some answers against this graph, or finding a simpler way to formulate the query etc. Ours is the first semi-structured data summarization approach focused on partially explicit, partially implicit RDF graphs. In the sequel, Section 2 recalls RDF basics, and sets the", "title": "" }, { "docid": "6acd62836639178293b876ae5f6d7397", "text": "This paper presents a procedure in building a Thai part-of-speech (POS) tagged corpus named ORCHID. It is a collaboration project between Communications Research Laboratory (CRL) of Japan and National Electronics and Computer Technology Center (NECTEC) of Thailand. We proposed a new tagset based on the previous research on Thai parts-of-speech for using in a multi -lingual machine translation project. We marked the corpus in three levels:paragraph, sentence and word. The corpus keeps text information in text information line and numbering line, which are necessary in retrieving process. Since there are no explicit word/sentence boundary, punctuation and inflection in Thai text, we have to separate a paragraph into sentences before tagging the POS. 
We applied a probabilistic trigram model for simultaneously word segmenting and POS tagging. Rule for syllable construction is additionally used to reduce the number of candidates for computing the probability. The problems in POS assignment are formalized to reduce the ambiguity occurring in case of the similar POSs.", "title": "" }, { "docid": "cfce53c88e07b9cd837c3182a24d9901", "text": "The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.", "title": "" } ]
scidocsrr
8de20c10b78dab205f3c5494a3e8f529
Cross-Modal Deep Variational Hand Pose Estimation
[ { "docid": "ee9bccbfecd58151569449911c624221", "text": "Hand motion capture is a popular research field, recently gaining more attention due to the ubiquity of RGB-D sensors. However, even most recent approaches focus on the case of a single isolated hand. In this work, we focus on hands that interact with other hands or objects and present a framework that successfully captures motion in such interaction scenarios for both rigid and articulated objects. Our framework combines a generative model with discriminatively trained salient points to achieve a low tracking error and with collision detection and physics simulation to achieve physically plausible estimates even in case of occlusions and missing visual data. Since all components are unified in a single objective function which is almost everywhere differentiable, it can be optimized with standard optimization techniques. Our approach works for monocular RGB-D sequences as well as setups with multiple synchronized RGB cameras. For a qualitative and quantitative evaluation, we captured 29 sequences with a large variety of interactions and up to 150 degrees of freedom.", "title": "" } ]
[ { "docid": "39070a1f503e60b8709050fc2a250378", "text": "Plants in their natural habitats adapt to drought stress in the environment through a variety of mechanisms, ranging from transient responses to low soil moisture to major survival mechanisms of escape by early flowering in absence of seasonal rainfall. However, crop plants selected by humans to yield products such as grain, vegetable, or fruit in favorable environments with high inputs of water and fertilizer are expected to yield an economic product in response to inputs. Crop plants selected for their economic yield need to survive drought stress through mechanisms that maintain crop yield. Studies on model plants for their survival under stress do not, therefore, always translate to yield of crop plants under stress, and different aspects of drought stress response need to be emphasized. The crop plant model rice ( Oryza sativa) is used here as an example to highlight mechanisms and genes for adaptation of crop plants to drought stress.", "title": "" }, { "docid": "04346e5f63684a071beeb08935c033a7", "text": "Irrigation is one of the most powerful sources in India but it is hard for an individual person to monitor continuously and regularly. This is due to laziness of mankind. In order to make this irrigation easier our system comprises some changes in the usual irrigation system. The newly developed project controls water supply automatically in water crisis areas through moisture sensor. This paper covers the application of Sensor based Irrigation system through wireless sensor networks, which uses a renewable energy as a source. In this system Wireless Sensor Networks Plays a major role in Environment monitoring system and provides unmanned irrigation. WSN consists of moisture sensors, Energy harvesting systems, embedded controllers and uses Super capacitors as storage device.", "title": "" }, { "docid": "d5ed3d05cedbee5ef3c06b152d0a19ae", "text": "The ability to ask questions is a powerful tool to gather information in order to learn about the world and resolve ambiguities. In this paper, we explore a novel problem of generating discriminative questions to help disambiguate visual instances. Our work can be seen as a complement and new extension to the rich research studies on image captioning and question answering. We introduce the first large-scale dataset with over 10,000 carefully annotated images-question tuples to facilitate benchmarking. In particular, each tuple consists of a pair of images and 4.6 discriminative questions (as positive samples) and 5.9 non-discriminative questions (as negative samples) on average. In addition, we present an effective method for visual discriminative question generation. The method can be trained in a weakly supervised manner without discriminative images-question tuples but just existing visual question answering datasets. Promising results are shown against representative baselines through quantitative evaluations and user studies.", "title": "" }, { "docid": "d35515299b37b5eb936986d33aca66e1", "text": "This paper describes an Ada framework called Cheddar which provides tools to check if a real time application meets its temporal constraints. The framework is based on the real time scheduling theory and is mostly written for educational purposes. With Cheddar, an application is defined by a set of processors, tasks, buffers, shared resources and messages. Cheddar provides feasibility tests in the cases of monoprocessor, multiprocessor and distributed systems. 
It also provides a flexible simulation engine which allows the designer to describe and run simulations of specific systems. The framework is open and has been designed to be easily connected to CASE tools such as editors, design tools, simulators, ...", "title": "" }, { "docid": "3827b5f919c21dc7e228eaf78ffcfb46", "text": "In this paper we describe the development of both the hardware and the algorithms for a novel laser vision system suitable for measuring distances from both solid and mesh-like targets in underwater environments. The system was developed as a part of the AQUABOT project that developed an underwater robotic system for autonomous inspection of offshore aquaculture installations. The system takes into account the hemispherical optics typical in underwater vehicle designs and implements an array of line-lasers to ensure that mesh-like targets provide reflections in a consistent manner. The developed algorithms for the laser vision system are capable of providing either raw pointcloud data sets from each laser or, with additional processing, high-level information such as the distance and relative orientation of the target with respect to the ROV. An automatic calibration procedure, along with the accompanying hardware that was developed, is described in this paper to reduce the calibration overhead required by regular maintenance operations, as is typical for underwater vehicles operating in sea-water. A set of experimental results in a controlled laboratory environment as well as at offshore aquaculture installations demonstrates the performance of the system.", "title": "" }, { "docid": "35fbdf776186afa7d8991fa4ff22503d", "text": "Research and industry are becoming more and more interested in automatically finding the polarised opinion of the general public regarding a specific subject. The advent of social networks has opened the possibility of having access to massive blogs, recommendations, and reviews. The challenge is to extract the polarity from these data, which is a task of opinion mining or sentiment analysis. The specific difficulties inherent in this task include issues related to subjective interpretation and linguistic phenomena that affect the polarity of words. Recently, deep learning has become a popular method of addressing this task. However, different approaches have been proposed in the literature. This article provides an overview of deep learning for sentiment analysis in order to place these approaches in context.", "title": "" }, { "docid": "cc65c29b613deaef18758b124e7c13d5", "text": "A novel substrate integrated waveguide (SIW) fed horizontally polarized endfire magnetoelectric (ME) dipole antenna composed of an open-ended SIW with broad walls vertical to substrates and a pair of electric dipoles realized by four metallic patches is proposed. Simple configuration and excellent performance including an impedance bandwidth of 46.5%, stable gain of around 6 dBi, and symmetrical cardioid radiation patterns with low backward radiation and low cross polarizations are achieved. An SIW 90° twist integrated in a three-layered substrate is implemented in order to connect the ME-dipole antenna conveniently to the SIW beam-forming network with broad walls parallel to substrates. 
A $4 \\times 4$ SIW Butler matrix with a three-layered zigzag topology is then designed, which enables a size reduction of 45% for the matrix compared with conventional single-layered configuration but not affecting its operating characteristics. By employing a $2 \\times 4$ ME-dipole array with 90° twists, two-folded Butler matrices and four SIW 3 dB E-plane couplers, a multibeam endfire array that can radiate eight beams scanning in two dimensions is designed at the 60 GHz band. The fabricated prototype verifies that a wide impedance bandwidth of 22.1%, gain varying from 10 to 13 dBi and stable radiation beams can be obtained. Due to good performance and the compact structure with low fabrication costs, the proposed design would be attractive for future millimeter-wave wireless applications including 5G communications and the WiGig system.", "title": "" }, { "docid": "bb1cea4fd4922b15b6aec98d43280b8c", "text": "This report is on the design, control strategy, implementation, and performance evaluation of a novel leg–wheel transformable robot called TurboQuad, which can perform fast gait/mode coordination and transitions in wheeled mode, in legged trotting, and in legged walking while in motion. This functionality was achieved by including two novel setups in the robot that were not included in its predecessor, Quattroped. First, a new leg–wheel mechanism was used, in which the leg/wheel operation and its in situ transition can be driven by the same set of motors, so the actuation system and power can be utilized efficiently. Second, a bio-inspired control strategy was applied based on the central pattern generator and coupled oscillator networks, in which the gait/mode generation, coordination, and transitions can be integrally controlled. The robot was empirically built and its performances in the described three gaits/modes as well as the transitions among them were experimentally evaluated and will be discussed in this paper.", "title": "" }, { "docid": "469b38e907851ea4a4a968f6df289819", "text": "Classifying Remote Sensing Images (RSI) is a hard task. There are automatic approaches whose results normally need to be revised. The identification and polygon extraction tasks usually rely on applying classification strategies that exploit visual aspects related to spectral and texture patterns identified in RSI regions. There are a lot of image descriptors proposed in the literature for content-based image retrieval purposes that may be useful for RSI classification. This paper presents a comparative study to evaluate the potential of using successful color and texture image descriptors for remote sensing retrieval and classification. 
Seven descriptors that encode texture information and twelve color descriptors that can be used to encode spectral information were selected. We perform experiments to evaluate the effectiveness of these descriptors, considering image retrieval and classification tasks. To evaluate descriptors in classification tasks, we also propose a methodology based on KNN classifier. Experiments demonstrate that Joint Auto-Correlogram (JAC), Color Bitmap, Invariant Steerable Pyramid Decomposition (SID) and Quantized Compound Change Histogram (QCCH) yield the", "title": "" }, { "docid": "f8b201105e3b92ed4ef2a884cb626c0d", "text": "Several years of academic and industrial research efforts have converged to a common understanding on fundamental security building blocks for the upcoming vehicular communication (VC) systems. There is a growing consensus toward deploying a special-purpose identity and credential management infrastructure, i.e., a vehicular public-key infrastructure (VPKI), enabling pseudonymous authentication, with standardization efforts toward that direction. In spite of the progress made by standardization bodies (IEEE 1609.2 and ETSI) and harmonization efforts [Car2Car Communication Consortium (C2C-CC)], significant questions remain unanswered toward deploying a VPKI. Deep understanding of the VPKI, a central building block of secure and privacy-preserving VC systems, is still lacking. This paper contributes to the closing of this gap. We present SECMACE, a VPKI system, which is compatible with the IEEE 1609.2 and ETSI standards specifications. We provide a detailed description of our state-of-the-art VPKI that improves upon existing proposals in terms of security and privacy protection, and efficiency. SECMACE facilitates multi-domain operations in the VC systems and enhances user privacy, notably preventing linking pseudonyms based on timing information and offering increased protection even against honest-but-curious VPKI entities. We propose multiple policies for the vehicle–VPKI interactions and two large-scale mobility trace data sets, based on which we evaluate the full-blown implementation of SECMACE. With very little attention on the VPKI performance thus far, our results reveal that modest computing resources can support a large area of vehicles with very few delays and the most promising policy in terms of privacy protection can be supported with moderate overhead.", "title": "" }, { "docid": "fc63f044042826814cc761904b863967", "text": "iv ABSTRACT Element Detection in Japanese Comic Book Panels Toshihiro Kuboi Comic books are a unique and increasingly popular form of entertainment combining visual and textual elements of communication. This work pertains to making comic books more accessible. Specifically, this paper explains how we detect elements such as speech bubbles present in Japanese comic book panels. Some applications of the work presented in this paper are automatic detection of text and its transformation into audio or into other languages. Automatic detection of elements can also allow reasoning and analysis at a deeper semantic level than what's possible today. Our approach uses an expert system and a machine learning system. The expert system process information from images and inspires feature sets which help train the machine learning system. The expert system detects speech bubbles based on heuristics. The machine learning system uses machine learning algorithms. 
Specifically, Naive Bayes, Maximum Entropy, and support vector machine are used to detect speech bubbles. The algorithms are trained in a fully-supervised way and a semi-supervised way. Both the expert system and the machine learning system achieved high accuracy. We are able to train the machine learning algorithms to detect speech bubbles just as accurately as the expert system. We also applied the same approach to eye detection of characters in the panels, and are able to detect majority of the eyes but with low precision. However, we are able to improve the performance of our eye detection system significantly by combining the SVM and either the Naive Bayes or the AdaBoost classifiers. v ACKNOWLEDGMENTS This thesis is inspired by Eriq Augustine's original project on the machine translation of Japanese comic books. I would like to thank Eriq for his guidance and assistance on the selection of this thesis topic.", "title": "" }, { "docid": "c5ff79665033fd215411069cb860d641", "text": "This paper presents a new geometry-based method to determine if a cable-driven robot operating in a d-degree-of-freedom workspace (2 ≤ d ≤ 6) with n ≥ d cables can generate a given set of wrenches in a given pose, considering acceptable minimum and maximum tensions in the cables. To this end, the fundamental nature of the Available Wrench Set is studied. The latter concept, defined here, is closely related to similar sets introduced in [23, 4]. It is shown that the Available Wrench Set can be represented mathematically by a zonotope, a special class of convex polytopes. Using the properties of zonotopes, two methods to construct the Available Wrench Set are discussed. From the representation of the Available Wrench Set, computationallyefficient and non-iterative tests are presented to verify if this set includes the Task Wrench Set, the set of wrenches needed for a given task. INTRODUCTION AND PROBLEM DEFINITION A cable-driven robot, or simply cable robot, is a parallel robot whose actuated limbs are cables. The length of the cables can be adjusted in a coordinated manner to control the pose (position and orientation) and/or wrench (force and torque) at the moving platform. Pioneer applications of such mechanisms are the NIST Robocrane [1], the Falcon high-speed manipulator [15] and the Skycam [7]. The fact that cables can only exert efforts in one direction impacts the capability of the mechanism to generate wrenches at the platform. Previous work already presented methods to test if a set of wrenches – ranging from one to all possible wrenches – could be generated by a cable robot in a given pose, considering that cables work only in tension. Some of the proposed methods focus on fully constrained cable robots while others apply to unconstrained robots. In all cases, minimum and/or maximum cable tensions is considered. A complete section of this paper is dedicated to the comparison of the proposed approach with previous methods. A general geometric approach that addresses all possible cases without using an iterative algorithm is presented here. It will be shown that the results obtained with this approach are consistent with the ones previously presented in the literature [4, 5, 14, 17, 18, 22, 23, 24, 26]. This paper does not address the workspace of cable robots. The latter challenging problem was addressed in several papers over the recent years [10, 11, 12, 19, 25]. 
Before looking globally at the workspace, all proposed methods must go through the intermediate step of assessing the capability of a mechanism to generate a given set of wrenches. The approach proposed here is also compared with the intermediate steps of the papers on the workspace determination of cable robots. The task that a robot has to achieve implies that it will have to be able to generate a given set of wrenches in a given pose x. This Task Wrench Set, T, depends on the various applications of the considered robot, which can be for example to move a camera or other sensors [7, 6, 9, 3], manipulate payloads [15, 1] or simulate walking sensations to a user immersed in virtual reality [21], just to name a few. The Available Wrench Set, A, is the set of wrenches that the mechanism can generate. This set depends on the architecture of the robot, i.e., where the cables are attached on the platform and where the fixed winches are located. It also depends on the configuration pose as well as on the minimum and maximum acceptable tension in the cables. All the wrenches that are possibly needed to accomplish a task can", "title": "" }, { "docid": "9033dbdea330344abec5807ba431f141", "text": "Abstract. In this paper the solution of nonlinear programming problems by a Sequential Quadratic Programming (SQP) trust-region algorithm is considered. The aim of the present work is to promote global convergence without the need to use a penalty function. Instead, a new concept of a “filter” is introduced which allows a step to be accepted if it reduces either the objective function or the constraint violation function. Numerical tests on a wide range of test problems are very encouraging and the new algorithm compares favourably with LANCELOT and an implementation of Sl1QP.", "title": "" }, { "docid": "def6762457fd4e95a35e3c83990c4943", "text": "The possibility of controlling dexterous hand prostheses by using a direct connection with the nervous system is particularly interesting for the significant improvement of the quality of life of patients, which can derive from this achievement. Among the various approaches, peripheral nerve based intrafascicular electrodes are excellent neural interface candidates, representing an excellent compromise between high selectivity and relatively low invasiveness. Moreover, this approach has undergone preliminary testing in human volunteers and has shown promise. In this paper, we investigate whether the use of intrafascicular electrodes can be used to decode multiple sensory and motor information channels with the aim to develop a finite state algorithm that may be employed to control neuroprostheses and neurocontrolled hand prostheses. The results achieved both in animal and human experiments show that the combination of multiple sites recordings and advanced signal processing techniques (such as wavelet denoising and spike sorting algorithms) can be used to identify both sensory stimuli (in animal models) and motor commands (in a human volunteer). These findings have interesting implications, which should be investigated in future experiments.", "title": "" }, { "docid": "0cc74f4852ea59306d704d9660179faf", "text": "This article presents an overview of optical wireless (OW) communication systems that operate both in the short- (personal and indoor systems) and the long-range (outdoor and hybrid) regimes. 
Each of these areas is discussed in terms of (a) key requirements, (b) their application framework, (c) major impairments and applicable mitigation techniques, and (d) current and/or future trends. Personal communication systems are discussed within the context of point-to-point ultra-high speed data transfer. The most relevant application framework and related standards are presented, including the next generation Giga-IR standard that extends personal communication speeds to over 1 Gb/s. As far as indoor systems are concerned, emphasis is given on modeling the dispersive nature of indoor OW channels, on the limitations that dispersion imposes on user mobility and dispersion mitigation techniques. Visible light communication systems, which provide both illumination and communication over visible or hybrid visible/ infrared LEDs, are presented as the most important representative of future indoor OW systems. The discussion on outdoor systems focuses on the impact of atmospheric effects on the optical channel and associated mitigation techniques that extend the realizable link lengths and transfer rates. Currently, outdoor OW is commercially available at 10 Gb/s Ethernet speeds for Metro networks and Local-Area-Network interconnections and speeds are expected to increase as faster and more reliable optical components become available. This article concludes with hybrid optical wireless/radio-frequency (OW/RF) systems that employ an additional RF link to improve the overall system reliability. Emphasis is given on cooperation techniques between the reliable RF subsystem and the broadband OW system.", "title": "" }, { "docid": "e2c1a9e2df9ab526a846c4e28ec27bae", "text": "This paper considers the multiuser power control problem in a frequency-selective interference channel. The interference channel is modeled as a noncooperative game, and the existence and uniqueness of a Nash equilibrium are established for a two-player version of the game. An iterative water-filling algorithm is proposed to efficiently reach the Nash equilibrium. The iterative water-filling algorithm can be implemented distributively without the need for centralized control. It implicitly takes into account the loop transfer functions and cross couplings, and it reaches a competitively optimal power allocation by offering an opportunity for loops to negotiate the best use of power and frequency with each other. When applied to the upstream power backoff problem in very-high bit-rate digital subscriber lines and the downstream spectral compatibility problem in asymmetric digital subscriber lines, the new power control algorithm is found to give a significant performance improvement when compared with existing methods.", "title": "" }, { "docid": "65a4ec1b13d740ae38f7b896edb2eaff", "text": "The problem of evolutionary network analysis has gained increasing attention in recent years, because of an increasing number of networks, which are encountered in temporal settings. For example, social networks, communication networks, and information networks continuously evolve over time, and it is desirable to learn interesting trends about how the network structure evolves over time, and in terms of other interesting trends. One challenging aspect of networks is that they are inherently resistant to parametric modeling, which allows us to truly express the edges in the network as functions of time. 
This is because, unlike multidimensional data, the edges in the network reflect interactions among nodes, and it is difficult to independently model the edge as a function of time, without taking into account its correlations and interactions with neighboring edges. Fortunately, we show that it is indeed possible to achieve this goal with the use of a matrix factorization, in which the entries are parameterized by time. This approach allows us to represent the edge structure of the network purely as a function of time, and predict the evolution of the network over time. This opens the possibility of using the approach for a wide variety of temporal network analysis problems, such as predicting future trends in structures, predicting links, and node-centric anomaly/event detection. This flexibility is because of the general way in which the approach allows us to express the structure of the network as a function of time. We present a number of experimental results on a number of temporal data sets showing the effectiveness of the approach.", "title": "" }, { "docid": "ea7121fa37b2e41f202a042073c72c54", "text": "Sentiment analysis from text consists of extracting information about opinions, sentiments, and even emotions conveyed by writers towards topics of interest. It is often equated to opinion mining, but it should also encompass emotion mining. Opinion mining involves the use of natural language processing and machine learning to determine the attitude of a writer towards a subject. Emotion mining is also using similar technologies but is concerned with detecting and classifying writers emotions toward events or topics. Textual emotion-mining methods have various applications, including gaining information about customer satisfaction, helping in selecting teaching materials in e-learning, recommending products based on users emotions, and even predicting mental-health disorders. In surveys on sentiment analysis, which are often old or incomplete, the strong link between opinion mining and emotion mining is understated. This motivates the need for a different and new perspective on the literature on sentiment analysis, with a focus on emotion mining. We present the state-of-the-art methods and propose the following contributions: (1) a taxonomy of sentiment analysis; (2) a survey on polarity classification methods and resources, especially those related to emotion mining; (3) a complete survey on emotion theories and emotion-mining research; and (4) some useful resources, including lexicons and datasets.", "title": "" }, { "docid": "f8ddedb1bdc57d75fb5ea9bf81ec51f5", "text": "Given a text description, most existing semantic parsers synthesize a program in one shot. However, it is quite challenging to produce a correct program solely based on the description, which in reality is often ambiguous or incomplete. In this paper, we investigate interactive semantic parsing, where the agent can ask the user clarification questions to resolve ambiguities via a multi-turn dialogue, on an important type of programs called “If-Then recipes.” We develop a hierarchical reinforcement learning (HRL) based agent that significantly improves the parsing performance with minimal questions to the user. Results under both simulation and human evaluation show that our agent substantially outperforms non-interactive semantic parsers and rule-based agents.", "title": "" } ]
scidocsrr
0d222c483d76a2a15b45b03cd42c1487
The Importance of Being Recurrent for Modeling Hierarchical Structure
[ { "docid": "346349308d49ac2d3bb1cfa5cc1b429c", "text": "The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks.1 Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT’14 EnglishGerman and WMT’14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.", "title": "" } ]
[ { "docid": "9a515a1266a868ca5680fc5676ca4b37", "text": "To assure that an autonomous car is driving safely on public roads, its object detection module should not only work correctly, but show its prediction confidence as well. Previous object detectors driven by deep learning do not explicitly model uncertainties in the neural network. We tackle with this problem by presenting practical methods to capture uncertainties in a 3D vehicle detector for Lidar point clouds. The proposed probabilistic detector represents reliable epistemic uncertainty and aleatoric uncertainty in classification and localization tasks. Experimental results show that the epistemic uncertainty is related to the detection accuracy, whereas the aleatoric uncertainty is influenced by vehicle distance and occlusion. The results also show that we can improve the detection performance by 1%–5% by modeling the aleatoric uncertainty.", "title": "" }, { "docid": "bd7e66df779f7ab73f81fc3008ce3eb8", "text": "In recent days, researchers are actively analysing the human brain to understand the underlying mechanism of heterogeneous psychiatric conditions. Schizophrenia is a severe neurological disorder which has been characterized by varying symptoms namely hallucinations, delusions and cognitive problems. In this paper, we have investigated the resting state fMRI images of 15 normal controls and 12 Schizophrenia patients by constructing functional connectome through image preprocessing techniques namely Realignment, temporal correction, filtering, etc., The parcellation of neuroimage is performed based on Automated Anatomical Labelling (AAL) atlas and 74 regions of interest (ROI) are identified. Functional connectome of each subject includes Pearson correlation values of mean time courses obtained between the regions. These region to region functional connectivity is considered as features and the feature selection technique namely Fisher filtering, ReliefF filtering and Runs filtering are applied. Then the features which are found by different filtering techniques are fed as input to the supervised non linear classifiers namely Random forest, C4.5, Cost sensitive classification and regression tree and K-Nearest Neighbour classification algorithm. These algorithms have produced classification rules which are used in the prediction of Schizophrenia disorder. C4.5 has achieved the higher predictive accuracy of 93% with leave-one out cross-validation and the predominant feature or diagnostic biomarker is obtained from the rule. This feature is one among the commonly identified feature of different feature selection techniques. The work has shown that the biomarker corresponds to the alterations in the functional connectivity between the brain regions namely Rolandic operculum and Postcentral gyrus of brain's left hemisphere which is involved in sensorimotor function of human.", "title": "" }, { "docid": "95a3d8a0d180175f92bce70b69ac5f0a", "text": "This paper addresses the problem of deploying a network of robots into an environment, where the environment is hazardous to the robots. This may mean that there are adversarial agents in the environment trying to disable the robots, or that some regions of the environment tend to make the robots fail, for example due to radiation, fire, adverse weather, or caustic chemicals. A probabilistic model of the environment is formulated, under which recursive Bayesian filters are used to estimate the environment events and hazards online. 
The robots must control their positions both to avoid sensor failures and to provide useful sensor information by following the analytical gradient of mutual information computed using these online estimates. Mutual information is shown to combine the competing incentives of avoiding failure and collecting informative measurements under a common objective. Simulations demonstrate the performance of the algorithm.", "title": "" }, { "docid": "33b09a4689b3e948fc8a072c0d9672c2", "text": "This review article identifies and discusses some of the main issues and potential problems – paradoxes and pathologies – around the communication of recorded information, and points to some possible solutions. The article considers the changing contexts of information communication, with some caveats about the identification of ‘pathologies of information’, and analyses the changes over time in the way in which issues of the quantity and quality of information available have been regarded. Two main classes of problems and issues are discussed. The first comprises issues relating to the quantity and diversity of information available: information overload, information anxiety, etc. The second comprises issues relating to the changing information environment with the advent of Web 2.0: loss of identity and authority, emphasis on micro-chunking and shallow novelty, and the impermanence of information. A final section proposes some means of solution of problems and of improvements to the situation.", "title": "" }, { "docid": "e8f79d1ea8c260cb4102de4e497ce340", "text": "We propose a unified product embedded representation that is optimized for the task of retrieval-based product recommendation. We generate this representation using Content2Vec, a new deep architecture that merges product content information such as text and image, and we analyze its performance on hard recommendation setups such as cold-start and cross-category recommendations. In the case of a normal recommendation regime where collaborative information signal is available, we merge the product co-occurrence information and propose a second architecture Content2vec+ and show its lift in performance versus non-hybrid approaches in both cold start and normal recommendation regimes.", "title": "" }, { "docid": "321479bcaa8d9a183cf7d4c75c30b772", "text": "Undirected graphical models, or Markov networks, are a popular class of statistical models, used in a wide variety of applications. Popular instances of this class include Gaussian graphical models and Ising models. In many settings, however, it might not be clear which subclass of graphical models to use, particularly for non-Gaussian and non-categorical data. In this paper, we consider a general sub-class of graphical models where the node-wise conditional distributions arise from exponential families. 
This allows us to derive multivariate graphical model distributions from univariate exponential family distributions, such as the Poisson, negative binomial, and exponential distributions. Our key contributions include a class of M-estimators to fit these graphical model distributions; and rigorous statistical analysis showing that these M-estimators recover the true graphical model structure exactly, with high probability. We provide examples of genomic and proteomic networks learned via instances of our class of graphical models derived from Poisson and exponential distributions.", "title": "" }, { "docid": "8b4e1dde6a9c004ae6095d3ff5232595", "text": "The authors tested the effect of ambient scents in a shopping mall environment. Two competing models were used. The first model is derived from the environmental psychology research stream by Mehrabian and Russel (1974) and Donovan and Rossiter (1982) where atmospheric cues generate pleasure and arousal, and, in turn, an approach/avoidance behavior. The emotion–cognition model is supported by Zajonc and Markus (1984). The second model to be tested is based on Lazarus’ (1991) cognitive theory of emotions. In this latter model, shoppers’ perceptions of the retail environment and product quality mediate the effects of ambient scent cues on emotions and spending behaviors. Positive affect is enhanced from shoppers’ evaluations. Using structural equation modeling the authors conclude that the cognitive theory of emotions better explains the effect of ambient scent. Managerial implications are discussed. D 2003 Elsevier Science Inc. All rights reserved.", "title": "" }, { "docid": "e485aca373cf4543e1a8eeadfa0e6772", "text": "Identifying peer-review helpfulness is an important task for improving the quality of feedback that students receive from their peers. As a first step towards enhancing existing peerreview systems with new functionality based on helpfulness detection, we examine whether standard product review analysis techniques also apply to our new context of peer reviews. In addition, we investigate the utility of incorporating additional specialized features tailored to peer review. Our preliminary results show that the structural features, review unigrams and meta-data combined are useful in modeling the helpfulness of both peer reviews and product reviews, while peer-review specific auxiliary features can further improve helpfulness prediction.", "title": "" }, { "docid": "4eadadc508a2846b5827b2ca5ae4b29f", "text": "A plethora of definitions for innovation types has resulted in an ambiguity in the way the terms ‘innovation’ and ‘innovativeness’ are operationalized and utilized in the new product development literature. The terms radical, really-new, incremental and discontinuous are used ubiquitously to identify innovations. One must question, what is the difference between these different classifications? To date consistent definitions for these innovation types have not emerged from the new product research community. A review of the literature from the marketing, engineering, and new product development disciplines attempts to put some clarity and continuity to the use of these terms. This review shows that it is important to consider both a marketing and technological perspective as well as a macrolevel and microlevel perspective when identifying innovations. 
Additionally, it is shown when strict classifications from the extant literature are applied, a significant shortfall appears in empirical work directed toward radical and really new innovations. A method for classifying innovations is suggested so that practitioners and academics can talk with a common understanding of how a specific innovation type is identified and how the innovation process may be unique for that particular innovation type. A recommended list of measures based on extant literature is provided for future empirical research concerning technological innovations and innovativeness. © 2002 PDMA. All rights reserved. “A rose is a rose is a rose. And a rose by any other name would smell just as sweet.” Gertrude Stein & William Shakespeare", "title": "" }, { "docid": "4d0a51bbc27ff1625d0b2d50f072526f", "text": "Realistic image forgeries involve a combination of splicing, resampling, cloning, region removal and other methods. While resampling detection algorithms are effective in detecting splicing and resampling, copy-move detection algorithms excel in detecting cloning and region removal. In this paper, we combine these complementary approaches in a way that boosts the overall accuracy of image manipulation detection. We use the copy-move detection method as a pre-filtering step and pass those images that are classified as untampered to a deep learning based resampling detection framework. Experimental results on various datasets including the 2017 NIST Nimble Challenge Evaluation dataset comprising nearly 10,000 pristine and tampered images shows that there is a consistent increase of 8%-10% in detection rates, when copy-move algorithm is combined with different resampling detection algorithms. Introduction Fake images are becoming a growing threat to information reliability. With the ubiquitous availability of various powerful image editing software tools and smartphone apps such as Photoshop, GIMP, Snapseed and Pixlr, it has become very trivial to manipulate digital images. The field of Digital Image Forensics aims to develop tools that can identify the authenticity of digital images and localize regions in an image which have been tampered with. There are many types of image forgeries such as splicing objects from one image to another, removing objects or regions from images, creating copies of objects in the same image, and more. To detect these forgeries, researchers have proposed methods based on several techniques such as JPEG compression artifacts, resampling detection, lighting artifacts, noise inconsistencies, camera sensor noise, and many more. However, most techniques in literature focus on a specific type of manipulation or a groups of similar tamper operations. In realistic scenarios, a host of operations are applied when creating tampered images. For example, when an object is spliced onto an image, it is often accompanied by other operations such as scaling, rotation, smoothing, contrast enhancement, and more. Very few studies address these challenging scenarios with the aid of Image Forensics challenges and competitions such as IEEE Image Forensics challenge [1] and the recent NIST Nimble Media Forensics challenge [2]. These competitions try to mimic a realistic scenario and contain a large number of doctored images which involves several types of image manipulations. In order to detect the tampered images, a single detection method will not be sufficient to identify the different types of manipulations. 
In this paper, we demonstrate the importance of combining forgery detection algorithms, especially when the features are complementary, to boost the image manipulation detection rates. We propose a simple method to identify realistic forgeries by fusing two complementary approaches: resampling detection and copy-move detection. Our experimental results show the approach is promising and achieves an increase in detection rates. Image forgeries are usually created by splicing a portion of an image onto some other image. In the case of splicing or object removal, the tampered region is often scaled or rotated to make it proportional to the neighboring untampered area. This creates resampling of the image grid and detection of resampling indicates evidence of image manipulation. Several techniques have been proposed to detect resampling in digital images [3, 4, 5, 6, 7, 8, 9]. Similarly, copy-move forgeries are common, where a part of the image is copied and pasted on another part generally to conceal unwanted portions of the image. Detection of these copied parts indicates evidence of tampering [10, 11, 12, 13, 14, 15, 16, 17]. In this paper, we combine our previous work on resampling forgery detection [18] with a dense-field based copy-move forgery detection method developed by Cozzolino et al. [16] to assign a manipulation confidence score. We demonstrate that our algorithm is effective at detecting many different types of image tampering that can be used to verify the authenticity of digital images. In [18], we designed a detector based on Radon transform and deep learning. The detector found image artifacts imposed by classic upsampling, downsampling, clockwise and counter clockwise rotations, and shearing methods. We combined these five different resampling detectors with a JPEG compression detector and for each of the six detectors we output a heatmap which indicates the regions of resampling anomalies. The generated heatmaps were smoothed to localize the detection and determine the detection score. In this work, we combine the above approach with a copy-move forgery detector [16]. Our experiments demonstrate that the resampling features are complementary to the copy-", "title": "" }, { "docid": "20c57c17bd2db03d017b0f3fa8e2eb23", "text": "Recent research shows that the i-vector framework for speaker recognition can significantly benefit from phonetic information. A common approach is to use a deep neural network (DNN) trained for automatic speech recognition to generate a universal background model (UBM). Studies in this area have been done in relatively clean conditions. However, strong background noise is known to severely reduce speaker recognition performance. This study investigates a phonetically-aware i-vector system in noisy conditions. We propose a front-end to tackle the noise problem by performing speech separation and examine its performance for both verification and identification tasks. The proposed separation system trains a DNN to estimate the ideal ratio mask of the noisy speech. The separated speech is then used to extract enhanced features for the i-vector framework. We compare the proposed system against a multi-condition trained baseline and a traditional GMM-UBM i-vector system. 
Our proposed system provides an absolute average improvement of 8% in identification accuracy and 1.2% in equal error rate.", "title": "" }, { "docid": "c2553e6256ef130fbd5bc0029bb5e7b7", "text": "Using Blockchain seems a promising approach for Business Process Reengineering (BPR) to alleviate trust issues among stakeholders, by providing decentralization, transparency, traceability, and immutability of information along with its business logic. However, little work seems to be available on utilizing Blockchain for supporting BPR in a systematic and rational way, potentially leading to disappointments and even doubts on the utility of Blockchain. In this paper, as ongoing research, we outline Fides - a framework for exploiting Blockchain towards enhancing the trustworthiness for BPR. Fides supports diagnosing trust issues with AS-IS business processes, exploring TO-BE business process alternatives using Blockchain, and selecting among the alternatives. A business process of a retail chain for a food supply chain is used throughout the paper to illustrate Fides concepts.", "title": "" }, { "docid": "aebd98db07b67b883bf124ab9e5539ed", "text": "In this paper we introduce a high-precision query classification method to identify the intent of a user query given that it has been seen in the past based on informational, navigational, and transactional categorization. We propose using three vector representations of queries which, using support vector machines, allow past queries to be classified by user’s intents. The queries have been represented as vectors using two factors drawn from click-through data: the time users take to review the documents they select and the popularity (quantity of preferences) of the selected documents. Experimental results show that time is the factor that yields higher precision in classification. The experiments shown in this work illustrate that the proposed classifiers can effectively identify the intent of past queries with high-precision.", "title": "" }, { "docid": "fda10c187c97f5c167afaa0f84085953", "text": "We provide empirical evidence that suggests social media and stock markets have a nonlinear causal relationship. We take advantage of an extensive data set composed of social media messages related to DJIA index components. By using information-theoretic measures to cope for possible nonlinear causal coupling between social media and stock markets systems, we point out stunning differences in the results with respect to linear coupling. Two main conclusions are drawn: First, social media significant causality on stocks’ returns are purely nonlinear in most cases; Second, social media dominates the directional coupling with stock market, an effect not observable within linear modeling. Results also serve as empirical guidance on model adequacy in the investigation of sociotechnical and financial systems.", "title": "" }, { "docid": "94c47638f35abc67c366ceb871898b86", "text": "The past few years have seen a growing interest in the application\" of three-dimensional image processing. With the increasing demand for 3-D spatial information for tasks of passive navigation[7,12], automatic surveillance[9], aerial cartography\\l0,l3], and inspection in industrial automation, the importance of effective stereo analysis has been made quite clear. A particular challenge is to provide reliable and accurate depth data for input to object or terrain modelling systems (such as [5]. 
This paper describes an algorithm for such stereo sensing It uses an edge-based line-by-line stereo correlation scheme, and appears to be fast, robust, and parallel implementable. The processing consists of extracting edge descriptions for a stereo pair of images, linking these edges to their nearest neighbors to obtain the edge connectivity structure, correlating the edge descriptions on the basis of local edge properties, then cooperatively removmg those edge correspondences determined to be in error those which violate the connectivity structure of the two images. A further correlation process, using a technique similar to that used for the edges, is applied to the image intensity values over intervals defined by the previous correlation The result of the processing is a full image array disparity map of the scene viewed. Mechanism and Constraints Edge-based stereo uses operators to reduce an image to a depiction of its intensity boundaries, which are then correlated. Area-based stereo uses area windowing mechanisms to measure local statistical properties of the intensities, which can then be correlated. The system described here deals, initially, with the former, edges, because of the: a) reduced combinatorics (there are fewer edges than pixels), b) greater accuracy (edges can be positioned to sub-pixel precision, while area positioning precision is inversely proportional to window size, and considerably poorer), and c) more realistic in variance assumptions (area-based analysis presupposes that the photometric properties of a scene arc invariant to viewing position, while edge-based analysis works with the assumption that it is the geometric properties that are invariant to viewing position). Edges are found by a convolution operator They are located at positions in the image where a change in sign of second difference in intensity occurs. A particular operator, the one described here being 1 by 7 pixels in size, measures the directional first difference in intensity at each pixel' Second differences are computed from these, and changes in sign of these second differences are used to interpolate sero crossings (i.e. peaks in first difference). Certain local properties other than position are measured and associated with each edge contrast, image slope, and intensity to either side and links are kept to nearest neighbours above, below, and to the sides. It is these properties that define an edge and provide the basis for the correlation (see the discussions in [1,2]). The correlation is & search for edge correspondence between images Fig. 2 shows the edges found in the two images of fig. 1 with the second difference operator (note, all stereo pairs in this paper are drawn for cross-eyed viewing) Although the operator works in both horizontal and vertical directions, it only allows correlation on edges whose horizontal gradient lies above the noise one standard deviation of the first difference in intensity With no prior knowledge of the viewing situation, one could have any edge in one image matching any edge in the other. By constraining the geometry of the cameras during picture taking one can vastly limit the computation that is required in determining corresponding edges in the two images. Consider fig. 3. If two balanced, equal focal length cameras are arranged with axes parallel, then they can be conceived of as sharing a single common image plane. 
Any point in the scene will project to two points on that joint image plane (one through each of the two lens centers), the connection of which will produce a line parallel to the baseline between the cameras. Thus corresponding edges in the two images must lie along the tame line in the joint image plane This line is termed an epipolar line. If the baseline between the two cameras happens to be parallel to an axis of the cameras, then the correlation only need consider edges lying along corresponding lines parallel to that axis in the two images. Fig. 3 indicates this camera geometry a geometry which produces rectified The edge operator is simple, basically one dimensional, and is noteworthy only in that it it fast and fairly effective.", "title": "" }, { "docid": "56c22da1da59dcc31c95a49986504031", "text": "In this paper, we develop a new framework for sensing and recovering structured signals. In contrast to compressive sensing (CS) systems that employ linear measurements, sparse representations, and computationally complex convex/greedy algorithms, we introduce a deep learning framework that supports both linear and mildly nonlinear measurements, that learns a structured representation from training data, and that efficiently computes a signal estimate. In particular, we apply a stacked denoising autoencoder (SDA), as an unsupervised feature learner. SDA enables us to capture statistical dependencies between the different elements of certain signals and improve signal recovery performance as compared to the CS approach.", "title": "" }, { "docid": "5baf228a42c64a1728e2aa881844c021", "text": "This article addresses the problem of managing Moving Objects Databases (MODs) which capture the inherent imprecision of the information about the moving object's location at a given time. We deal systematically with the issues of constructing and representing the trajectories of moving objects and querying the MOD. We propose to model an uncertain trajectory as a three-dimensional (3D) cylindrical body and we introduce a set of novel but natural spatio-temporal operators which capture the uncertainty and are used to express spatio-temporal range queries. We devise and analyze algorithms for processing the operators and demonstrate that the model incorporates the uncertainty in a manner which enables efficient querying, thus striking a balance between the modeling power and computational efficiency. We address some implementation aspects which we experienced in our DOMINO project, as a part of which the operators that we introduce have been implemented. We also report on some experimental observations of a practical relevance.", "title": "" }, { "docid": "b4fca94e4c13cecfce5aabee910d5b02", "text": "We present a narrow-size multiband inverted-F antenna (IFA), which can easily fit inside the housing of display units of ultra-slim laptops. The narrowness of the antenna is achieved by allowing some of its metallic parts to extend over the sidewalls of the dielectric substrate. The antenna is aimed to operate in all the allocated WiFi and WiMAX frequency bands while providing near-omnidirectional coverage in the horizontal plane. The multiband performance of the proposed antenna and its omnidirectionality are validated by measurements.", "title": "" }, { "docid": "331bb4a2b28c391045bcd74d76dd26fb", "text": "This paper intends to data analysis for Li-Ion and Lead Acid Batteries. The analysis based on discharge parameters input and output were processed in Simulink MATLAB. 
The input parameters are nominal voltage, rated capacity, and SOC, while the output parameters consist of maximum capacity, fully charged voltage, nominal discharge current, internal resistance, exponential zone voltage, and exponential zone capacity. Study and investigation of Li-Ion batteries were done by comparing them to the Lead Acids at the voltage and battery capacity of 3.7 V, 1400 mAh and 12V, 100Ah respectively. The result showed that the maximum capacity parameter of Lead Acid batteries equally 104.16% is better than Li-Ions of 100%, while Li-Ion batteries is good for almost all others parameters except internal resistance.", "title": "" }, { "docid": "bfa87a59940f6848d8d5b53b89c16735", "text": "The over-segmentation of images into atomic regions has become a standard and powerful tool in Vision. Traditional superpixel methods, that operate at the pixel level, cannot directly capture the geometric information disseminated into the images. We propose an alternative to these methods by operating at the level of geometric shapes. Our algorithm partitions images into convex polygons. It presents several interesting properties in terms of geometric guarantees, region compactness and scalability. The overall strategy consists in building a Voronoi diagram that conforms to preliminarily detected line-segments, before homogenizing the partition by spatial point process distributed over the image gradient. Our method is particularly adapted to images with strong geometric signatures, typically man-made objects and environments. We show the potential of our approach with experiments on large-scale images and comparisons with state-of-the-art superpixel methods.", "title": "" } ]
scidocsrr
5d3bafad3028e23b54fbd34ea97604ba
Upregulation of 3-MST Relates to Neuronal Autophagy After Traumatic Brain Injury in Mice
[ { "docid": "78bbb5da105b70783bb49915482efb5e", "text": "The knowledge of the pathophysiology after traumatic head injury is necessary for adequate and patient-oriented treatment. As the primary insult, which represents the direct mechanical damage, cannot be therapeutically influenced, target of the treatment is the limitation of the secondary damage (delayed non-mechanical damage). It is influenced by changes in cerebral blood flow (hypo- and hyperperfusion), impairment of cerebrovascular autoregulation, cerebral metabolic dysfunction and inadequate cerebral oxygenation. Furthermore, excitotoxic cell damage and inflammation may lead to apoptotic and necrotic cell death. Understanding the multidimensional cascade of secondary brain injury offers differentiated therapeutic options.", "title": "" } ]
[ { "docid": "d09b4b59c30925bae0983c7e56c3386d", "text": "We describe a system that automatically extracts 3D geometry of an indoor scene from a single 2D panorama. Our system recovers the spatial layout by finding the floor, walls, and ceiling; it also recovers shapes of typical indoor objects such as furniture. Using sampled perspective sub-views, we extract geometric cues (lines, vanishing points, orientation map, and surface normals) and semantic cues (saliency and object detection information). These cues are used for ground plane estimation and occlusion reasoning. The global spatial layout is inferred through a constraint graph on line segments and planar superpixels. The recovered layout is then used to guide shape estimation of the remaining objects using their normal information. Experiments on synthetic and real datasets show that our approach is state-of-the-art in both accuracy and efficiency. Our system can handle cluttered scenes with complex geometry that are challenging to existing techniques.", "title": "" }, { "docid": "d95fc5f0e47faa05708df823195c98a4", "text": "As part of ongoing studies in developing new antimicrobials, a class of structurally novel 4-thiazolidinone derivatives incorporating three known bioactive nuclei such as thiazole, thiazolidinone and adamantane was synthesized by the multi-step reaction protocol, already reported in the literature. NMR and Molecular Modeling techniques were employed for structure elucidation and Z/E potential isomerism configuration of the analogues. Evaluation of antibacterial and antifungal activity showed that almost all compounds exhibited better results than reference drugs thus they could be promising candidates for novel drugs.", "title": "" }, { "docid": "16e1197633329b615bd4a07b6c9c5e27", "text": "This paper presents an analog front-end (AFE) IC for mutual capacitance touch sensing with 224 sensor channels in 0.18 μm CMOS with 3.3 V drive voltage. A 32-in touch sensing system and a 70-in one having 37 dB SNR for 1 mm diameter stylus at 240 Hz reporting rate are realized with the AFE. The AFE adopts a parallel drive method to achieve the large format and the high SNR simultaneously. With the parallel drive method, the measured SNRs of the AFE stay almost constant at a higher level regardless of the number of sensor channels, which was impossible by conventional sequential drive methods. A novel differential sensing scheme which enhances the immunity against the noise from a display device is also realized in the AFE. While the coupled LCD is on and off, the differences between the measured SNRs are less than 2 dB.", "title": "" }, { "docid": "49c19e5417aa6a01c59f666ba7cc3522", "text": "The effect of various drugs on the extracellular concentration of dopamine in two terminal dopaminergic areas, the nucleus accumbens septi (a limbic area) and the dorsal caudate nucleus (a subcortical motor area), was studied in freely moving rats by using brain dialysis. Drugs abused by humans (e.g., opiates, ethanol, nicotine, amphetamine, and cocaine) increased extracellular dopamine concentrations in both areas, but especially in the accumbens, and elicited hypermotility at low doses. On the other hand, drugs with aversive properties (e.g., agonists of kappa opioid receptors, U-50,488, tifluadom, and bremazocine) reduced dopamine release in the accumbens and in the caudate and elicited hypomotility. 
Haloperidol, a neuroleptic drug, increased extracellular dopamine concentrations, but this effect was not preferential for the accumbens and was associated with hypomotility and sedation. Drugs not abused by humans [e.g., imipramine (an antidepressant), atropine (an antimuscarinic drug), and diphenhydramine (an antihistamine)] failed to modify synaptic dopamine concentrations. These results provide biochemical evidence for the hypothesis that stimulation of dopamine transmission in the limbic system might be a fundamental property of drugs that are abused.", "title": "" }, { "docid": "1f17644b65aa6d6e7353b3da3780592b", "text": "We address the problem of search on graphs with multiple nodal attributes. We call such graphs weighted attribute graphs (WAGs). Nodes of a WAG exhibit multiple attributes with varying, non-negative weights. WAGs are ubiquitous in real-world applications. For example, in a co-authorship WAG, each author is a node; each attribute corresponds to a particular topic (e.g., databases, data mining, and machine learning); and the amount of expertise in a particular topic is represented by a non-negative weight on that attribute. A typical search in this setting specifies both connectivity between nodes and constraints on weights of nodal attributes. For example, a user's search may be: find three coauthors (i.e., a triangle) where each author's expertise is greater than 50 percent in at least one topic area (i.e., attribute). We propose a ranking function which unifies ranking between the graph structure and attribute weights of nodes. We prove that the problem of retrieving the optimal answer for graph search on WAGs is NP-complete. Moreover, we propose a fast and effective top-k graph search algorithm for WAGs. In an extensive experimental study with multiple real-world graphs, our proposed algorithm exhibits significant speed-up over competing approaches. On average, our proposed method is more than 7χ faster in query processing than the best competitor.", "title": "" }, { "docid": "46a8022eea9ed7bcfa1cd8041cab466f", "text": "In this paper, a bidirectional converter with a uniform controller for Vehicle to grid (V2G) application is designed. The bidirectional converter consists of two stages one is ac-dc converter and second is dc-dc converter. For ac-dc converter bipolar modulation is used. Two separate controller systems are designed for converters which follow active and reactive power commands from grid. Uniform controller provides reactive power support to the grid. The charger operates in two quadrants I and IV. There are three modes of operation viz. charging only operation, charging-capacitive operation and charging-inductive operation. During operation under these three operating modes vehicle's battery is not affected. The whole system is tested using MATLAB/SIMULINK.", "title": "" }, { "docid": "f1d542d7537a05049448c3ee2c58f089", "text": "Any injectable filler may elicit moderate-to-severe adverse events, ranging from nodules to abscesses to vascular occlusion. Fortunately, severe adverse events are uncommon for the majority of fillers currently on the market. Because these are rare events, it is difficult to identify the relevant risk factors and to design the most efficacious treatment strategies. Poor aesthetic outcomes are far more common than severe adverse events. 
These in contrast should be easily avoidable by ensuring that colleagues receive proper training and follow best practices.", "title": "" }, { "docid": "60f9a34771b844228e1d8da363e89359", "text": "3-mercaptopyruvate sulfurtransferase (3-MST) was a novel hydrogen sulfide (H2S)-synthesizing enzyme that may be involved in cyanide degradation and in thiosulfate biosynthesis. Over recent years, considerable attention has been focused on the biochemistry and molecular biology of H2S-synthesizing enzyme. In contrast, there have been few concerted attempts to investigate the changes in the expression of the H2S-synthesizing enzymes with disease states. To investigate the changes of 3-MST after traumatic brain injury (TBI) and its possible role, mice TBI model was established by controlled cortical impact system, and the expression and cellular localization of 3-MST after TBI was investigated in the present study. Western blot analysis revealed that 3-MST was present in normal mice brain cortex. It gradually increased, reached a peak on the first day after TBI, and then reached a valley on the third day. Importantly, 3-MST was colocalized with neuron. In addition, Western blot detection showed that the first day post injury was also the autophagic peak indicated by the elevated expression of LC3. Importantly, immunohistochemistry analysis revealed that injury-induced expression of 3-MST was partly colabeled by LC3. However, there was no colocalization of 3-MST with propidium iodide (cell death marker) and LC3 positive cells were partly colocalized with propidium iodide. These data suggested that 3-MST was mainly located in living neurons and may be implicated in the autophagy of neuron and involved in the pathophysiology of brain after TBI.", "title": "" }, { "docid": "f2e9083262c2680de3cf756e7960074a", "text": "Social commerce is a new development in e-commerce generated by the use of social media to empower customers to interact on the Internet. The recent advancements in ICTs and the emergence of Web 2.0 technologies along with the popularity of social media and social networking sites have seen the development of new social platforms. These platforms facilitate the use of social commerce. Drawing on literature from marketing and information systems (IS) the author proposes a new model to develop our understanding of social commerce using a PLS-SEM methodology to test the model. Results show that Web 2.0 applications are attracting individuals to have interactions as well as generate content on the Internet. Consumers use social commerce constructs for these activities, which in turn increase the level of trust and intention to buy. Implications, limitations, discussion, and future research directions are discussed at the end of the paper. © 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "185a5ea62aeffb3a5d8071d6eb21d461", "text": "Current mobile applications treat the end-user device as a \"thin client,\" with all of the heavy computations being offloaded to an infrastructure cloud. However, the computational capabilities of mobile devices are constantly improving, and it is worthwhile considering whether an edge-cloud that consists purely of mobile devices (operating effectively as \"thick clients\") can perform as well as, or even better than, an infrastructure cloud. In this paper, we study the trade-offs between offloading computation to an infrastructure cloud versus retaining the computation within a mobile edge-cloud. 
To this end, we develop and run two classes of applications on both types of clouds, and we analyze the performance of the two clouds in terms of the time taken to run the application, along with the total amount of battery power consumed in both cases. Our results indicate that there are indeed classes of applications where an edge-cloud can outperform an infrastructure cloud in terms of both latency and battery power.", "title": "" }, { "docid": "7e682f98ee6323cd257fda07504cba20", "text": "We present a method for automated segmentation of the vasculature in retinal images. The method produces segmentations by classifying each image pixel as vessel or nonvessel, based on the pixel's feature vector. Feature vectors are composed of the pixel's intensity and two-dimensional Gabor wavelet transform responses taken at multiple scales. The Gabor wavelet is capable of tuning to specific frequencies, thus allowing noise filtering and vessel enhancement in a single step. We use a Bayesian classifier with class-conditional probability density functions (likelihoods) described as Gaussian mixtures, yielding a fast classification, while being able to model complex decision surfaces. The probability distributions are estimated based on a training set of labeled pixels obtained from manual segmentations. The method's performance is evaluated on publicly available DRIVE (Staal et al.,2004) and STARE (Hoover et al.,2000) databases of manually labeled images. On the DRIVE database, it achieves an area under the receiver operating characteristic curve of 0.9614, being slightly superior than that presented by state-of-the-art approaches. We are making our implementation available as open source MATLAB scripts for researchers interested in implementation details, evaluation, or development of methods", "title": "" }, { "docid": "1427c235b4ca0b0557d62317d48e6b3f", "text": "In this paper, we propose a novel classification method for lung nodules from CT images based on hybrid features. Towards nodules of different types, including well-circumscribed, vascularized, juxta-pleural, pleural-tail, as well as ground glass optical (GGO) and non-nodule from CT scans, our method has achieved promising classification results. The proposed method utilizes hybrid descriptors consisting of statistical features from multi-view multi-scale convolutional neural networks (CNNs) and geometrical features from Fisher vector (FV) encodings based on scaleinvariant feature transform (SIFT). First, we approximate the nodule radii based on icosahedron sampling and intensity analysis. Then, we apply high frequency content measure analysis to obtain sampling views with more abundant information. After that, based on re-sampled views, we train multi-view multi-scale CNNs to extract statistical features and calculate FV encodings as geometrical features. Finally, we achieve hybrid features by merging statistical and geometrical features based on multiple kernel learning (MKL) and classify nodule types through a multi-class support vector machine. The experiments on LIDC-IDRI and ELCAP have shown that our method has achieved promising results and can be of great assistance for radiologists’ diagnosis of lung cancer in clinical practice.", "title": "" }, { "docid": "91f3268092606d2bd1698096e32c824f", "text": "Classic pipeline models for task-oriented dialogue system require explicit modeling the dialogue states and hand-crafted action spaces to query a domain-specific knowledge base. 
Conversely, sequence-to-sequence models learn to map dialogue history to the response in current turn without explicit knowledge base querying. In this work, we propose a novel framework that leverages the advantages of classic pipeline and sequence-to-sequence models. Our framework models a dialogue state as a fixed-size distributed representation and use this representation to query a knowledge base via an attention mechanism. Experiment on Stanford Multi-turn Multi-domain Taskoriented Dialogue Dataset shows that our framework significantly outperforms other sequenceto-sequence based baseline models on both automatic and human evaluation. Title and Abstract in Chinese 面向任务型对话中基于对话状态表示的序列到序列学习 面向任务型对话中,传统流水线模型要求对对话状态进行显式建模。这需要人工定义对 领域相关的知识库进行检索的动作空间。相反地,序列到序列模型可以直接学习从对话 历史到当前轮回复的一个映射,但其没有显式地进行知识库的检索。在本文中,我们提 出了一个结合传统流水线与序列到序列二者优点的模型。我们的模型将对话历史建模为 一组固定大小的分布式表示。基于这组表示,我们利用注意力机制对知识库进行检索。 在斯坦福多轮多领域对话数据集上的实验证明,我们的模型在自动评价与人工评价上优 于其他基于序列到序列的模型。", "title": "" }, { "docid": "0165e1e5affdf13a25489634aa2d4d6f", "text": "The process of conceptual aircraft design has advanced tremendously in the past few decades due to rapidly developing computer technology. Today’s modern aerospace systems exhibit strong, interdisciplinary coupling and require a multidisciplinary, collaborative approach. Efficient transfer, sharing, and manipulation of aircraft design and analysis data in such a collaborative environment demands a formal structured representation of data. XML, a W3C recommendation, is one such standard concomitant with a number of powerful capabilities that alleviate interoperability issues in a collaborative environment. A compact, generic, and comprehensive XML schema for an aircraft design markup language (ADML) is proposed here to represent aircraft conceptual design and analysis data. The purpose of this unified data format is to provide a common language for data communication, and to improve efficiency and productivity within a multidisciplinary, collaborative aricraft design environment. An important feature of the proposed schema is the very expressive and efficient low level schemata (raw data, mathematical objects, and basic geometry). As a proof of concept the schema is used to encode an entire Convair B58. As the complexity of models and number of disciplines increases, the reduction in effort to exchange data models and analysis results in ADML also increases.", "title": "" }, { "docid": "e5f3a4d3e1fd591b81da2c08b228ce47", "text": "This article is a tutorial for researchers who are designing software to perform a creative task and want to evaluate their system using interdisciplinary theories of creativity. Researchers who study human creativity have a great deal to offer computational creativity. We summarize perspectives from psychology, philosophy, cognitive science, and computer science as to how creativity can be measured both in humans and in computers. We survey how these perspectives have been used in computational creativity research and make recommendations for how they should be used.", "title": "" }, { "docid": "6b130d9179bbf640644423e67289b29b", "text": "Although both reaching and grasping require transporting the hand to the object location, only grasping also requires processing of object shape, size and orientation to preshape the hand. Behavioural and neuropsychological evidence suggests that the object processing required for grasping relies on different neural substrates from those mediating object recognition. 
Specifically, whereas object recognition is believed to rely on structures in the ventral (occipitotemporal) stream, object grasping appears to rely on structures in the dorsal (occipitoparietal) stream. We used functional magnetic resonance imaging (fMRI) to determine whether grasping (compared to reaching) produced activation in dorsal areas, ventral areas, or both. We found greater activity for grasping than reaching in several regions, including anterior intraparietal (AIP) cortex. We also performed a standard object perception localizer (comparing intact vs. scrambled 2D object images) in the same subjects to identify the lateral occipital complex (LOC), a ventral stream area believed to play a critical role in object recognition. Although LOC was activated by the objects presented on both grasping and reaching trials, there was no greater activity for grasping compared to reaching. These results suggest that dorsal areas, including AIP, but not ventral areas such as LOC, play a fundamental role in computing object properties during grasping.", "title": "" }, { "docid": "d9599c4140819670a661bd4955680bb7", "text": "The paper assesses the demand for rural electricity services and contrasts it with the technology options available for rural electrification. Decentralised Distributed Generation can be economically viable as reflected by case studies reported in literature and analysed in our field study. Project success is driven by economically viable technology choice; however it is largely contingent on organisational leadership and appropriate institutional structures. While individual leadership can compensate for deployment barriers, we argue that a large scale roll out of rural electrification requires an alignment of economic incentives and institutional structures to implement, operate and maintain the scheme. This is demonstrated with the help of seven case studies of projects across north India. 1 Introduction We explore the contribution that decentralised and renewable energy technologies can make to rural electricity supply in India. We take a case study approach, looking at seven sites across northern India where renewable energy technologies have been established to provide electrification for rural communities. We supplement our case studies with stakeholder interviews and household surveys, estimating levels of demand for electricity services from willingness and ability to pay. We also assess the overall viability of Distributed Decentralised Generation (DDG) projects by investigating the costs of implementation as well as institutional and organisational barriers to their operation and replication. Renewable energy technologies represent some of the most promising options available for distributed and decentralised electrification. Demand for reliable electricity services is significant. It represents a key driver behind economic development and raising basic standards of living. This is especially applicable to rural India home to 70% of the nation's population and over 25% of the world's poor. Access to reliable and affordable electricity can help support income-generating activity and allow utilisation of modern appliances and agricultural equipment whilst replacing inefficient and polluting kerosene lighting. Presently only around 55% of households are electrified (MOSPI 2006) leaving over 20 million households without power. 
The supply of electricity across India currently lacks both quality and quantity with an extensive shortfall in supply, a poor record for outages, high levels of transmission and distribution (T&D) losses and an overall need for extended and improved infrastructure (GoI 2006). The Indian Government recently outlined an ambitious plan for 100% village level electrification by the end of 2007 and total household electrification by 2012. To achieve this, a major programme of grid extension and strengthening of the rural electricity infrastructure has been initiated under …", "title": "" }, { "docid": "62ea6783f6a3e6429621286b4a1f068d", "text": "Aviation delays inconvenience travelers and result in financial losses for stakeholders. Without complex data pre-processing, delay data collected by the existing IATA delay coding system are inadequate to support advanced delay analytics, e.g. large-scale delay propagation tracing in an airline network. Consequently, we developed three new coding schemes aiming at improving the current IATA system. These schemes were tested with specific analysis tasks using simulated delay data and were benchmarked against the IATA system. It was found that a coding scheme with a well-designed reporting style can facilitate automated data analytics and data mining, and an improved grouping of delay codes can minimise potential confusion at the data entry and recording stages. © 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "435a764aaf6bdd39a3d40771bc1f111e", "text": "Wikipedia, the popular online encyclopedia, has in just six years grown from an adjunct to the now-defunct Nupedia to over 31 million pages and 429 million revisions in 256 languages and spawned sister projects such as Wiktionary and Wikisource. Available under the GNU Free Documentation License, it is an extraordinarily large corpus with broad scope and constant updates. Its articles are largely consistent in structure and organized into category hierarchies. However, the wiki method of collaborative editing creates challenges that must be addressed. Wikipedia’s accuracy is frequently questioned, and systemic bias means that quality and coverage are uneven, while even the variety of English dialects juxtaposed can sabotage the unwary with differences in semantics, diction and spelling. This paper examines Wikipedia from a research perspective, providing basic background knowledge and an understanding of its strengths and weaknesses. We also solve a technical challenge posed by the enormity of text (1.04TB for the English version) made available with a simple, easily-implemented dictionary compression algorithm that permits time-efficient random access to the data with a twenty-eight-fold reduction in size.", "title": "" }, { "docid": "e83c81831f659303f3fe27987dd18a58", "text": "We experimentally evaluate the network-level switching time of a functional 23-host prototype hybrid optical circuit-switched/electrical packet-switched network for datacenters called Mordia (Microsecond Optical Research Datacenter Interconnect Architecture). This hybrid network uses a standard electrical packet switch and an optical circuit-switched architecture based on a wavelength-selective switch that has a measured mean port-to-port network reconfiguration time of 11.5 $\\mu{\\rm s}$ including the signal acquisition by the network interface card. 
Using multiple parallel rings, we show that this architecture can scale to support the large bisection bandwidth required for future datacenters.", "title": "" } ]
scidocsrr
1c49486bf844c6781573f83ed503b988
A Group Recommender System for Tourist Activities
[ { "docid": "2e9d5a0f975a42e79a5c7625fc246502", "text": "e-Tourism is a tourist recommendation and planning application to assist users on the organization of a leisure and tourist agenda. First, a recommender system offers the user a list of the city places that are likely of interest to the user. This list takes into account the user demographic classification, the user likes in former trips and the preferences for the current visit. Second, a planning module schedules the list of recommended places according to their temporal characteristics as well as the user restrictions; that is the planning system determines how and when to perform the recommended activities. This is a very relevant feature that most recommender systems lack as it allows the user to have the list of recommended activities organized as an agenda, i.e. to have a totally executable plan.", "title": "" } ]
[ { "docid": "63e123277918d08ed6ee497dd6e7e588", "text": "This study provided a comprehensive examination of the full range of transformational, transactional, and laissez-faire leadership. Results (based on 626 correlations from 87 sources) revealed an overall validity of .44 for transformational leadership, and this validity generalized over longitudinal and multisource designs. Contingent reward (.39) and laissez-faire (-.37) leadership had the next highest overall relations; management by exception (active and passive) was inconsistently related to the criteria. Surprisingly, there were several criteria for which contingent reward leadership had stronger relations than did transformational leadership. Furthermore, transformational leadership was strongly correlated with contingent reward (.80) and laissez-faire (-.65) leadership. Transformational and contingent reward leadership generally predicted criteria controlling for the other leadership dimensions, although transformational leadership failed to predict leader job performance.", "title": "" }, { "docid": "8130259ccaac4ed55238faba07402aff", "text": "A long history of diabetes mellitus and increasing age are associated with the onset of diabetic neuropathy, a painful and highly disabling complication with a prevalence peaking at 50% among elderly diabetic patients. Acetyl-L-carnitine (ALC) is a molecule derived from the acetylation of carnitine in the mitochondria that has an essential role in energy production. It has recently been proposed as a therapy to improve the symptoms of diabetic neuropathy. ALC is widely distributed in mammalian tissues, including the brain, blood-brain barrier, brain neurons, and astrocytes. Aside from its metabolic activity, ALC has demonstrated cytoprotective, antioxidant, and antiapoptotic effects in the nervous system. It exerts an analgesic action by reducing the concentration of glutamate in the synapses. It facilitates nerve regeneration and damage repair after primary trauma: its positive effects on metabolism promote the synthesis, fluidity, and functionality of neuronal membranes, increase protein synthesis, and improve the axonal transport of neurofilament proteins and tubulin. It also amplifies nerve growth factor responsiveness, an effect that is believed to enhance overall neurite growth. ALC has been proposed for the treatment of various neurological and psychiatric diseases, such as mood disorders and depression, dementias, Alzheimer's disease, and Parkinson's disease, because synaptic energy states and mitochondrial dysfunction are core factors in their pathogenesis.", "title": "" }, { "docid": "279c377e12cdb8aec7242e0e9da2dd26", "text": "It is well accepted that pain is a multidimensional experience, but little is known of how the brain represents these dimensions. We used positron emission tomography (PET) to indirectly measure pain-evoked cerebral activity before and after hypnotic suggestions were given to modulate the perceived intensity of a painful stimulus. These techniques were similar to those of a previous study in which we gave suggestions to modulate the perceived unpleasantness of a noxious stimulus. Ten volunteers were scanned while tonic warm and noxious heat stimuli were presented to the hand during four experimental conditions: alert control, hypnosis control, hypnotic suggestions for increased-pain intensity and hypnotic suggestions for decreased-pain intensity. 
As shown in previous brain imaging studies, noxious thermal stimuli presented during the alert and hypnosis-control conditions reliably activated contralateral structures, including primary somatosensory cortex (S1), secondary somatosensory cortex (S2), anterior cingulate cortex, and insular cortex. Hypnotic modulation of the intensity of the pain sensation led to significant changes in pain-evoked activity within S1 in contrast to our previous study in which specific modulation of pain unpleasantness (affect), independent of pain intensity, produced specific changes within the ACC. This double dissociation of cortical modulation indicates a relative specialization of the sensory and the classical limbic cortical areas in the processing of the sensory and affective dimensions of pain.", "title": "" }, { "docid": "bb6ec993e0d573f4307a37588d6732ae", "text": "Beaudry and Pinsonneault (2005) IT related coping behaviors System users choose different adaptation strategies based on a combination of primary appraisal (i.e., a user’s assessment of the expected consequences of an IT event) and secondary appraisal (i.e., a user’s assessment of his/her control over the situation). Users will perform different actions in response to a combination of cognitive and behavioral efforts, both of which have been categorized as either problemor emotion-focused. Whole system", "title": "" }, { "docid": "1b6dfa953ee044fceb17640cc862a534", "text": "Introduction The rapid pace at which the technological innovations are being introduced in the world poses a potential challenge to the retailer, supplier, and enterprises. In the field of Information Technology (IT) there is a rapid growth in the last 30 years (Want 2006; Landt 2005). One of the most promising technological innovations in IT is radio frequency identification (RFID) (Dutta et al. 2007; Whitaker et al. 2007; Bottani et al. 2009). The RFID technology was evolved in 1945 as an espionage tool invented by Leon Theremin for the Soviet Government (Nikitin et al. 2013, Tedjini et al. 2012). At that time it was mainly used by the military. The progress in microchip design, antenna technology and radio spread spectrum pushed it into various applications like supply chain management, retail, automatic toll collection by tunnel companies, animal tracking, ski lift access, tracking library books, theft prevention, vehicle immobilizer systems, railway rolling stock identification, movement tracking, security, healthcare, printing, textiles and clothing (Weinstein 2005; Liu and Miao 2006; Rao et al. 2005; Wu et al. 2009; Tan 2008). RFID can make the companies more competitive by changing the related processes in supply chain, manufacturing and retailing. Abstract", "title": "" }, { "docid": "42c7c881935df8b22068dabdd48a05e8", "text": "Dropout training, originally designed for deep neural networks, has been successful on high-dimensional single-layer natural language tasks. This paper proposes a theoretical explanation for this phenomenon: we show that, under a generative Poisson topic model with long documents, dropout training improves the exponent in the generalization bound for empirical risk minimization. Dropout achieves this gain much like a marathon runner who practices at altitude: once a classifier learns to perform reasonably well on training examples that have been artificially corrupted by dropout, it will do very well on the uncorrupted test set. 
We also show that, under similar conditions, dropout preserves the Bayes decision boundary and should therefore induce minimal bias in high dimensions.", "title": "" }, { "docid": "8c575ae46ac2969c19a841c7d9a8cb5a", "text": "Constrained Local Models (CLMs) are a well-established family of methods for facial landmark detection. However, they have recently fallen out of favor to cascaded regressionbased approaches. This is in part due to the inability of existing CLM local detectors to model the very complex individual landmark appearance that is affected by expression, illumination, facial hair, makeup, and accessories. In our work, we present a novel local detector – Convolutional Experts Network (CEN) – that brings together the advantages of neural architectures and mixtures of experts in an end-toend framework. We further propose a Convolutional Experts Constrained Local Model (CE-CLM) algorithm that uses CEN as a local detector. We demonstrate that our proposed CE-CLM algorithm outperforms competitive state-of-the-art baselines for facial landmark detection by a large margin, especially on challenging profile images.", "title": "" }, { "docid": "9a136517edbfce2a7c6b302da9e6c5b7", "text": "This paper presents our approach to semantic relatedness and textual entailment subtasks organized as task 1 in SemEval 2014. Specifically, we address two questions: (1) Can we solve these two subtasks together? (2) Are features proposed for textual entailment task still effective for semantic relatedness task? To address them, we extracted seven types of features including text difference measures proposed in entailment judgement subtask, as well as common text similarity measures used in both subtasks. Then we exploited the same feature set to solve the both subtasks by considering them as a regression and a classification task respectively and performed a study of influence of different features. We achieved the first and the second rank for relatedness and entailment task respectively.", "title": "" }, { "docid": "21c4a6bb8fee4e403c6cd384e1e423be", "text": "Fault detection prediction of FAB (wafer fabrication) process in semiconductor manufacturing process is possible that improve product quality and reliability in accordance with the classification performance. However, FAB process is sometimes due to a fault occurs. And mostly it occurs “pass”. Hence, data imbalance occurs in the pass/fail class. If the data imbalance occurs, prediction models are difficult to predict “fail” class because increases the bias of majority class (pass class). In this paper, we propose the SMOTE (Synthetic Minority Oversampling Technique) based over sampling method for solving problem of data imbalance. The proposed method solve the imbalance of the between pass and fail by oversampling the minority class of fail. In addition, by applying the fault detection prediction model to measure the performance.", "title": "" }, { "docid": "405182cedabc0c75c1b79052bd6db5b3", "text": "Human resource management systems (HRMS) integrate human resource processes and an organization's information systems. An HRMS frequently represents one of the modules of an enterprise resource planning system (ERP). ERPs are information systems that manage the business and consist of integrated software applications such customer relations and supply chain management, manufacturing, finance and human resources. 
ERP implementation projects frequently have high failure rates; although research has investigated a number of factors for success and failure rates, limited attention has been directed toward the implementation teams, and how to make these more effective. In this paper we argue that shared leadership represents an appropriate approach to improving the functioning of ERP implementation teams. Shared leadership represents a form of team leadership where the team members, rather than only a single team leader, engage in leadership behaviors. While shared leadership has received increased research attention during the past decade, it has not been applied to ERP implementation teams and therefore that is the purpose of this article. Toward this end, we describe issues related to ERP and HRMS implementation, teams, and the concept of shared leadership, review theoretical and empirical literature, present an integrative framework, and describe the application of shared leadership to ERP and HRMS implementation. Published by Elsevier Inc.", "title": "" }, { "docid": "5cdcb7073bd0f8e1b0affe5ffb4adfc7", "text": "This paper presents a nonlinear controller for hovering flight and touchdown control for a vertical take-off and landing (VTOL) unmanned aerial vehicle (UAV) using inertial optical flow. The VTOL vehicle is assumed to be a rigid body, equipped with a minimum sensor suite (camera and IMU), manoeuvring over a textured flat target plane. Two different tasks are considered in this paper: the first concerns the stability of hovering flight and the second one concerns regulation of automatic landing using the divergent optical flow as feedback information. Experimental results on a quad-rotor UAV demonstrate the performance of the proposed control strategy.", "title": "" }, { "docid": "934875351d5fa0c9b5c7499ca13727ab", "text": "Computation of the simplicial complexes of a large point cloud often relies on extracting a sample, to reduce the associated computational burden. The study considers sampling critical points of a Morse function associated to a point cloud, to approximate the Vietoris-Rips complex or the witness complex and compute persistence homology. The effectiveness of the novel approach is compared with the farthest point sampling, in a context of classifying human face images into ethnics groups using persistence homology.", "title": "" }, { "docid": "959a8602cb7292a7daf341d2b7614492", "text": "This paper presents a calibration method for eye-in-hand systems in order to estimate the hand-eye and the robot-world transformations. The estimation takes place in terms of a parametrization of a stochastic model. In order to perform optimally, a metric on the group of the rigid transformations SE(3) and the corresponding error model are proposed for nonlinear optimization. This novel metric works well with both common formulations AX=XB and AX=ZB, and makes use of them in accordance with the nature of the problem. The metric also adapts itself to the system precision characteristics. The method is compared in performance to earlier approaches", "title": "" }, { "docid": "a40b8e1bad22921a317c290e17478689", "text": "Two novel adaptive nonlinear filter structures are proposed which are based on linear combinations of order statistics. 
These adaptive schemes are modifications of the standard LMS algorithm and have the ability to incorporate constraints imposed on coefficients in order to permit location-invariant and unbiased estimation of a constant signal in the presence of additive white noise. The convergence in the mean and in the mean square of the proposed adaptive nonlinear filters is studied. The rate of convergence is also considered. It is verified by simulations that the independence theory provides useful bounds on the rate of convergence. The extreme eigenvalues of the matrix which controls the performance of the location-invariant adaptive LMS L-filter are related to the extreme eigenvalues of the correlation matrix of the ordered noise samples which controls the performance of other adaptive LMS L-filters proposed elsewhere. The proposed filters can adapt well to a variety of noise probability distributions ranging from the short-tailed ones (e.g. uniform distribution) to long-tailed ones (e.g. Laplacian distribution). Zusammenfassung. Es werden zwei neue adaptive nichtlineare Filterstrukturen vorgeschlagen, die auf Linearkombinationen yon Order-Statistik beruhen. Diese adaptiven Strukturen sind Modifikationen des iiblichen LMS-Algorithmus und erlauben die Einbringung yon Bedingungen bezfiglich der Koeffizienten, um ortsinvariante und erwartungstreue Schfitzungen konstanter Signale unter additivem, weiBem Rauschen zu erm6glichen. Die Konvergenz bez/iglich des Mittelwertes und des quadratischen Mittelwertes wird fiir die vorgeschlagenen nichtlinearen Filter untersucht. Weiterhin wird die Konvergenzgeschwindigkeit betrachtet. Durch Simulationen wird gezeigt, dab die Independence-Theorie brauchbare Grenzen ffir die Konvergenzrate liefert. Die extremen Eigenwerte der Matrix, die das Verhalten des ortsinvarianten adaptiven LMS L-Filters bestimmt, werden den Eigenwerten der Korrelationsmatrix der geordneten Rauschabtastwerte gegeniibergestellt, die das Verhalten anderer adaptiver LMS L-Filter bestimmt. Die vorgeschlagenen Filter stellen sich sehr gut auf eine Vielzahl verschiedener Verteilungsdichten des Rauschens ein, angefangen von schmalen Verteilungen (z.B. Gleichverteilung) bis hin zu langsam abfallenden (z.B. Laplace). R6sum+. Nous proposons deux structures de filtre non-lin~aire originales, structures bastes sur des combinaisons lin6aires de statistiques d'ordre. Ces techniques adaptatives sont des modifications de I'algorithme LMS standard et ont la capacit6 d'incorporer des contraintes impos+es sur les coefficients afin de permettre une estimation ne variant pas selon la localisation et non biais6e d 'un signal constant en pr6sence de bruit blanc additif. Nous 6tudions la convergence en moyenne et en moyenne quadratique des filtres non-lin6aires adaptatifs propos6s. Nous consid~rons 6galement le taux de convergence. Nous v+rifions par des simulations que l'hypoth+se d'ind6pendance fournit des bornes utiles sur le taux de convergence. Nous relions les valeurs propres extremes de la matrice qui contr61e les performances du L-filtre LMS adaptatif ne variant pas selon la localisation aux valeurs propres extremes de la matrice de correlation des 6chantillons de bruit ordonn+s qui contr61e les performances d'autres L-filtres LMS proposes ailleurs. Les filtres propos6s peuvent s 'adapter ais+ment 5. une vari6te de distributions de densit6 de bruit allant de celles 5. queue courte (p.e. la distribution uniforme) 5. celles 5. queue longue (p.e. 
la distribution de Laplace).", "title": "" }, { "docid": "66133239610bb08d83fb37f2c11a8dc5", "text": "sists of two excitation laser beams. One beam scans the volume of the brain from the side of a horizontally positioned zebrafish but is rapidly switched off when inside an elliptical exclusion region located over the eye (Fig. 1b). Simultaneously, a second beam scans from the front, to cover the forebrain and the regions between the eyes. Together, these two beams achieve nearly complete coverage of the brain without exposing the retina to direct laser excitation, which allows unimpeded presentation of visual stimuli that are projected onto a screen below the fish. To monitor intended swimming behavior, we used existing methods for recording activity from motor neuron axons in the tail of paralyzed larval zebrafish1 (Fig. 1a and Supplementary Note). This system provides imaging speeds of up to three brain volumes per second (40 planes per brain volume); increases in camera speed will allow for faster volumetric sampling. Because light-sheet imaging may still introduce some additional sensory stimulation (excitation light scattering in the brain and reflected from the glass walls of the chamber), we assessed whether fictive behavior in 5–7 d post-fertilization (d.p.f.) fish was robust to the presence of the light sheets. We tested two visuoLight-sheet functional imaging in fictively behaving zebrafish", "title": "" }, { "docid": "f62950bcb20c034de7a78f21887ce05b", "text": "In the past decade, the role of data has increased exponentially from something that is queried or reported on, to becoming a true corporate asset. The same time period has also seen marked growth in corporate structural complexity. This combination has lead to information management challenges, as the data moving across a multitude of systems lends itself to a higher likelihood of impacting dependent processes and systems, should something go wrong or be changed. Many enterprise data projects are faced with low success rates and consequently subject to high amounts of scrutiny as senior leadership struggles to identify return on investment. While there are many tools and methods to increase a companies' ability to govern data, this research is based on the premise that you can not govern what you do not know. This lack of awareness of the corporate data landscape impacts the ability to govern data, which in turn impacts overall data quality within organizations. This paper seeks to propose a tools and techniques for companies to better gain an awareness of the landscape of their data, processes, and organizational attributes through the use of linked data, via the Resource Description Framework (RDF) and ontology. The outcome of adopting such techniques is an increased level of data awareness within the organization, resulting in improved ability to govern corporate data assets, and in turn increased data quality.", "title": "" }, { "docid": "161bfbfef048ba5d3841818278410005", "text": "Memory bandwidth has been one of the most critical system performance bottlenecks. As a result, the HMC (Hybrid Memory Cube) has recently been proposed to improve DRAM bandwidth as well as energy efficiency. In this paper, we explore different system interconnect designs with HMCs. We show that processor-centric network architectures cannot fully utilize processor bandwidth across different traffic patterns. 
Thus, we propose a memory-centric network in which all processor channels are connected to HMCs and not to any other processors as all communication between processors goes through intermediate HMCs. Since there are multiple HMCs per processor, we propose a distributor-based network to reduce the network diameter and achieve lower latency while properly distributing the bandwidth across different routers and providing path diversity. Memory-centric networks lead to some challenges including higher processor-to-processor latency and the need to properly exploit the path diversity. We propose a pass-through microarchitecture, which, in combination with the proper intra-HMC organization, reduces the zero-load latency while exploiting adaptive (and non-minimal) routing to load-balance across different channels. Our results show that memory-centric networks can efficiently utilize processor bandwidth for different traffic patterns and achieve higher performance by providing higher memory bandwidth and lower latency.", "title": "" }, { "docid": "462a0746875e35116f669b16d851f360", "text": "We previously have applied deep autoencoder (DAE) for noise reduction and speech enhancement. However, the DAE was trained using only clean speech. In this study, by using noisyclean training pairs, we further introduce a denoising process in learning the DAE. In training the DAE, we still adopt greedy layer-wised pretraining plus fine tuning strategy. In pretraining, each layer is trained as a one-hidden-layer neural autoencoder (AE) using noisy-clean speech pairs as input and output (or transformed noisy-clean speech pairs by preceding AEs). Fine tuning was done by stacking all AEs with pretrained parameters for initialization. The trained DAE is used as a filter for speech estimation when noisy speech is given. Speech enhancement experiments were done to examine the performance of the trained denoising DAE. Noise reduction, speech distortion, and perceptual evaluation of speech quality (PESQ) criteria are used in the performance evaluations. Experimental results show that adding depth of the DAE consistently increase the performance when a large training data set is given. In addition, compared with a minimum mean square error based speech enhancement algorithm, our proposed denoising DAE provided superior performance on the three objective evaluations.", "title": "" } ]
scidocsrr
4d7140ec42beb7a7df2908c7b6f74ab6
Deep Graph Infomax
[ { "docid": "54d3d5707e50b979688f7f030770611d", "text": "In this article, we describe an automatic differentiation module of PyTorch — a library designed to enable rapid research on machine learning models. It builds upon a few projects, most notably Lua Torch, Chainer, and HIPS Autograd [4], and provides a high performance environment with easy access to automatic differentiation of models executed on different devices (CPU and GPU). To make prototyping easier, PyTorch does not follow the symbolic approach used in many other deep learning frameworks, but focuses on differentiation of purely imperative programs, with a focus on extensibility and low overhead. Note that this preprint is a draft of certain sections from an upcoming paper covering all PyTorch features.", "title": "" }, { "docid": "268e0e06a23f495cc36958dafaaa045a", "text": "Artificial intelligence (AI) has undergone a renaissance recently, making major progress in key domains such as vision, language, control, and decision-making. This has been due, in part, to cheap data and cheap compute resources, which have fit the natural strengths of deep learning. However, many defining characteristics of human intelligence, which developed under much different pressures, remain out of reach for current approaches. In particular, generalizing beyond one’s experiences—a hallmark of human intelligence from infancy—remains a formidable challenge for modern AI. The following is part position paper, part review, and part unification. We argue that combinatorial generalization must be a top priority for AI to achieve human-like abilities, and that structured representations and computations are key to realizing this objective. Just as biology uses nature and nurture cooperatively, we reject the false choice between “hand-engineering” and “end-to-end” learning, and instead advocate for an approach which benefits from their complementary strengths. We explore how using relational inductive biases within deep learning architectures can facilitate learning about entities, relations, and rules for composing them. We present a new building block for the AI toolkit with a strong relational inductive bias—the graph network—which generalizes and extends various approaches for neural networks that operate on graphs, and provides a straightforward interface for manipulating structured knowledge and producing structured behaviors. We discuss how graph networks can support relational reasoning and combinatorial generalization, laying the foundation for more sophisticated, interpretable, and flexible patterns of reasoning. As a companion to this paper, we have also released an open-source software library for building graph networks, with demonstrations of how to use them in practice.", "title": "" }, { "docid": "a33cf416cf48f67cd0a91bf3a385d303", "text": "Generative neural samplers are probabilistic models that implement sampling using feedforward neural networks: they take a random input vector and produce a sample from a probability distribution defined by the network weights. These models are expressive and allow efficient computation of samples and derivatives, but cannot be used for computing likelihoods or for marginalization. The generativeadversarial training method allows to train such models through the use of an auxiliary discriminative neural network. We show that the generative-adversarial approach is a special case of an existing more general variational divergence estimation approach. 
We show that any f-divergence can be used for training generative neural samplers. We discuss the benefits of various choices of divergence functions on training complexity and the quality of the obtained generative models.", "title": "" } ]
[ { "docid": "dbc66199d6873d990a8df18ce7adf01d", "text": "Facebook has rapidly become the most popular Social Networking Site (SNS) among faculty and students in higher education institutions in recent years. Due to the various interactive and collaborative features Facebook supports, it offers great opportunities for higher education institutions to support student engagement and improve different aspects of teaching and learning. To understand the social aspects of Facebook use among students and how they perceive using it for academic purposes, an exploratory survey has been distributed to 105 local and international students at a large public technology university in Malaysia. Results reveal consistent patterns of usage compared to what has been reported in literature reviews in relation to the intent of use and the current use for educational purposes. A comparison was conducted of male and female, international and local, postgraduate and undergraduate students respectively, using nonparametric tests. The results indicate that the students’ perception of using Facebook for academic purposes is not significantly related to students’ gender or students’ background; while it is significantly related to study level and students’ experience. Moreover, based on the overall results of the survey and literature reviews, the paper presents recommendations and suggestions for further research of social networking in a higher education context.", "title": "" }, { "docid": "107de9a30eb4a76385f125d3e857cc79", "text": "We define a simply typed, non-deterministic lambda-calculus where isomorphic types are equated. To this end, an equivalence relation is settled at the term level. We then provide a proof of strong normalisation modulo equivalence. Such a proof is a non-trivial adaptation of the reducibility method.", "title": "" }, { "docid": "ce1048eb76d48800b4e455b8e5d3342a", "text": "While it is true that successful implementation of an enterprise resource planning (ERP) system is a task of Herculean proportions, it is not impossible. If your organization is to reap the benefits of ERP, it must first develop a plan for success. But “prepare to see your organization reengineered, your staff disrupted, and your productivity drop before the payoff is realized.”1 Implementing ERP must be viewed and undertaken as a new business endeavor and a team mission, not just a software installation. Companies must involve all employees, and unconditionally and completely sell them on the concept of ERP for it to be a success.2 A successful implementation means involving, supervising, recognizing, and retaining those who have worked or will work closely with the system. Without a team attitude and total backing by everyone involved, an ERP implementation will end in less than an ideal situation.3 This was the situation for a soft drink bottler that tried to cut corners and did not recognize the importance of the people so heavily involved and depended on.", "title": "" }, { "docid": "8cf1694181142d08427fe8788b74b303", "text": "To simultaneously recover 3D shapes of non-rigid object and camera motions from 2D corresponding points is a difficult task in computer vision. This task is called Non-rigid Structure from motion(NRSfM). To solve this ill-posed problem, many existing methods rely on low rank assumption. However, the value of rank has to be accurately predefined because incorrect value can largely degrade the reconstruction performance. Unfortunately, these is no automatic solution to determine this value. 
In this paper, we present a self-expressive method that models 3D shapes with a sparse combination of other 3D shapes from the same structure. One of the biggest advantages is that it doesn't need the rank to be predefined. Also, unlike other learning-based methods, our method doesn't need learning step. Experimental results validate the efficiency of our method.", "title": "" }, { "docid": "ed80c1ad22dbf51bfb20351b3d7a2b8b", "text": "Three central problems in the recent literature on visual attention are reviewed. The first concerns the control of attention by top-down (or goal-directed) and bottom-up (or stimulus-driven) processes. The second concerns the representational basis for visual selection, including how much attention can be said to be location- or object-based. Finally, we consider the time course of attention as it is directed to one stimulus after another.", "title": "" }, { "docid": "4bdccdda47aea04c5877587daa0e8118", "text": "Recognizing text character from natural scene images is a challenging problem due to background interferences and multiple character patterns. Scene Text Character (STC) recognition, which generally includes feature representation to model character structure and multi-class classification to predict label and score of character class, mostly plays a significant role in word-level text recognition. The contribution of this paper is a complete performance evaluation of image-based STC recognition, by comparing different sampling methods, feature descriptors, dictionary sizes, coding and pooling schemes, and SVM kernels. We systematically analyze the impact of each option in the feature representation and classification. The evaluation results on two datasets CHARS74K and ICDAR2003 demonstrate that Histogram of Oriented Gradient (HOG) descriptor, soft-assignment coding, max pooling, and Chi-Square Support Vector Machines (SVM) obtain the best performance among local sampling based feature representations. To improve STC recognition, we apply global sampling feature representation. We generate Global HOG (GHOG) by computing HOG descriptor from global sampling. GHOG enables better character structure modeling and obtains better performance than local sampling based feature representations. The GHOG also outperforms existing methods in the two benchmark datasets.", "title": "" }, { "docid": "d3783bcc47ed84da2c54f5f536450a0c", "text": "In this paper, we present a new framework for large scale online kernel learning, making kernel methods efficient and scalable for large-scale online learning applications. Unlike the regular budget online kernel learning scheme that usually uses some budget maintenance strategies to bound the number of support vectors, our framework explores a completely different approach of kernel functional approximation techniques to make the subsequent online learning task efficient and scalable. Specifically, we present two different online kernel machine learning algorithms: (i) Fourier Online Gradient Descent (FOGD) algorithm that applies the random Fourier features for approximating kernel functions; and (ii) Nyström Online Gradient Descent (NOGD) algorithm that applies the Nyström method to approximate large kernel matrices. We explore these two approaches to tackle three online learning tasks: binary classification, multi-class classification, and regression. 
The encouraging results of our experiments on large-scale datasets validate the effectiveness and efficiency of the proposed algorithms, making them potentially more practical than the family of existing budget online kernel learning approaches.", "title": "" }, { "docid": "5fa9efcdb3b414b38784bd146f71fa3e", "text": "Successful fine-grained image classification methods learn subtle details between visually similar (sub-)classes, but the problem becomes significantly more challenging if the details are missing due to low resolution. Encouraged by the recent success of Convolutional Neural Network (CNN) architectures in image classification, we propose a novel resolution-aware deep model which combines convolutional image super-resolution and convolutional fine-grained classification into a single model in an end-to-end manner. Extensive experiments on multiple benchmarks demonstrate that the proposed model consistently performs better than conventional convolutional networks on classifying fine-grained object classes in low-resolution images.", "title": "" }, { "docid": "429ac6709131b648bb44a6ccaebe6a19", "text": "We highlight a practical yet rarely discussed problem in dialogue state tracking (DST), namely handling unknown slot values. Previous approaches generally assume predefined candidate lists and thus are not designed to output unknown values, especially when the spoken language understanding (SLU) module is absent as in many end-to-end (E2E) systems. We describe in this paper an E2E architecture based on the pointer network (PtrNet) that can effectively extract unknown slot values while still obtains state-of-the-art accuracy on the standard DSTC2 benchmark. We also provide extensive empirical evidence to show that tracking unknown values can be challenging and our approach can bring significant improvement with the help of an effective feature dropout technique.", "title": "" }, { "docid": "a00d7457ea96814a3ff878d2d0d0d09f", "text": "Machine Interpreters M. Anton Ertl1; ;y, David Gregg2, Andreas Krall1, and Bernd Paysan3 1Institut f ur Computersprachen, Technische Universit at Wien, Argentinierstra e 8, A-1040 Wien, Austria 2Department of Computer Science, Trinity College, Dublin 2, Ireland 3Stockmannstr. 14, D-81477 M unchen, Germany SUMMARY In a virtual machine interpreter, the code for each virtual machine instruction has similarities to code for other instructions. We present an interpreter generator that takes simple virtual machine instruction descriptions as input and generates C code for processing the instructions in several ways: execution, virtual machine code generation, disassembly, tracing, and pro ling. The generator is designed to support e cient interpreters: it supports threaded code, caching the top-of-stack item in a register, combining simple instructions into superinstructions, and other optimizations. We have used the generator to create interpreters for Forth and Java. The resulting interpreters are faster than other interpreters for the same languages and they are typically 2{10 times slower than code produced by native-code compilers. We also present results for the e ects of the individual optimizations supported by the generator.", "title": "" }, { "docid": "6e2d7dae0891a2f3a8f02fdb81af9dc6", "text": "Wireless Sensor Networks (WSNs) are charac-terized by multi-hop wireless connectivity, frequently changing network topology and need for efficient routing protocols. 
The purpose of this paper is to evaluate performance of routing protocol DSDV in wireless sensor network (WSN) scales regarding the End-to-End delay and throughput PDR with mobility factor .Routing protocols are a critical aspect to performance in mobile wireless networks and play crucial role in determining network performance in terms of packet delivery fraction, end-to-end delay and packet loss. Destination-sequenced distance vector (DSDV) protocol is a proactive protocol depending on routing tables which are maintained at each node. The routing protocol should detect and maintain optimal route(s) between source and destination nodes. In this paper, we present application of DSDV in WSN as extend to our pervious study to the design and impleme-ntation the details of the DSDV routing protocol in MANET using the ns-2 network simulator.", "title": "" }, { "docid": "7d0a7073733f8393478be44d820e89ae", "text": "Modeling user-item interaction patterns is an important task for personalized recommendations. Many recommender systems are based on the assumption that there exists a linear relationship between users and items while neglecting the intricacy and non-linearity of real-life historical interactions. In this paper, we propose a neural network based recommendation model (NeuRec) that untangles the complexity of user-item interactions and establish an integrated network to combine non-linear transformation with latent factors. We further design two variants of NeuRec: userbased NeuRec and item-based NeuRec, by focusing on different aspects of the interaction matrix. Extensive experiments on four real-world datasets demonstrated their superior performances on personalized ranking task.", "title": "" }, { "docid": "1861cbfefd392f662b350e70c60f3b6b", "text": "Text mining concerns looking for patterns in unstructured text. The related task of Information Extraction (IE) is about locating specific items in natural-language documents. This paper presents a framework for text mining, called DISCOTEX (Discovery from Text EXtraction), using a learned information extraction system to transform text into more structured data which is then mined for interesting relationships. The initial version of DISCOTEX integrates an IE module acquired by an IE learning system, and a standard rule induction module. In addition, rules mined from a database extracted from a corpus of texts are used to predict additional information to extract from future documents, thereby improving the recall of the underlying extraction system. Encouraging results are presented on applying these techniques to a corpus of computer job announcement postings from an Internet newsgroup.", "title": "" }, { "docid": "aef55dbadd2ae6509907b2632c88227a", "text": "In this paper we consider a new type of cryptographic scheme, which can decode concealed images without any cryptographic computations. The scheme is perfectly secure and very easy to implement. We extend it into a visual variant of the k out of n secret sharing problem, in which a dealer provides a transparency to each one of the n users; any k of them can see the image by stacking their transparencies, but any k 1 of them gain no information about it. A preliminary version of this paper appeared in Eurocrypt 94. y Dept. of Applied Mathematics and Computer Science, Weizmann Institute of Science, Rehovot 76100, Israel. E-mail: naor@wisdom.weizmann.ac.il. 
Research supported by an Alon Fellowship and a grant from the Israel Science Foundation administered by the Israeli Academy of Sciences. z Dept. of Applied Mathematics and Computer Science, Weizmann Institute of Science, Rehovot 76100, Israel. E-mail: shamir@wisdom.weizmann.ac.il.", "title": "" }, { "docid": "d1d862185a20e1f1efc7d3dc7ca8524b", "text": "In what ways do the online behaviors of wizards and ogres map to players’ actual leadership status in the offline world? What can we learn from players’ experience in Massively Multiplayer Online games (MMOGs) to advance our understanding of leadership, especially leadership in online settings (E-leadership)? As part of a larger agenda in the emerging field of empirically testing the ‘‘mapping’’ between the online and offline worlds, this study aims to tackle a central issue in the E-leadership literature: how have technology and technology mediated communications transformed leadership-diagnostic traits and behaviors? To answer this question, we surveyed over 18,000 players of a popular MMOG and also collected behavioral data of a subset of survey respondents over a four-month period. Motivated by leadership theories, we examined the connection between respondents’ offline leadership status and their in-game relationship-oriented and task-related-behaviors. Our results indicate that individuals’ relationship-oriented behaviors in the virtual world are particularly relevant to players’ leadership status in voluntary organizations, while their task-oriented behaviors are marginally linked to offline leadership status in voluntary organizations, but not in companies. 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "83525470a770a036e9c7bb737dfe0535", "text": "It is known that the performance of the i-vectors/PLDA based speaker verification systems is affected in the cases of short utterances and limited training data. The performance degradation appears because the shorter the utterance, the less reliable the extracted i-vector is, and because the total variability covariance matrix and the underlying PLDA matrices need a significant amount of data to be robustly estimated. Considering the “MIT Mobile Device Speaker Verification Corpus” (MIT-MDSVC) as a representative dataset for robust speaker verification tasks on limited amount of training data, this paper investigates which configuration and which parameters lead to the best performance of an i-vectors/PLDA based speaker verification. The i-vectors/PLDA based system achieved good performance only when the total variability matrix and the underlying PLDA matrices were trained with data belonging to the enrolled speakers. This way of training means that the system should be fully retrained when new enrolled speakers were added. The performance of the system was more sensitive to the amount of training data of the underlying PLDA matrices than to the amount of training data of the total variability matrix. Overall, the Equal Error Rate performance of the i-vectors/PLDA based system was around 1% below the performance of a GMM-UBM system on the chosen dataset. 
The paper presents at the end some preliminary experiments in which the utterances comprised in the CSTR VCTK corpus were used besides utterances from MIT-MDSVC for training the total variability covariance matrix and the underlying PLDA matrices.", "title": "" }, { "docid": "8e28f1561b3a362b2892d7afa8f2164c", "text": "Inference based techniques are one of the major approaches to analyze DNS data and detecting malicious domains. The key idea of inference techniques is to first define associations between domains based on features extracted from DNS data. Then, an inference algorithm is deployed to infer potential malicious domains based on their direct/indirect associations with known malicious ones. The way associations are defined is key to the effectiveness of an inference technique. It is desirable to be both accurate (i.e., avoid falsely associating domains with no meaningful connections) and with good coverage (i.e., identify all associations between domains with meaningful connections). Due to the limited scope of information provided by DNS data, it becomes a challenge to design an association scheme that achieves both high accuracy and good coverage. In this paper, we propose a new association scheme to identify domains controlled by the same entity. Our key idea is an indepth analysis of active DNS data to accurately separate public IPs from dedicated ones, which enables us to build high-quality associations between domains. Our scheme avoids the pitfall of naive approaches that rely on weak “co-IP” relationship of domains (i.e., two domains are resolved to the same IP) that results in low detection accuracy, and, meanwhile, identifies many meaningful connections between domains that are discarded by existing state-of-the-art approaches. Our experimental results show that the proposed association scheme not only significantly improves the domain coverage compared to existing approaches but also achieves better detection accuracy. Existing path-based inference algorithm is specifically designed for DNS data analysis. It is effective but computationally expensive. To further demonstrate the strength of our domain association scheme as well as improving inference efficiency, we investigate the effectiveness of combining our association scheme with the generic belief propagation algorithm. Through comprehensive experiments, we show that this approach offers significant efficiency and scalability improvement with only minor negative impact of detection accuracy, which suggests that such a combination could offer a good tradeoff for malicious domain detection in practice.", "title": "" }, { "docid": "b9733e699abaaedc380a45a3136f97da", "text": "Generally speaking, anti-computer forensics is a set of techniques used as countermeasures to digital forensic analysis. When put into information and data perspective, it is a practice of making it hard to understand or find. Typical example being when programming code is often encoded to protect intellectual property and prevent an attacker from reverse engineering a proprietary software program.", "title": "" }, { "docid": "c6ab3d07e068637082b88160ca2f4988", "text": "This paper focuses on the design of a real-time particle-swarm-optimization-based proportional-integral-differential (PSO-PID) control scheme for the levitated balancing and propulsive positioning of a magnetic-levitation (maglev) transportation system. 
The dynamic model of a maglev transportation system, including levitated electromagnets and a propulsive linear induction motor based on the concepts of mechanical geometry and motion dynamics, is first constructed. The control objective is to design a real-time PID control methodology via PSO gain selections and to directly ensure the stability of the controlled system without the requirement of strict constraints, detailed system information, and auxiliary compensated controllers despite the existence of uncertainties. The effectiveness of the proposed PSO-PID control scheme for the maglev transportation system is verified by numerical simulations and experimental results, and its superiority is indicated in comparison with PSO-PID in previous literature and conventional sliding-mode (SM) control strategies. With the proposed PSO-PID control scheme, the controlled maglev transportation system possesses the advantages of favorable control performance without chattering phenomena in SM control and robustness to uncertainties superior to fixed-gain PSO-PID control.", "title": "" }, { "docid": "aa4132b0d25e5e7208255a0e7d197b2b", "text": "Attacking fingerprint-based biometric systems by presenting fake fingers at the sensor could be a serious threat for unattended applications. This work introduces a new approach for discriminating fake fingers from real ones, based on the analysis of skin distortion. The user is required to move the finger while pressing it against the scanner surface, thus deliberately exaggerating the skin distortion. Novel techniques for extracting, encoding and comparing skin distortion information are formally defined and systematically evaluated over a test set of real and fake fingers. The proposed approach is privacy friendly and does not require additional expensive hardware besides a fingerprint scanner capable of capturing and delivering frames at proper rate. The experimental results indicate the new approach to be a very promising technique for making fingerprint recognition systems more robust against fake-finger-based spoofing attempts", "title": "" } ]
scidocsrr
326c811da9711e4af4af6f549ad5f22c
Geometric Camera Calibration Using Circular Control Points
[ { "docid": "1b8d9c6a498821823321572a5055ecc3", "text": "The objective of stereo camera calibration is to estimate the internal and external parameters of each camera. Using these parameters, the 3-D position of a point in the scene, which is identified and matched in two stereo images, can be determined by the method of triangulation. In this paper, we present a camera model that accounts for major sources of camera distortion, namely, radial, decentering, and thin prism distortions. The proposed calibration procedure consists of two steps. In the first step, the calibration parameters are estimated using a closed-form solution based on a distortion-free camera model. In the second step, the parameters estimated in the first step are improved iteratively through a nonlinear optimization, taking into account camera distortions. According to minimum variance estimation, the objective function to be minimized is the mean-square discrepancy between the observed image points and their inferred image projections computed with the estimated calibration parameters. We introduce a type of measure that can be used to directly evaluate the performance of calibration and compare calibrations among different systems. The validity and performance of our calibration procedure are tested with both synthetic data and real images taken by teleand wide-angle lenses. The results consistently show significant improvements over less complete camera models.", "title": "" } ]
[ { "docid": "4eb205978a12b780dc26909bee0eebaa", "text": "This paper introduces CPE, the CIRCE Plugin for Eclipse. The CPE adds to the open-source development environment Eclipse the ability of writing and analysing software requirements written in natural language. Models of the software described by the requirements can be examined on-line during the requirements writing process. Initial UML models and skeleton Java code can be generated from the requirements, and imported into Eclipse for further editing and analysis.", "title": "" }, { "docid": "154e25caf9eb954bb7658304dd37a8a2", "text": "RFID is an automatic identification technology that enables tracking of people and objects. Both identity and location are generally key information for indoor services. An obvious and interesting method to obtain these two types of data is to localize RFID tags attached to devices or objects or carried by people. However, signals in indoor environments are generally harshly impaired and tags have very limited capabilities which pose many challenges for positioning them. In this work, we propose a classification and survey the current state-of-art of RFID localization by first presenting this technology and positioning principles. Then, we explain and classify RFID localization techniques. Finally, we discuss future trends in this domain.", "title": "" }, { "docid": "93a6c94a3ecb3fcaf363b07c077e5579", "text": "The state-of-the-art advancement in wind turbine condition monitoring and fault diagnosis for the recent several years is reviewed. Since the existing surveys on wind turbine condition monitoring cover the literatures up to 2006, this review aims to report the most recent advances in the past three years, with primary focus on gearbox and bearing, rotor and blades, generator and power electronics, as well as system-wise turbine diagnosis. There are several major trends observed through the survey. Due to the variable-speed nature of wind turbine operation and the unsteady load involved, time-frequency analysis tools such as wavelets have been accepted as a key signal processing tool for such application. Acoustic emission has lately gained much more attention in order to detect incipient failures because of the low-speed operation for wind turbines. There has been an increasing trend of developing model based reasoning algorithms for fault detection and isolation as cost-effective approach for wind turbines as relatively complicated system. The impact of unsteady aerodynamic load on the robustness of diagnostic signatures has been notified. Decoupling the wind load from condition monitoring decision making will reduce the associated down-time cost.", "title": "" }, { "docid": "8e3d30eebcd6e255be682157f6f2ccd5", "text": "X-ray crystallography shows the myosin cross-bridge to exist in two conformations, the beginning and end of the \"power stroke.\" A long lever-arm undergoes a 60 degrees to 70 degrees rotation between the two states. This rotation is coupled with changes in the active site (OPEN to CLOSED) and phosphate release. Actin binding mediates the transition from CLOSED to OPEN. Kinetics shows that the binding of myosin to actin is a two-step process which affects ATP and ADP affinity. The structural basis of these effects is not explained by the presently known conformers of myosin. Therefore, other states of the myosin cross-bridge must exist. Moreover, cryoelectronmicroscopy has revealed other angles of the cross-bridge lever arm induced by ADP binding. 
These structural states are presently being characterized by site-directed mutagenesis coupled with kinetic analysis.", "title": "" }, { "docid": "dd557664d20f17487425de206f57cbc5", "text": "This paper presents an ultra low-voltage, rail-to-rail input/output stage Operational Transconductance Amplifier (OTA) which uses quasi floating gate input transistors. This OTA works with ±0.3v and consumes 57µw. It has near zero variation in small/large-signal behavior (i.e. transconductance and slew rate) in whole range of the common mode voltage of input signals. Using source degeneration technique for linearity improvement, make it possible to obtain −42.7 dB, HD3 for 0.6vP-P sine wave input signal with the frequency of 1MHz. The used feedback amplifier in input stage also enhances common mode rejection ratio (CMRR), such that in DC, CMRR is 146 dB. OTA is used for implementation of a wide-tunable third-order elliptic filter with 237 KHz–2.18 MHz cutoff frequencies. Proposed OTA and filter have been simulated in 0.18µm TSMC CMOS technology with Hspice.", "title": "" }, { "docid": "a278f1c4f6cb1b0e1bda447f70cd7749", "text": "A digitally controlled oscillator (DCO) to be used in an all-digital phase-locked loop (PLL) is presented which offers a wide operating frequency range, a monotonic gain curve, and compensation for instantaneous supply voltage variation. The monotonic and wide oscillation frequency is achieved by interpolating at the fine tuning block between two nodes selected from a coarse delay line. Supply voltage compensation is obtained by dynamically adjusting the strength of the feedback latch of the delay cell in response to the change of the supply voltage.", "title": "" }, { "docid": "0957b0617894561ea6d6e85c43cfb933", "text": "We consider the online metric matching problem. In this prob lem, we are given a graph with edge weights satisfying the triangl e inequality, andk vertices that are designated as the right side of the matchin g. Over time up tok requests arrive at an arbitrary subset of vertices in the gra ph and each vertex must be matched to a right side vertex immediately upon arrival. A vertex cannot be rematched to another vertex once it is matched. The goal is to minimize the total weight of the matching. We give aO(log k) competitive randomized algorithm for the problem. This improves upon the best known guarantee of O(log k) due to Meyerson, Nanavati and Poplawski [19]. It is well known that no deterministic al gorithm can have a competitive less than 2k − 1, and that no randomized algorithm can have a competitive ratio of less than l k.", "title": "" }, { "docid": "329487a07d4f71e30b64da5da1c6684a", "text": "The purpose was to investigate the effect of 25 weeks heavy strength training in young elite cyclists. Nine cyclists performed endurance training and heavy strength training (ES) while seven cyclists performed endurance training only (E). ES, but not E, resulted in increases in isometric half squat performance, lean lower body mass, peak power output during Wingate test, peak aerobic power output (W(max)), power output at 4 mmol L(-1)[la(-)], mean power output during 40-min all-out trial, and earlier occurrence of peak torque during the pedal stroke (P < 0.05). ES achieved superior improvements in W(max) and mean power output during 40-min all-out trial compared with E (P < 0.05). The improvement in 40-min all-out performance was associated with the change toward achieving peak torque earlier in the pedal stroke (r = 0.66, P < 0.01). 
Neither of the groups displayed alterations in VO2max or cycling economy. In conclusion, heavy strength training leads to improved cycling performance in elite cyclists as evidenced by a superior effect size of ES training vs E training on relative improvements in power output at 4 mmol L(-1)[la(-)], peak power output during 30-s Wingate test, W(max), and mean power output during 40-min all-out trial.", "title": "" }, { "docid": "b6983a5ccdac40607949e2bfe2beace2", "text": "A focus on novel, confirmatory, and statistically significant results leads to substantial bias in the scientific literature. One type of bias, known as \"p-hacking,\" occurs when researchers collect or select data or statistical analyses until nonsignificant results become significant. Here, we use text-mining to demonstrate that p-hacking is widespread throughout science. We then illustrate how one can test for p-hacking when performing a meta-analysis and show that, while p-hacking is probably common, its effect seems to be weak relative to the real effect sizes being measured. This result suggests that p-hacking probably does not drastically alter scientific consensuses drawn from meta-analyses.", "title": "" }, { "docid": "7ef14aed74249f10adffe2cc49475229", "text": "We prove that idealised discriminative Bayesian neural networks, capturing perfect epistemic uncertainty, cannot have adversarial examples: Techniques for crafting adversarial examples will necessarily fail to generate perturbed images which fool the classifier. This suggests why MC dropout-based techniques have been observed to be fairly effective against adversarial examples. We support our claims mathematically and empirically. We experiment with HMC on synthetic data derived from MNIST for which we know the ground truth image density, showing that near-perfect epistemic uncertainty correlates to density under image manifold, and that adversarial images lie off the manifold. Using our new-found insights we suggest a new attack for MC dropout-based models by looking for imperfections in uncertainty estimation, and also suggest a mitigation. Lastly, we demonstrate our mitigation on a cats-vs-dogs image classification task with a VGG13 variant.", "title": "" }, { "docid": "aa3178c1b4d7ae8f9e3e97fabea3d6a1", "text": "This study continues landmark research, by Katz in 1984 and Hartland and Londoner in 1997, on characteristics of effective teaching by nurse anesthesia clinical instructors. Based on the literature review, there is a highlighted gap in research evaluating current teaching characteristics of clinical nurse anesthesia instructors that are valuable and effective from an instructor's and student's point of view. This study used a descriptive, quantitative research approach to assess (1) the importance of 24 characteristics (22 effective clinical teaching characteristics identified by Katz, and 2 items added for this study) of student registered nurse anesthetists (SRNAs) and clinical preceptors, who are Certified Registered Nurse Anesthetists, and (2) the congruence between the student and preceptor perceptions. A Likert-scale survey was used to assess the importance of each characteristic. The study was conducted at a large Midwestern hospital. The findings of this study did not support the results found by Hartland and Londoner based on the Friedman 2-way analysis. The rankings of the 24 characteristics by the students and the clinical preceptors in the current research were not significantly congruent based on the Kendall coefficient analysis. 
The results can help clinical preceptors increase their teaching effectiveness and generate effective learning environments for SRNAs.", "title": "" }, { "docid": "3910a3317ea9ff4ea6c621e562b1accc", "text": "Compaction of agricultural soils is a concern for many agricultural soil scientists and farmers since soil compaction, due to heavy field traffic, has resulted in yield reduction of most agronomic crops throughout the world. Soil compaction is a physical form of soil degradation that alters soil structure, limits water and air infiltration, and reduces root penetration in the soil. Consequences of soil compaction are still underestimated. A complete understanding of processes involved in soil compaction is necessary to meet the future global challenge of food security. We review here the advances in understanding, quantification, and prediction of the effects of soil compaction. We found the following major points: (1) When a soil is exposed to a vehicular traffic load, soil water contents, soil texture and structure, and soil organic matter are the three main factors which determine the degree of compactness in that soil. (2) Soil compaction has direct effects on soil physical properties such as bulk density, strength, and porosity; therefore, these parameters can be used to quantify the soil compactness. (3) Modified soil physical properties due to soil compaction can alter elements mobility and change nitrogen and carbon cycles in favour of more emissions of greenhouse gases under wet conditions. (4) Severe soil compaction induces root deformation, stunted shoot growth, late germination, low germination rate, and high mortality rate. (5) Soil compaction decreases soil biodiversity by decreasing microbial biomass, enzymatic activity, soil fauna, and ground flora. (6) Boussinesq equations and finite element method models, that predict the effects of the soil compaction, are restricted to elastic domain and do not consider existence of preferential paths of stress propagation and localization of deformation in compacted soils. (7) Recent advances in physics of granular media and soil mechanics relevant to soil compaction should be used to progress in modelling soil compaction.", "title": "" }, { "docid": "9faf67646394dfedfef1b6e9152d9cf6", "text": "Acoustic shooter localization systems are being rapidly deployed in the field. However, these are standalone systems---either wearable or vehicle-mounted---that do not have networking capability even though the advantages of widely distributed sensing for locating shooters have been demonstrated before. The reason for this is that certain disadvantages of wireless network-based prototypes made them impractical for the military. The system that utilized stationary single-channel sensors required many sensor nodes, while the multi-channel wearable version needed to track the absolute self-orientation of the nodes continuously, a notoriously hard task. This paper presents an approach that overcomes the shortcomings of past approaches. Specifically, the technique requires as few as five single-channel wireless sensors to provide accurate shooter localization and projectile trajectory estimation. Caliber estimation and weapon classification are also supported. In addition, a single node alone can provide reliable miss distance and range estimates based on a single shot as long as a reasonable assumption holds. 
The main contribution of the work and the focus of this paper is the novel sensor fusion technique that works well with a limited number of observations. The technique is thoroughly evaluated using an extensive shot library.", "title": "" }, { "docid": "42392af599ce65f38748420353afc534", "text": "An innovative technology for the mass production ofstretchable printed circuit boards (SCBs) will bepresented in this paper. This technology makes itpossible for the first time to really integrate fine pitch,high performance electronic circuits easily into textilesand so may be the building block for a totally newgeneration of wearable electronic systems. Anoverview of the technology will be given andsubsequently a real system using SCB technology ispresented.", "title": "" }, { "docid": "4a30da8f413a0f51db3e0c65de6551c2", "text": "INTRODUCTION\nApproximately 30% of people in rural communities report a sexual assault within their lifetime. The medico-legal response to a report of sexual assault may leave a significant impact on the victim. The purpose of this article is to examine the experiences of legal providers from rural communities, who assist victims of sexual assault.\n\n\nMETHODS\nA sample of expert participants were interviewed and included seven commonwealth attorneys (the state prosecuting attorneys in Virginia), six sheriffs or police investigators, and five victim-witness advocates, all from rural areas of Virginia. Qualitative data were collected by in-person interviews with a hermeneutic-phenomenological format.\n\n\nRESULTS\nThe experts interviewed described prosecution difficulties related to evidence collection and unrealistic jury expectations. These legal experts also shared frustrations with limitations in local services and limitations in the experiences of local sexual assault nurse examiners.\n\n\nCONCLUSIONS\nThis study provides a context for understanding the rural medico-legal response to sexual assault and for the importance of the role of the sexual assault nurse examiner to rural populations. Interdisciplinary collaboration is key to improving prosecution outcomes as well as victim support after reporting.", "title": "" }, { "docid": "f740191f7c6d27811bb09bf40e8da021", "text": "Collaboration Engineering is an approach for the design and deployment of repeatable collaboration processes that can be executed by practitioners without the support of collaboration professionals such as facilitators. A critical challenge in Collaboration Engineering concerns how the design activities have to be executed and which design choices have to be made to create a process design. We report on a four year design science study, in which we developed a design approach for Collaboration Engineering that", "title": "" }, { "docid": "dd50ef22ed75db63254df4dc369d6891", "text": "—Speech Recognition by computer is a process where speech signals are automatically converted into the corresponding sequence of words in text. When the training and testing conditions are not similar, statistical speech recognition algorithms suffer from severe degradation in recognition accuracy. So we depend on intelligent and recognizable sounds for common communications. In this research, word inputs are recognized by the system and executed in the form of text corresponding to the input word. In this paper, we propose a hybrid model by using a fully connected hidden layer between the input state nodes and the output. 
We have proposed a new objective function for the neural network using a combined framework of statistical and neural network based classifiers. We have used the hybrid model of Radial Basis Function and the Pattern Matching method. The system was trained by Indian English word consisting of 50 words uttered by 20 male speakers and 20 female speakers. The test samples comprised 30 words spoken by a different set of 20 male speakers and 20 female speakers. The recognition accuracy is found to be 91% which is well above the previous results.", "title": "" }, { "docid": "bbd378407abb1c2a9a5016afee40c385", "text": "One approach to the generation of natural-sounding synthesized speech waveforms is to select and concatenate units from a large speech database. Units (in the current work, phonemes) are selected to produce a natural realisation of a target phoneme sequence predicted from text which is annotated with prosodic and phonetic context information. We propose that the units in a synthesis database can be considered as a state transition network in which the state occupancy cost is the distance between a database unit and a target, and the transition cost is an estimate of the quality of concatenation of two consecutive units. This framework has many similarities to HMM-based speech recognition. A pruned Viterbi search is used to select the best units for synthesis from the database. This approach to waveform synthesis permits training from natural speech: two methods for training from speech are presented which provide weights which produce more natural speech than can be obtained by hand-tuning.", "title": "" }, { "docid": "9a12ec03e4521a33a7e76c0c538b6b43", "text": "Sparse representation of information provides a powerful means to perform feature extraction on high-dimensional data and is of broad interest for applications in signal processing, computer vision, object recognition and neurobiology. Sparse coding is also believed to be a key mechanism by which biological neural systems can efficiently process a large amount of complex sensory data while consuming very little power. Here, we report the experimental implementation of sparse coding algorithms in a bio-inspired approach using a 32 × 32 crossbar array of analog memristors. This network enables efficient implementation of pattern matching and lateral neuron inhibition and allows input data to be sparsely encoded using neuron activities and stored dictionary elements. Different dictionary sets can be trained and stored in the same system, depending on the nature of the input signals. Using the sparse coding algorithm, we also perform natural image processing based on a learned dictionary.", "title": "" }, { "docid": "50875a63d0f3e1796148d809b5673081", "text": "Coreference resolution seeks to find the mentions in text that refer to the same real-world entity. This task has been well-studied in NLP, but until recent years, empirical results have been disappointing. Recent research has greatly improved the state-of-the-art. In this review, we focus on five papers that represent the current state-ofthe-art and discuss how they relate to each other and how these advances will influence future work in this area.", "title": "" } ]
scidocsrr
5e03f18e46e0b61ae92ab097247f8bee
Learning 3D Object Categories by Looking Around Them
[ { "docid": "7af26168ae1557d8633a062313d74b78", "text": "This paper addresses the problem of estimating the depth map of a scene given a single RGB image. We propose a fully convolutional architecture, encompassing residual learning, to model the ambiguous mapping between monocular images and depth maps. In order to improve the output resolution, we present a novel way to efficiently learn feature map up-sampling within the network. For optimization, we introduce the reverse Huber loss that is particularly suited for the task at hand and driven by the value distributions commonly present in depth maps. Our model is composed of a single architecture that is trained end-to-end and does not rely on post-processing techniques, such as CRFs or other additional refinement steps. As a result, it runs in real-time on images or videos. In the evaluation, we show that the proposed model contains fewer parameters and requires fewer training data than the current state of the art, while outperforming all approaches on depth estimation. Code and models are publicly available.", "title": "" }, { "docid": "16a5313b414be4ae740677597291d580", "text": "We contribute a large scale database for 3D object recognition, named ObjectNet3D, that consists of 100 categories, 90,127 images, 201,888 objects in these images and 44,147 3D shapes. Objects in the 2D images in our database are aligned with the 3D shapes, and the alignment provides both accurate 3D pose annotation and the closest 3D shape annotation for each 2D object. Consequently, our database is useful for recognizing the 3D pose and 3D shape of objects from 2D images. We also provide baseline experiments on four tasks: region proposal generation, 2D object detection, joint 2D detection and 3D object pose estimation, and image-based 3D shape retrieval, which can serve as baselines for future research using our database. Our database is available online at http://cvgl.stanford.edu/projects/objectnet3d.", "title": "" }, { "docid": "060cf7fd8a97c1ddf852373b63fe8ae1", "text": "Object viewpoint estimation from 2D images is an essential task in computer vision. However, two issues hinder its progress: scarcity of training data with viewpoint annotations, and a lack of powerful features. Inspired by the growing availability of 3D models, we propose a framework to address both issues by combining render-based image synthesis and CNNs (Convolutional Neural Networks). We believe that 3D models have the potential in generating a large number of images of high variation, which can be well exploited by deep CNN with a high learning capacity. Towards this goal, we propose a scalable and overfit-resistant image synthesis pipeline, together with a novel CNN specifically tailored for the viewpoint estimation task. Experimentally, we show that the viewpoint estimation from our pipeline can significantly outperform state-of-the-art methods on PASCAL 3D+ benchmark.", "title": "" }, { "docid": "4d7cbe7f5e854028277f0120085b8977", "text": "In this paper we formulate structure from motion as a learning problem. We train a convolutional network end-to-end to compute depth and camera motion from successive, unconstrained image pairs. The architecture is composed of multiple stacked encoder-decoder networks, the core part being an iterative network that is able to improve its own predictions. The network estimates not only depth and motion, but additionally surface normals, optical flow between the images and confidence of the matching. 
A crucial component of the approach is a training loss based on spatial relative differences. Compared to traditional two-frame structure from motion methods, results are more accurate and more robust. In contrast to the popular depth-from-single-image networks, DeMoN learns the concept of matching and, thus, better generalizes to structures not seen during training.", "title": "" } ]
[ { "docid": "46a4e4dbcb9b6656414420a908b51cc5", "text": "We review Bacry and Lévy-Leblond’s work on possible kinematics as applied to 2-dimensional spacetimes, as well as the nine types of 2-dimensional Cayley–Klein geometries, illustrating how the Cayley–Klein geometries give homogeneous spacetimes for all but one of the kinematical groups. We then construct a two-parameter family of Clifford algebras that give a unified framework for representing both the Lie algebras as well as the kinematical groups, showing that these groups are true rotation groups. In addition we give conformal models for these spacetimes.", "title": "" }, { "docid": "2b7d91c38a140628199cbdbee65c008a", "text": "Edges in man-made environments, grouped according to vanishing point directions, provide single-view constraints that have been exploited before as a precursor to both scene understanding and camera calibration. A Bayesian approach to edge grouping was proposed in the \"Manhattan World\" paper by Coughlan and Yuille, where they assume the existence of three mutually orthogonal vanishing directions in the scene. We extend the thread of work spawned by Coughlan and Yuille in several significant ways. We propose to use the expectation maximization (EM) algorithm to perform the search over all continuous parameters that influence the location of the vanishing points in a scene. Because EM behaves well in high-dimensional spaces, our method can optimize over many more parameters than the exhaustive and stochastic algorithms used previously for this task. Among other things, this lets us optimize over multiple groups of orthogonal vanishing directions, each of which induces one additional degree of freedom. EM is also well suited to recursive estimation of the kind needed for image sequences and/or in mobile robotics. We present experimental results on images of \"Atlanta worlds\", complex urban scenes with multiple orthogonal edge-groups, that validate our approach. We also show results for continuous relative orientation estimation on a mobile robot.", "title": "" }, { "docid": "2084a38c285ebfb2d5e40e8667414d0d", "text": "Differential Evolution (DE) algorithm is a new heuristic approach mainly having three advantages; finding the true global minimum regardless of the initial parameter values, fast convergence, and using few control parameters. DE algorithm is a population based algorithm like genetic algorithms using similar operators; crossover, mutation and selection. In this work, we have compared the performance of DE algorithm to that of some other well known versions of genetic algorithms: PGA, Grefensstette, Eshelman. In simulation studies, De Jong’s test functions have been used. From the simulation results, it was observed that the convergence speed of DE is significantly better than genetic algorithms. Therefore, DE algorithm seems to be a promising approach for engineering optimization problems.", "title": "" }, { "docid": "ca6ae788fc63563e39e1cb611dbdd8c5", "text": "STATL is an extensible state/transition-based attack desc ription language designed to support intrusion detection. The language allows one to describe computer pen trations as sequences of actions that an attacker performs to compromise a computer system. A STATL descripti on of an attack scenario can be used by an intrusion detection system to analyze a stream of events and de tect possible ongoing intrusions. 
Since intrusion detection is performed in different domains (i.e., the network or the hosts) and in different operating environments (e.g., Linux, Solaris, or Windows NT), it is useful to have an extensible language that can be easily tailored to different target environments. STATL defines domain-independent features of attack scenarios and provides constructs for extending the language to describe attacks in particular domains and environments. The STATL language has been successfully used in describing both network-based and host-based attacks, and it has been tailored to very different environments, e.g., Sun Microsystems’ Solaris and Microsoft’s Windows NT. An implementation of the runtime support for the STATL language has been developed and a toolset of intrusion detection systems based on STATL has been implemented. The toolset was used in a recent intrusion detection evaluation effort, delivering very favorable results. This paper presents the details of the STATL syntax and its semantics. Real examples from both the host and network-based extensions of the language are also presented.", "title": "" }, { "docid": "4bbcaa76b20afecc8e6002d155acf23e", "text": "We study the problem of learning mixtures of distributions, a natural formalization of clustering. A mixture of distributions is a collection of distributions D = {D1, . . . , DT}, and mixing weights {w1, . . . , wT} such that", "title": "" }, { "docid": "8cd8577a70729d03c1561df6a1fcbdbb", "text": "Quantum computing is a new computational paradigm created by reformulating information and computation in a quantum mechanical framework [30, 27]. Since the laws of physics appear to be quantum mechanical, this is the most relevant framework to consider when considering the fundamental limitations of information processing. Furthermore, in recent decades we have seen a major shift from just observing quantum phenomena to actually controlling quantum mechanical systems. We have seen the communication of quantum information over long distances, the “teleportation” of quantum information, and the encoding and manipulation of quantum information in many different physical media. We still appear to be a long way from the implementation of a large-scale quantum computer, however it is a serious goal of many of the world’s leading physicists, and progress continues at a fast pace. In parallel with the broad and aggressive program to control quantum mechanical systems with increased precision, and to control and interact a larger number of subsystems, researchers have also been aggressively pushing the boundaries of what useful tasks one could perform with quantum mechanical devices. These in-", "title": "" }, { "docid": "46d20d0330aaaf22418c53c715d78631", "text": "s, Cochrane Central Register of Controlled Trials and Database of Systemic Reviews, Database of Abstracts of Effects, ACP Journal Club, and OTseeker. Experts such as librarians have been used. However, there is no mention of efforts in relation to unpublished research, but abstracts of 950 articles were scanned, which appears to be a sufficient amount.", "title": "" }, { "docid": "8171fc4d3b47ed79915f98269bef3c4d", "text": "The purpose of this study was to investigate the effects of loaded and unloaded plyometric training strategies on speed and power performance of elite young soccer players. 
Twenty-three under-17 male soccer players (age: 15.9 ± 1.2 years, height: 178.3 ± 8.1 cm, body-mass (BM): 68.1 ± 9.3 kg) from the same club took part in this study. The athletes were pair-matched in two training groups: loaded vertical and horizontal jumps using an haltere type handheld with a load of 8% of the athletes' body mass (LJ; n = 12) and unloaded vertical and horizontal plyometrics (UJ; n = 11). Sprinting speeds at 5-, 10-, and 20-m, mean propulsive power (MPP) relative to the players' BM in the jump squat exercise, and performance in the squat jump (SJ) and countermovement jump (CMJ) were assessed pre- and post-training period. During the experimental period, soccer players performed 12 plyometric training sessions across a 6-week preseason period. Magnitude based inferences and standardized differences were used for statistical analysis. A very likely increase in the vertical jumps was observed for the LJ group (99/01/00 and 98/02/00 for SJ and CMJ, respectively). In the UJ group a likely increase was observed for both vertical jumps (83/16/01 and 90/10/00, for SJ and CMJ, respectively). An almost certainly decrease in the sprinting velocities along the 20-m course were found in the LJ group (00/00/100 for all split distances tested). Meanwhile, in the UJ likely to very likely decreases were observed for all sprinting velocities tested (03/18/79, 01/13/86, and 00/04/96, for velocities in 5-, 10-, and 20-m, respectively). No meaningful differences were observed for the MPP in either training group (11/85/04 and 37/55/08 for LJ and UJ, respectively). In summary, under-17 professional soccer players increased jumping ability after a 6-week preseason training program, using loaded or unloaded jumps. Despite these positive adaptations, both plyometric strategies failed to produce worthwhile improvements in maximal speed and power performances, which is possible related to the interference of concurrent training effects. New training strategies should be developed to ensure adequate balance between power and endurance loads throughout short (and high-volume) soccer preseasons.", "title": "" }, { "docid": "463543546eeca427eb348df6c019c986", "text": "Blockchains have recently generated explosive interest from both academia and industry, with many proposed applications. But descriptions of many these proposals are more visionary projections than realizable proposals, and even basic definitions are often missing. We define \"blockchain\" and \"blockchain network\", and then discuss two very different, well known classes of blockchain networks: cryptocurrencies and Git repositories. We identify common primitive elements of both and use them to construct a framework for explicitly articulating what characterizes blockchain networks. The framework consists of a set of questions that every blockchain initiative should address at the very outset. It is intended to help one decide whether or not blockchain is an appropriate approach to a particular application, and if it is, to assist in its initial design stage.", "title": "" }, { "docid": "89438b3b2a78c54a44236b720940c8f2", "text": "In Process-Aware Information Systems, business processes are often modeled in an explicit way. Roughly speaking, the available business process modeling languages can be divided into two groups. Languages from the first group are preferred by academic people but shunned by business people, and include Petri nets and process algebras. 
These academic languages have a proper formal semantics, which allows the corresponding academic models to be verified in a formal way. Languages from the second group are preferred by business people but disliked by academic people, and include BPEL, BPMN, and EPCs. These business languages often lack any proper semantics, which often leads to debates on how to interpret certain business models. Nevertheless, business models are used in practice, whereas academic models are hardly used. To be able to use, for example, the abundance of Petri net verification techniques on business models, we need to be able to transform these models to Petri nets. In this paper, we investigate a number of Petri net transformations that already exist. For every transformation, we investigate the transformation itself, the constructs in the business models that are problematic for the transformation and the main applications for the transformation.", "title": "" }, { "docid": "570e6b3f853c4e774c2ffce3b2122479", "text": "Given a repeatedly issued query and a document with a not-yet-confirmed potential to satisfy the users' needs, a search system should place this document on a high position in order to gather user feedback and obtain a more confident estimate of the document utility. On the other hand, the main objective of the search system is to maximize expected user satisfaction over a rather long period, what requires showing more relevant documents on average. The state-of-the-art approaches to solving this exploration-exploitation dilemma rely on strongly simplified settings making these approaches infeasible in practice. We improve the most flexible and pragmatic of them to handle some actual practical issues. The first one is utilizing prior information about queries and documents, the second is combining bandit-based learning approaches with a default production ranking algorithm. We show experimentally that our framework enables to significantly improve the ranking of a leading commercial search engine.", "title": "" }, { "docid": "f6d9efb7cfee553bc02a5303a86fd626", "text": "OBJECTIVE\nTo perform a cross-cultural adaptation of the Portuguese version of the Maslach Burnout Inventory for students (MBI-SS), and investigate its reliability, validity and cross-cultural invariance.\n\n\nMETHODS\nThe face validity involved the participation of a multidisciplinary team. Content validity was performed. The Portuguese version was completed in 2009, on the internet, by 958 Brazilian and 556 Portuguese university students from the urban area. Confirmatory factor analysis was carried out using as fit indices: the χ²/df, the Comparative Fit Index (CFI), the Goodness of Fit Index (GFI) and the Root Mean Square Error of Approximation (RMSEA). To verify the stability of the factor solution according to the original English version, cross-validation was performed in 2/3 of the total sample and replicated in the remaining 1/3. Convergent validity was estimated by the average variance extracted and composite reliability. The discriminant validity was assessed, and the internal consistency was estimated by the Cronbach's alpha coefficient. Concurrent validity was estimated by the correlational analysis of the mean scores of the Portuguese version and the Copenhagen Burnout Inventory, and the divergent validity was compared to the Beck Depression Inventory. 
The invariance of the model between the Brazilian and the Portuguese samples was assessed.\n\n\nRESULTS\nThe three-factor model of Exhaustion, Disengagement and Efficacy showed good fit (c 2/df = 8.498, CFI = 0.916, GFI = 0.902, RMSEA = 0.086). The factor structure was stable (λ:χ²dif = 11.383, p = 0.50; Cov: χ²dif = 6.479, p = 0.372; Residues: χ²dif = 21.514, p = 0.121). Adequate convergent validity (VEM = 0.45;0.64, CC = 0.82;0.88), discriminant (ρ² = 0.06;0.33) and internal consistency (α = 0.83;0.88) were observed. The concurrent validity of the Portuguese version with the Copenhagen Inventory was adequate (r = 0.21, 0.74). The assessment of the divergent validity was impaired by the approach of the theoretical concept of the dimensions Exhaustion and Disengagement of the Portuguese version with the Beck Depression Inventory. Invariance of the instrument between the Brazilian and Portuguese samples was not observed (λ:χ²dif = 84.768, p<0.001; Cov: χ²dif = 129.206, p < 0.001; Residues: χ²dif = 518.760, p < 0.001).\n\n\nCONCLUSIONS\nThe Portuguese version of the Maslach Burnout Inventory for students showed adequate reliability and validity, but its factor structure was not invariant between the countries, indicating the absence of cross-cultural stability.", "title": "" }, { "docid": "5455e7d53e6de4cbe97cbcdf6eea9806", "text": "OBJECTIVE\nTo evaluate the clinical and radiological results in the surgical treatment of moderate and severe hallux valgus by performing percutaneous double osteotomy.\n\n\nMATERIAL AND METHOD\nA retrospective study was conducted on 45 feet of 42 patients diagnosed with moderate-severe hallux valgus, operated on in a single centre and by the same surgeon from May 2009 to March 2013. Two patients were lost to follow-up. Clinical and radiological results were recorded.\n\n\nRESULTS\nAn improvement from 48.14 ± 4.79 points to 91.28 ± 8.73 points was registered using the American Orthopedic Foot and Ankle Society (AOFAS) scale. A radiological decrease from 16.88 ± 2.01 to 8.18 ± 3.23 was observed in the intermetatarsal angle, and from 40.02 ± 6.50 to 10.51 ± 6.55 in hallux valgus angle. There was one case of hallux varus, one case of non-union, a regional pain syndrome type I, an infection that resolved with antibiotics, and a case of loosening of the osteosynthesis that required an open surgical refixation.\n\n\nDISCUSSION\nPercutaneous distal osteotomy of the first metatarsal when performed as an isolated procedure, show limitations when dealing with cases of moderate and severe hallux valgus. The described technique adds the advantages of minimally invasive surgery by expanding applications to severe deformities.\n\n\nCONCLUSIONS\nPercutaneous double osteotomy is a reproducible technique for correcting severe deformities, with good clinical and radiological results with a complication rate similar to other techniques with the advantages of shorter surgical times and less soft tissue damage.", "title": "" }, { "docid": "2b9b7b218e112447fa4cdd72085d3916", "text": "A 48-year-old female patient presented with gigantomastia. The sternal notch-nipple distance was 55 cm for the right breast and 50 cm for the left. Vertical mammaplasty based on the superior pedicle was performed. The resected tissue weighed 3400 g for the right breast and 2800 g for the left breast. The outcome was excellent with respect to symmetry, shape, size, residual scars, and sensitivity of the nipple-areola complex. 
Longer pedicles or larger resections were not found in the literature on vertical mammaplasty applications. In our opinion, by using the vertical mammaplasty technique in gigantomastia it is possible to achieve a well-projecting shape and preserve NAC sensitivity.", "title": "" }, { "docid": "860d39ff0ddd80caaf712e84a82f4d86", "text": "Steganography and steganalysis received a great deal of attention from media and law enforcement. Many powerful and robust methods of steganography and steganalysis have been developed. In this paper we are considering the methods of steganalysis that are to be used for this processes. Paper giving some idea about the steganalysis and its method. Keywords— Include at least 5 keywords or phrases", "title": "" }, { "docid": "8f2cfb5cb55b093f67c1811aba8b87e2", "text": "“You make what you measure” is a familiar mantra at datadriven companies. Accordingly, companies must be careful to choose North Star metrics that create a better product. Metrics fall into two general categories: direct count metrics such as total revenue and monthly active users, and nuanced quality metrics regarding value or other aspects of the user experience. Count metrics, when used exclusively as the North Star, might inform product decisions that harm user experience. Therefore, quality metrics play an important role in product development. We present a five-step framework for developing quality metrics using a combination of machine learning and product intuition. Machine learning ensures that the metric accurately captures user experience. Product intuition makes the metric interpretable and actionable. Through a case study of the Endorsements product at LinkedIn, we illustrate the danger of optimizing exclusively for count metrics, and showcase the successful application of our framework toward developing a quality metric. We show how the new quality metric has driven significant improvements toward creating a valuable, user-first product.", "title": "" }, { "docid": "c695f74a41412606e31c771ec9d2b6d3", "text": "Osteochondrosis dissecans (OCD) is a form of osteochondrosis limited to the articular epiphysis. The most commonly affected areas include, in decreasing order of frequency, the femoral condyles, talar dome and capitellum of the humerus. OCD rarely occurs in the shoulder joint, where it involves either the humeral head or the glenoid. The purpose of this report is to present a case with glenoid cavity osteochondritis dissecans and clinical and radiological outcome after arthroscopic debridement. The patient underwent arthroscopy to remove the loose body and to microfracture the cavity. The patient was followed-up for 4 years and she is pain-free with full range of motion and a stable shoulder joint.", "title": "" }, { "docid": "9b71d11e2096008bc3603c62d89e452e", "text": "Abstract In the present study biodiesel was synthesized from Waste Cook Oil (WCO) by three-step method and regressive analyzes of the process was done. The raw oil, containing 1.9wt% Free Fatty Acid (FFA) and viscosity was 47.6mm/s. WCO was collected from local restaurant of Sylhet city in Bangladesh. Transesterification method gives lower yield than three-step method. In the three-step method, the first step is saponification of the oil followed by acidification to produce FFA and finally esterification of FFA to produce biodiesel. 
In the saponification reaction, various reaction parameters such as oil to sodium hydroxide molar ratio and reaction time were optimized and the oil to NaOH molar ratio was 1:2. In the esterification reaction, the reaction parameters such as methanol to FFA molar ratio, catalyst concentration and reaction temperature were optimized. Silica gel was used during esterification reaction to adsorb water produced in the reaction. Hence the reaction rate was increased and finally the FFA was reduced to 0.52wt%. A factorial design was studied for esterification reaction based on yield of biodiesel. Finally various properties of biodiesel such as FFA, viscosity, specific gravity, cetane index, pour point, flash point etc. were measured and compared with biodiesel and petro-diesel standard. The reaction yield was 79%.", "title": "" }, { "docid": "7cebca46f584b2f31fd9d2c8ef004f17", "text": "Wirelessly networked systems of intra-body sensors and actuators could enable revolutionary applications at the intersection between biomedical science, networking, and control with a strong potential to advance medical treatment of major diseases of our times. Yet, most research to date has focused on communications along the body surface among devices interconnected through traditional electromagnetic radio-frequency (RF) carrier waves; while the underlying root challenge of enabling networked intra-body miniaturized sensors and actuators that communicate through body tissues is substantially unaddressed. The main obstacle to enabling this vision of networked implantable devices is posed by the physical nature of propagation in the human body. The human body is composed primarily (65 percent) of water, a medium through which RF electromagnetic waves do not easily propagate, even at relatively low frequencies. Therefore, in this article we take a different perspective and propose to investigate and study the use of ultrasonic waves to wirelessly internetwork intra-body devices. We discuss the fundamentals of ultrasonic propagation in tissues, and explore important tradeoffs, including the choice of a transmission frequency, transmission power, and transducer size. Then, we discuss future research challenges for ultrasonic networking of intra-body devices at the physical, medium access and network layers of the protocol stack.", "title": "" }, { "docid": "39282672decc76d784bd9f59da14e7d0", "text": "A network with $n$ nodes contains $O(n^2)$ possible links. Even for networks of modest size, it is often difficult to evaluate all pairwise possibilities for links in a meaningful way. Further, even though link prediction is closely related to missing value estimation problems, it is often difficult to use sophisticated models such as latent factor methods because of their computational complexity on large networks. Hence, most known link prediction methods are designed for evaluating the link propensity on a specified subset of links, rather than on the entire networks. In practice, however, it is essential to perform an exhaustive search over the entire networks. 
In this article, we propose an ensemble enabled approach to scaling up link prediction, by decomposing traditional link prediction problems into subproblems of smaller size. These subproblems are each solved with latent factor models, which can be effectively implemented on networks of modest size. By incorporating with the characteristics of link prediction, the ensemble approach further reduces the sizes of subproblems without sacrificing its prediction accuracy. The ensemble enabled approach has several advantages in terms of performance, and our experimental results demonstrate the effectiveness and scalability of our approach.", "title": "" } ]
scidocsrr
58a3993b42f6a2d73e70c6b5ba9ab3eb
Sprengel deformity: pathogenesis and management.
[ { "docid": "546afa65724bedfc854ca1bdba0f8e98", "text": "We report 12 consecutive cases of vertical scapular osteotomy to correct Sprengel's deformity, performed during a 16-year period, with a mean follow-up of 10.4 years. The mean increase in abduction of the shoulder was 53 degrees . The cosmetic appearance improved by a mean of 1.5 levels on the Cavendish scale. Neither function nor cosmesis deteriorated with time. We recommend the procedure for correction of moderate deformities with a functional deficit.", "title": "" } ]
[ { "docid": "c35b5da1da795857baf4ee1ce7dbfac5", "text": "The art of finding software vulnerabilities has been covered extensively in the literature and there is a huge body of work on this topic. In contrast, the intentional insertion of exploitable, security-critical bugs has received little (public) attention yet. Wanting more bugs seems to be counterproductive at first sight, but the comprehensive evaluation of bug-finding techniques suffers from a lack of ground truth and the scarcity of bugs.\n In this paper, we propose EvilCoder, a system to automatically find potentially vulnerable source code locations and modify the source code to be actually vulnerable. More specifically, we leverage automated program analysis techniques to find sensitive sinks which match typical bug patterns (e.g., a sensitive API function with a preceding sanity check), and try to find data-flow connections to user-controlled sources. We then transform the source code such that exploitation becomes possible, for example by removing or modifying input sanitization or other types of security checks. Our tool is designed to randomly pick vulnerable locations and possible modifications, such that it can generate numerous different vulnerabilities on the same software corpus. We evaluated our tool on several open-source projects such as for example libpng and vsftpd, where we found between 22 and 158 unique connected source-sink pairs per project. This translates to hundreds of potentially vulnerable data-flow paths and hundreds of bugs we can insert. We hope to support future bug-finding techniques by supplying freshly generated, bug-ridden test corpora so that such techniques can (finally) be evaluated and compared in a comprehensive and statistically meaningful way.", "title": "" }, { "docid": "cbb8134d38905f9072d5eeec2fa82524", "text": "Semiconductor manufacturing fabs generate huge amount of data. The big data approaches of data management have increased speed, quality and accessibility of the data. This paper discusses harnessing value from this data using predictive analytics methods. Various aspects predictive analytics in the context of semiconductor manufacturing are discussed. The limitations of standard methods of analysis and the need to adopt robust methods of modeling and analysis are highlighted. The robust prediction modeling method is implemented on wafer sensor data resulting in improved prediction ability of wafer quality characteristics.", "title": "" }, { "docid": "b978d6b3750b48fffe55e9fef6366fd1", "text": "Intrusion detection system (IDS) has become an essential layer in all the latest ICT system due to an urge towards cyber safety in the day-to-day world. Reasons including uncertainty in finding the types of attacks and increased the complexity of advanced cyber attacks, IDS calls for the need of integration of Deep Neural Networks (DNNs). In this paper, DNNs have been utilized to predict the attacks on Network Intrusion Detection System (N-IDS). A DNN with 0.1 rate of learning is applied and is run for 1000 number of epochs and KDDCup-‘99’ dataset has been used for training and benchmarking the network. For comparison purposes, the training is done on the same dataset with several other classical machine learning algorithms and DNN of layers ranging from 1 to 5. 
The results were compared, and it was concluded that a DNN with 3 layers has superior performance over all the other classical machine learning algorithms.", "title": "" }, { "docid": "2344fe8f6a552dde0ccea5bd457b5ccd", "text": "BACKGROUND\nPresent in several types of food, bioactive amines are described as organic bases of low molecular weight, which constitute a potential health risk. An awareness of amine levels in foods today is therefore important in relation to food safety and patient care. This review aims to emphasise the need to unify the information on the content of biogenic amines in foods and prevent patients' misunderstanding.\n\n\nMETHODS\nA selective literature search for relevant publications in PubMed and other scientific databases, combined with further data from the World Wide Web on histamine and other amine content in foods.\n\n\nRESULTS\nAvailable reference sources do not reflect a homogeneous consensus, and the variation between foods makes it impossible for dieticians to accurately estimate amine content to correctly advise patients.\n\n\nCONCLUSIONS\nTo achieve the goal of collecting reliable information, all methods and tools used in analytical studies should be standardised and information exposed to patients should be verified.", "title": "" }, { "docid": "bbcd0a157ee615d5a7c45e688c49aa8f", "text": "The study of brain networks by resting-state functional magnetic resonance imaging (rs-fMRI) is a promising method for identifying patients with dementia from healthy controls (HC). Using graph theory, different aspects of the brain network can be efficiently characterized by calculating measures of integration and segregation. In this study, we combined a graph theoretical approach with advanced machine learning methods to study the brain network in 89 patients with mild cognitive impairment (MCI), 34 patients with Alzheimer's disease (AD), and 45 age-matched HC. The rs-fMRI connectivity matrix was constructed using a brain parcellation based on 264 putative functional areas. Using the optimal features extracted from the graph measures, we were able to accurately classify three groups (i.e., HC, MCI, and AD) with an accuracy of 88.4 %. We also investigated the performance of our proposed method for a binary classification of a group (e.g., MCI) from two other groups (e.g., HC and AD). The classification accuracies for identifying HC from AD and MCI, AD from HC and MCI, and MCI from HC and AD, were 87.3, 97.5, and 72.0 %, respectively. In addition, results based on the parcellation of 264 regions were compared to those of the automated anatomical labeling (AAL) atlas, which consists of 90 regions. The accuracy of classification of three groups using AAL was degraded to 83.2 %. Our results show that combining the graph measures with the machine learning approach, on the basis of the rs-fMRI connectivity analysis, may assist in the diagnosis of AD and MCI.", "title": "" }, { "docid": "0470105ef882212930267e85d17b7c57", "text": "Using configuration synthesis and a design map, CPW-fed circular fractal slot antennas are proposed for dual-band applications. In practice, experimental results with broadband and dual-band responses (47.4% and 13.5% bandwidth) and available radiation gains (peak gains of 3.58 and 7.28 dBi) at 0.98 and 1.84 GHz, respectively, are first achieved for the half-wavelength design.
Then, the other broadband and dual-band responses (75.9% and 16.1% bandwidth) and available radiation gains (peak gains of 3.16 and 6.62 dBi) at 2.38 and 5.35 GHz for the quarter-wavelength design are described herein. Contour distribution patterns are applied to illustrate the omni-directional patterns. The correspondence between the design map and the EM characteristics of the antenna is demonstrated by current distributions.", "title": "" }, { "docid": "33880207bb52ce7e20c6f5ad80d67a47", "text": "This research involves the digital transformation of an orthopedic surgical practice office housing three community orthopedic surgeons and a physical therapy treatment clinic in Toronto, Ontario. All three surgeons maintain a private community orthopaedic surgery practice and hold surgical privileges at a local community hospital which serves a catchment area of more than 850,000 people in the northwest Greater Toronto Area. The clinic employs two full-time physical therapists and one office manager for therapy services as well as four administrative assistants who manage the surgeons' practices.", "title": "" }, { "docid": "fb1f3f300bcd48d99f0a553a709fdc89", "text": "This work presents a high step-up voltage gain DC-DC converter for DC microgrid applications. The DC microgrid can be utilized for rural electrification, UPS support, electronic lighting systems and electric vehicles. The whole system consists of a photovoltaic (PV) panel, a high step-up DC-DC converter with Maximum Power Point Tracking (MPPT) and a DC microgrid. The entire system is optimized with both MPPT and converter separately. The MPP can be tracked by an Incremental Conductance (IC) MPPT technique modified with D-Sweep (Duty ratio Sweep). The D-Sweep technique reduces the problem of multiple local maxima. Converter optimization includes a high step-up DC-DC converter which comprises both a coupled inductor and switched capacitors. This increases the gain up to twenty times with high efficiency. Both converter optimization and MPPT optimization increase overall system efficiency. A MATLAB/Simulink model is implemented. Hardware of the system can be implemented by either voltage mode control or current mode control.", "title": "" }, { "docid": "922ce107f9d88b02483fd6b65109d466", "text": "With the growing popularity of electronic documents, replication can occur for many reasons. People may copy text segments from various sources and make modifications. In this paper, we study the problem of local similarity search to find partially replicated text. Unlike existing studies on similarity search which find entirely duplicated documents, our target is to identify documents that approximately share a pair of sliding windows which differ by no more than τ tokens. Our problem is technically challenging because for sliding windows the tokens to be indexed are less selective than entire documents, rendering set similarity join-based algorithms less efficient. Our proposed method is based on enumerating token combinations to obtain signatures with high selectivity. In order to strike a balance between signature and candidate generation, we partition the token universe and for different partitions we generate combinations composed of different numbers of tokens. A cost-aware algorithm is devised to find a good partitioning of the token universe. We also propose to leverage the overlap between adjacent windows to share computation and thus speed up query processing. In addition, we develop techniques to support large thresholds.
Experiments on real datasets demonstrate the efficiency of our method against alternative solutions.", "title": "" }, { "docid": "fd111c4f99c0fe9d8731385f6c7eb04f", "text": "We introduce a greedy transition-based parser that learns to represent parser states using recurrent neural networks. Our primary innovation that enables us to do this efficiently is a new control structure for sequential neural networks—the stack long short-term memory unit (LSTM). Like the conventional stack data structures used in transition-based parsers, elements can be pushed to or popped from the top of the stack in constant time, but, in addition, an LSTM maintains a continuous space embedding of the stack contents. Our model captures three facets of the parser's state: (i) unbounded look-ahead into the buffer of incoming words, (ii) the complete history of transition actions taken by the parser, and (iii) the complete contents of the stack of partially built tree fragments, including their internal structures. In addition, we compare two different word representations: (i) standard word vectors based on look-up tables and (ii) character-based models of words. Although standard word embedding models work well in all languages, the character-based models improve the handling of out-of-vocabulary words, particularly in morphologically rich languages. Finally, we discuss the use of dynamic oracles in training the parser. During training, dynamic oracles alternate between sampling parser states from the training data and from the model as it is being learned, making the model more robust to the kinds of errors that will be made at test time. Training our model with dynamic oracles yields a linear-time greedy parser with very competitive performance.", "title": "" }, { "docid": "99c6fb7c765bf749fd40a78eadf3e723", "text": "This paper presents a new design approach to nonlinear observers for Itô stochastic nonlinear systems with guaranteed stability. A stochastic contraction lemma is presented which is used to analyze incremental stability of the observer. A bound on the mean-squared distance between the trajectories of original dynamics and the observer dynamics is obtained as a function of the contraction rate and maximum noise intensity. The observer design is based on a non-unique state-dependent coefficient (SDC) form, which parametrizes the nonlinearity in an extended linear form. The observer gain synthesis algorithm, called linear matrix inequality state-dependent algebraic Riccati equation (LMI-SDARE), is presented. The LMI-SDARE uses a convex combination of multiple SDC parametrizations. An optimization problem with state-dependent linear matrix inequality (SDLMI) constraints is formulated to select the coefficients of the convex combination for maximizing the convergence rate and robustness against disturbances. Two variations of LMI-SDARE algorithm are also proposed. One of them named convex state-dependent Riccati equation (CSDRE) uses a chosen convex combination of multiple SDC matrices; and the other named Fixed-SDARE uses constant SDC matrices that are pre-computed by using conservative bounds of the system states while using constant coefficients of the convex combination pre-computed by a convex LMI optimization problem. A connection between contraction analysis and L2 gain of the nonlinear system is established in the presence of noise and disturbances. 
Simulation results show the superiority of the LMI-SDARE algorithm over the extended Kalman filter (EKF) and the state-dependent differential Riccati equation (SDDRE) filter.", "title": "" }, { "docid": "bba813ba24b8bc3a71e1afd31cf0454d", "text": "The Betweenness-Centrality measure is often used in social and computer communication networks to estimate the potential monitoring and control capabilities a vertex may have on data flowing in the network. In this article, we define the Routing Betweenness Centrality (RBC) measure that generalizes previously well known Betweenness measures such as the Shortest Path Betweenness, Flow Betweenness, and Traffic Load Centrality by considering network flows created by arbitrary loop-free routing strategies.\n We present algorithms for computing RBC of all the individual vertices in the network and algorithms for computing the RBC of a given group of vertices, where the RBC of a group of vertices represents their potential to collaboratively monitor and control data flows in the network. Two types of collaborations are considered: (i) conjunctive—the group is a sequence of vertices controlling traffic where all members of the sequence process the traffic in the order defined by the sequence and (ii) disjunctive—the group is a set of vertices controlling traffic where at least one member of the set processes the traffic. The algorithms presented in this paper also take into consideration different sampling rates of network monitors, accommodate arbitrary communication patterns between the vertices (traffic matrices), and can be applied to groups consisting of vertices and/or edges.\n For the cases of routing strategies that depend on both the source and the target of the message, we present algorithms with time complexity of O(n²m) where n is the number of vertices in the network and m is the number of edges in the routing tree (or the routing directed acyclic graph (DAG) for the cases of multi-path routing strategies). The time complexity can be reduced by an order of n if we assume that the routing decisions depend solely on the target of the messages.\n Finally, we show that a preprocessing of O(n²m) time supports computations of RBC of sequences in O(kn) time and computations of RBC of sets in O(n3n) time, where k is the number of vertices in the sequence or the set.", "title": "" }, { "docid": "0bc61c7a334d5888aee825f2933d7219", "text": "This paper introduces a novel unsupervised outlier detection method, namely Coupled Biased Random Walks (CBRW), for identifying outliers in categorical data with diversified frequency distributions and many noisy features. Existing pattern-based outlier detection methods are ineffective in handling such complex scenarios, as they misfit such data. CBRW estimates outlier scores of feature values by modelling feature value level couplings, which carry intrinsic data characteristics, via biased random walks to handle this complex data. The outlier scores of feature values can either measure the outlierness of an object or facilitate the existing methods as a feature weighting and selection indicator. Substantial experiments show that CBRW can not only detect outliers in complex data significantly better than the state-of-the-art methods, but also greatly improve the performance of existing methods on data sets with many noisy features.", "title": "" }, { "docid": "263c04402cfe80649b1d3f4a8578e99b", "text": "This paper presents M3Express (Modular-Mobile-Multirobot), a new design for a low-cost modular robot.
The robot is self-mobile, with three independently driven wheels that also serve as connectors. The new connectors can be automatically operated, and are based on stationary magnets coupled to mechanically actuated ferromagnetic yoke pieces. Extensive use is made of plastic castings, laser cut plastic sheets, and low-cost motors and electronic components. Modules interface with a host PC via Bluetooth® radio. An off-board camera, along with a set of modules and a control PC form a convenient, low-cost system for rapidly developing and testing control algorithms for modular reconfigurable robots. Experimental results demonstrate mechanical docking, connector strength, and accuracy of dead reckoning locomotion.", "title": "" }, { "docid": "4a775f8433e3d70593d5a0809b3fa83d", "text": "Moral reasoning traditionally distinguishes two types of evil: moral (ME) and natural (NE). The standard view is that ME is the product of human agency and so includes phenomena such as war, torture and psychological cruelty; that NE is the product of nonhuman agency, and so includes natural disasters such as earthquakes, floods, disease and famine; and finally, that more complex cases are appropriately analysed as a combination of ME and NE. Recently, as a result of developments in autonomous agents in cyberspace, a new class of interesting and important examples of hybrid evil has come to light. In this paper, it is called artificial evil (AE) and a case is made for considering it to complement ME and NE to produce a more adequate taxonomy. By isolating the features that have led to the appearance of AE, cyberspace is characterised as a self-contained environment that forms the essential component in any foundation of the emerging field of Computer Ethics (CE). It is argued that this goes some way towards providing a methodological explanation of why cyberspace is central to so many of CE's concerns; and it is shown how notions of good and evil can be formulated in cyberspace. Of considerable interest is how the propensity for an agent's action to be morally good or evil can be determined even in the absence of biologically sentient participants and thus allows artificial agents not only to perpetrate evil (and for that matter good) but conversely to 'receive' or 'suffer from' it. The thesis defended is that the notion of entropy structure, which encapsulates human value judgement concerning cyberspace in a formal mathematical definition, is sufficient to achieve this purpose and, moreover, that the concept of AE can be determined formally, by mathematical methods. A consequence of this approach is that the debate on whether CE should be considered unique, and hence developed as a Macroethics, may be viewed, constructively, in an alternative manner. The case is made that whilst CE issues are not uncontroversially unique, they are sufficiently novel to render inadequate the approach of standard Macroethics such as Utilitarianism and Deontologism and hence to prompt the search for a robust ethical theory that can deal with them successfully. The name Information Ethics (IE) is proposed for that theory. It is argued that the uniqueness of IE is justified by its being non-biologically biased and patient-oriented: IE is an Environmental Macroethics based on the concept of data entity rather than life. It follows that the novelty of CE issues such as AE can be appreciated properly because IE provides a new perspective (though not vice versa).
In light of the discussion provided in this paper, it is concluded that Computer Ethics is worthy of independent study because it requires its own application-specific knowledge and is capable of supporting a methodological foundation, Information Ethics.", "title": "" }, { "docid": "c474df285da8106b211dc7fe62733423", "text": "In this paper, we propose an effective method to recognize human actions using 3D skeleton joints recovered from 3D depth data of RGBD cameras. We design a new action feature descriptor for action recognition based on differences of skeleton joints, i.e., EigenJoints which combine action information including static posture, motion property, and overall dynamics. Accumulated Motion Energy (AME) is then proposed to perform informative frame selection, which is able to remove noisy frames and reduce computational cost. We employ non-parametric Naïve-Bayes-Nearest-Neighbor (NBNN) to classify multiple actions. The experimental results on several challenging datasets demonstrate that our approach outperforms the state-of-the-art methods. In addition, we investigate how many frames are necessary for our method to perform classification in the scenario of online action recognition. We observe that the first 30% to 40% frames are sufficient to achieve comparable results to that using the entire video sequences on the MSR Action3D dataset.", "title": "" }, { "docid": "e8f1632349bcf571c04395225b90fb91", "text": "This paper studies the effect of school finance reforms on the distribution of school spending across richer and poorer districts, and the consequences of spending equalization for the relative test performance of students from different family backgrounds. We find that states where the school finance system was declared unconstitutional in the 1980s increased the relative funding of low-income districts. Increases in the amount of state aid available to poorer districts led to increases in the spending of these districts, narrowing the spending gap between richer and poorer districts. Using micro samples of SAT scores from this same period, we then test whether changes in spending inequality affect the gap in achievement between different family background groups. We find evidence that equalization of spending leads to a narrowing of test score outcomes across family background groups. © 2002 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "e48941f23ee19ec4b26c4de409a84fe2", "text": "Object recognition is challenging especially when the objects from different categories are visually similar to each other. In this paper, we present a novel joint dictionary learning (JDL) algorithm to exploit the visual correlation within a group of visually similar object categories for dictionary learning where a commonly shared dictionary and multiple category-specific dictionaries are accordingly modeled. To enhance the discrimination of the dictionaries, the dictionary learning problem is formulated as a joint optimization by adding a discriminative term on the principle of the Fisher discrimination criterion. As well as presenting the JDL model, a classification scheme is developed to better take advantage of the multiple dictionaries that have been trained. The effectiveness of the proposed algorithm has been evaluated on popular visual benchmarks.", "title": "" }, { "docid": "9bdd2cf41bf5b967ef443855b1b49e0e", "text": "We propose a Label Propagation based algorithm for weakly supervised text classification.
We construct a graph where each document is represented by a node and edge weights represent similarities among the documents. Additionally, we discover underlying topics using Latent Dirichlet Allocation (LDA) and enrich the document graph by including the topics in the form of additional nodes. The edge weights between a topic and a text document represent the level of “affinity” between them. Our approach does not require document-level labelling; instead, it expects manual labels only for topic nodes. This significantly minimizes the level of supervision needed, as only a few topics are observed to be enough for achieving sufficiently high accuracy. The Label Propagation Algorithm is employed on this enriched graph to propagate labels among the nodes. Our approach combines the advantages of Label Propagation (through document-document similarities) and Topic Modelling (for minimal but smart supervision). We demonstrate the effectiveness of our approach on various datasets and compare it with state-of-the-art weakly supervised text classification approaches.", "title": "" }, { "docid": "53dcdeb8e8368864fb795395dd151fd2", "text": "Superposition coding is a well-known capacity-achieving coding scheme for stochastically degraded broadcast channels. Although well-studied in theory, it is important to understand issues that arise when implementing this scheme in a practical setting. In this paper, we present a software-radio based design of a superposition coding system on the GNU Radio platform with the Universal Software Radio Peripheral acting as the transceiver frontend. We also study the packet error performance and discuss some issues that arise in its implementation.", "title": "" } ]
scidocsrr
7dc783aa4be73fe89f95e55ed4d8e0fd
Looking through the Glass Ceiling: A Qualitative Study of STEM Women’s Career Narratives
[ { "docid": "3cf902a91dff37e27690afcf388e9812", "text": "A role congruity theory of prejudice toward female leaders proposes that perceived incongruity between the female gender role and leadership roles leads to 2 forms of prejudice: (a) perceiving women less favorably than men as potential occupants of leadership roles and (b) evaluating behavior that fulfills the prescriptions of a leader role less favorably when it is enacted by a woman. One consequence is that attitudes are less positive toward female than male leaders and potential leaders. Other consequences are that it is more difficult for women to become leaders and to achieve success in leadership roles. Evidence from varied research paradigms substantiates that these consequences occur, especially in situations that heighten perceptions of incongruity between the female gender role and leadership roles.", "title": "" } ]
[ { "docid": "6f5ee673c82d43a984e0217b5044d2dd", "text": "Twitter currently receives about 190 million tweets (small text-based Web posts) a day, in which people share their comments regarding a wide range of topics. A large number of tweets include opinions about products and services. However, with Twitter being a relatively new phenomenon, these tweets are underutilized as a source for evaluating customer sentiment. To explore high-volume twitter data, we introduce three novel time-based visual sentiment analysis techniques: (1) topic-based sentiment analysis that extracts, maps, and measures customer opinions; (2) stream analysis that identifies interesting tweets based on their density, negativity, and influence characteristics; and (3) pixel cell-based sentiment calendars and high density geo maps that visualize large volumes of data in a single view. We applied these techniques to a variety of twitter data, (e.g., movies, amusement parks, and hotels) to show their distribution and patterns, and to identify influential opinions.", "title": "" }, { "docid": "7359e66b615e2ffa16cbe68a3e46e3a1", "text": "This study examined the phenomenon of post-stroke depression and evaluated its impact on rehabilitation outcome. Sixty-four patients presenting to a rehabilitation program within weeks of first stroke were evaluated for depression through self-report measures and staff ratings. Patients also rated the particular coping strategies which they used in dealing with their illness and hospital stay. Physical and occupational therapists provided measures of functional impairment at admission and discharge. A high (47%) prevalence of depression was found in this population, with no overall differences observed between patients with right or left hemisphere lesions. Depressed patients, in comparison to non-depressed, evidenced greater functional impairment at both admission and discharge. However, both groups showed similar gains over the course of rehabilitation. Coping strategies employed by depressed patients appeared to reflect a lower level of participation in the rehabilitation process. A subgroup of patients evaluated 6 weeks after discharge revealed that depression was associated with a worsening on one measure of functional status. These findings indicate that depression is a frequent companion of stroke, that it is associated with degree of functional impairment, and that it may exert a negative impact on the rehabilitation process and outcome.", "title": "" }, { "docid": "631b473342cc30360626eaea0734f1d8", "text": "Argument extraction is the task of identifying arguments, along with their components in text. Arguments can be usually decomposed into a claim and one or more premises justifying it. The proposed approach tries to identify segments that represent argument elements (claims and premises) on social Web texts (mainly news and blogs) in the Greek language, for a small set of thematic domains, including articles on politics, economics, culture, various social issues, and sports. The proposed approach exploits distributed representations of words, extracted from a large non-annotated corpus. Among the novel aspects of this work is the thematic domain itself which relates to social Web, in contrast to traditional research in the area, which concentrates mainly on law documents and scientific publications. The huge increase of social web communities, along with their user tendency to debate, makes the identification of arguments in these texts a necessity. 
In addition, a new manually annotated corpus has been constructed that can be used freely for research purposes. Evaluation results are quite promising, suggesting that distributed representations can contribute positively to the task of argument extraction.", "title": "" }, { "docid": "863e71cf1c1eddf3c6ceac400670e6f7", "text": "This paper provides a brief overview of four major types of causal models for health-sciences research: Graphical models (causal diagrams), potential-outcome (counterfactual) models, sufficient-component cause models, and structural-equations models. The paper focuses on the logical connections among the different types of models and on the different strengths of each approach. Graphical models can illustrate qualitative population assumptions and sources of bias not easily seen with other approaches; sufficient-component cause models can illustrate specific hypotheses about mechanisms of action; and potential-outcome and structural-equations models provide a basis for quantitative analysis of effects. The different approaches provide complementary perspectives, and can be employed together to improve causal interpretations of conventional statistical results.", "title": "" }, { "docid": "553f1aa4efeeb8a1f1700285f59aec08", "text": "Code-mixing is frequently observed in user generated content on social media, especially from multilingual users. The linguistic complexity of such content is compounded by the presence of spelling variations, transliteration and non-adherence to formal grammar. We describe our initial efforts to create a multi-level annotated corpus of Hindi-English code-mixed text collated from Facebook forums, and explore language identification, back-transliteration, normalization and POS tagging of this data. Our results show that language identification and transliteration for Hindi are two major challenges that impact POS tagging accuracy.", "title": "" }, { "docid": "4a536c1186a1d1d1717ec1e0186b262c", "text": "In this paper, I outline a perspective on organizational transformation which proposes change as endemic to the practice of organizing and hence as enacted through the situated practices of organizational actors as they improvise, innovate, and adjust their work routines over time. I ground this perspective in an empirical study which examined the use of a new information technology within one organization over a two year period. In this organization, a series of subtle but nonetheless significant changes were enacted over time as organizational actors appropriated the new technology into their work practices, and then experimented with local innovations, responded to unanticipated breakdowns and contingencies, initiated opportunistic shifts in structure and coordination mechanisms, and improvised various procedural, cognitive, and normative variations to accommodate their evolving use of the technology. These findings provide the empirical basis for a practice-based perspective on organizational transformation. Because it is grounded in the micro-level changes that actors enact over time as they make sense of and act in the world, a practice lens can avoid the strong assumptions of rationality, determinism, or discontinuity characterizing existing change perspectives.
A situated change perspective may offer a particularly useful strategy for analyzing change in organizations turning increasingly away from patterns of stability, bureaucracy, and control to those of flexibility, selforganizing, and learning.", "title": "" }, { "docid": "feda50d2876074ce37276d6df7d2823f", "text": "Word embedding models have become a fundamental component in a wide range of Natural Language Processing (NLP) applications. However, embeddings trained on human-generated corpora have been demonstrated to inherit strong gender stereotypes that reflect social constructs. To address this concern, in this paper, we propose a novel training procedure for learning gender-neutral word embeddings. Our approach aims to preserve gender information in certain dimensions of word vectors while compelling other dimensions to be free of gender influence. Based on the proposed method, we generate a GenderNeutral variant of GloVe (GN-GloVe). Quantitative and qualitative experiments demonstrate that GN-GloVe successfully isolates gender information without sacrificing the functionality of the embedding model.", "title": "" }, { "docid": "0b437c0fc573c2f9d368cf501678b0a8", "text": "Sexual selection is the mechanism that favors an increase in the frequency of alleles associated with reproduction (Darwin, 1871). Darwin distinguished sexual selection from natural selection, but today most evolutionary scientists combine the two concepts under the name, natural selection. Sexual selection is composed of intrasexual competition (competition between members of the same sex for sexual access to members of the opposite sex) and intersexual selection (differential mate choice of members of the opposite sex). Focusing mainly on precopulatory adaptations associated with intrasexual competition and intersexual selection, postcopulatory sexual selection was largely ignored even a century after the presentation of sexual selection theory. Parker (1970) was the first to recognize that male–male competition may continue even after the initiation of copulation when males compete for fertilizations. More recently, Thornhill (1983) and others (e.g. Eberhard, 1996) recognized that intersexual selection may also continue after the initiation of copulation when a female biases paternity between two or more males’ sperm. The competition between males for fertilization of a single female’s ova is known as sperm competition (Parker, 1970), and the selection of sperm from two or more males by a single female is known as cryptic female choice (Eberhard, 1996; Thornhill, 1983). Although sperm competition and cryptic female choice together compose postcopulatory sexual selection (see Table 6.1), sperm competition is often used in reference to both processes (e.g. Baker & Bellis, 1995; Birkhead & Møller, 1998; Simmons, 2001; Shackelford, Pound, & Goetz, 2005). In this chapter, we review the current state of knowledge regarding human sperm competition (and see Shackelford et al., 2005).", "title": "" }, { "docid": "2f8430ae99d274bb1a08b031dfd1c11b", "text": "BACKGROUND\nCleft-lip nasal deformity (CLND) affects the overall facial appearance and attractiveness. 
The CLND nose shares some features in part with the aging nose.\n\n\nOBJECTIVES\nThis questionnaire survey examined: 1) the panel perceptions of the role of secondary cleft rhinoplasty in nasal rejuvenation; and 2) the influence of a medical background in cleft care, age and gender of the panel members on the estimated age of the CLND nose.\n\n\nSTUDY DESIGN\nUsing a cross-sectional study design, we enrolled a random sample of adult laypersons and health care providers. The predictor variables were secondary cleft rhinoplasty (before/after) and a medical background in cleft care (yes/no). The outcome variable was the estimated age of nose in photographs derived from 8 German nonsyndromic CLND patients. Other study variables included age, gender, and career of the assessors. Appropriate descriptive and univariate statistics were computed, and a P value of <.05 was considered to be statistically significant.\n\n\nRESULTS\nThe sample consisted of 507 lay volunteers and 51 medical experts (407 [72.9%] were female; mean age ± SD = 24.9 ± 8.2 y). The estimated age of the CLND noses was higher than their real age. The rhinoplasty decreased the estimated age to a statistically significant degree (P < .0001). A medical background, age, and gender of the participants were not individually associated with their votes (P > .05).\n\n\nCONCLUSIONS\nThe results of this study suggest that CLND noses lack youthful appearance. Secondary cleft rhinoplasty rejuvenates the nose and makes it come close to the actual age of the patients.", "title": "" }, { "docid": "eeed4c3f13f50f269bcfd51d2157f5a6", "text": "DRAM energy is an important component to optimize in modern computing systems. One outstanding source of DRAM energy is the energy to fetch data stored on cells to the row buffer, which occurs during two DRAM operations, row activate and refresh. This work exploits previously proposed half page row access, modifying the wordline connections within a bank to halve the number of cells fetched to the row buffer, to save energy in both cases. To accomplish this, we first change the data wire connections in the sub-array to reduce the cost of row buffer overfetch in multi-core systems which yields a 12% energy savings on average and a slight performance improvement in quad-core systems. We also propose charge recycling refresh, which reuses charge left over from a prior half page refresh to refresh another half page. Our charge recycling scheme is capable of reducing both auto- and self-refresh energy, saving more than 15% of refresh energy at 85°C, and provides even shorter refresh cycle time. Finally, we propose a refresh scheduling scheme that can dynamically adjust the number of charge recycled half pages, which can save up to 30% of refresh energy at 85°C.", "title": "" }, { "docid": "d107bb7ee16b24206f468aee2d0a47e4", "text": "This paper presents a novel gradient correlation similarity (Gcs) measure-based decolorization model for faithfully preserving the appearance of the original color image. Contrary to the conventional data-fidelity term consisting of gradient error-norm-based measures, the newly defined Gcs measure calculates the summation of the gradient correlation between each channel of the color image and the transformed grayscale image. Two efficient algorithms are developed to solve the proposed model. 
On one hand, due to the highly nonlinear nature of the Gcs measure, a solver consisting of the augmented Lagrangian and alternating direction method is adopted to deal with its approximated linear parametric model. The presented algorithm exhibits excellent iterative convergence and attains superior performance. On the other hand, a discrete searching solver is proposed by determining the solution with the minimum function value from the linear parametric model-induced candidate images. The non-iterative solver has advantages in simplicity and speed with only several simple arithmetic operations, leading to real-time computational speed. In addition, it is very robust with respect to the parameter and candidates. Extensive experiments under a variety of test images and a comprehensive evaluation against existing state-of-the-art methods consistently demonstrate the potential of the proposed model and algorithms.", "title": "" }, { "docid": "8d743f8c333c392038e84d44e79dae2a", "text": "For conventional wireless networks, the main target of resource allocation (RA) is to efficiently utilize the available resources. Generally, there are no changes in the available spectrum, and thus static spectrum allocation policies were adopted. However, these allocation policies lead to spectrum under-utilization. In this regard, cognitive radio networks (CRNs) have received great attention due to their potential to improve the spectrum utilization. In general, efficient spectrum management and resource allocation are essential and very crucial for CRNs. This is due to the fact that unlicensed users should attain the most benefit from accessing the licensed spectrum without causing adverse interference to the licensed ones. The cognitive users, also called secondary users, have to effectively capture the arising spectrum opportunities in time, frequency, and space to transmit their data. Mainly, two aspects characterize the resource allocation for CRNs: 1) primary (licensed) network protection and 2) secondary (unlicensed) network performance enhancement in terms of quality-of-service, throughput, fairness, energy efficiency, etc. CRNs can operate in one of three known operation modes: 1) interweave; 2) overlay; and 3) underlay. Among these, the underlay cognitive radio mode is known to be highly efficient in terms of spectrum utilization. This is because the unlicensed users are allowed to share the same channels with the active licensed users under some conditions. In this paper, we provide a survey of resource allocation in underlay CRNs. In particular, we first define the RA process and its components for underlay CRNs. Second, we provide a taxonomy that categorizes the RA algorithms proposed in literature based on the approaches, criteria, common techniques, and network architecture. Then, the state-of-the-art resource allocation algorithms are reviewed according to the provided taxonomy. Additionally, comparisons among different proposals are provided. Finally, directions for future research are outlined.", "title": "" }, { "docid": "c4e80fd8e2c5b1795c016c9542f8f33e", "text": "Duckweeds, plants of the Lemnaceae family, have the distinction of being the smallest angiosperms in the world with the fastest doubling time. Together with their natural ability to thrive on abundant anthropogenic wastewater, these plants hold tremendous potential to help solve critical water, climate and fuel issues facing our planet this century.
With the conviction that rapid deployment and optimization of the duckweed platform for biomass production will depend on close integration between basic and applied research of these aquatic plants, the first International Conference on Duckweed Research and Applications (ICDRA) was organized and took place in Chengdu, China, from October 7th to 10th of 2011. Co-organized with Rutgers University of New Jersey (USA), this Conference attracted participants from Germany, Denmark, Japan, Australia, in addition to those from the US and China. The following are concise summaries of the various oral presentations and final discussions over the 2.5 day conference that serve to highlight current research interests and applied research that are paving the way for the imminent deployment of this novel aquatic crop. We believe the sharing of this information with the broad Plant Biology community is an important step toward the renaissance of this excellent plant model that will have important impact on our quest for sustainable development of the world.", "title": "" }, { "docid": "68b2e5a2f82435c2a007c806e060e301", "text": "Self-forming barriers and advanced liner materials are studied extensively for their Cu gapfill performance and interconnect scaling. In this paper, 22nm1/2 pitch Cu low-k interconnects with barrier (Mn-based, TaN) /liner (Co, Ru) combinations are compared and benchmarked for their resistivity, resistance scaling, and electromigration (EM) performance. Extendibility to 16nm copper width was explored experimentally and a projection towards 12nm width is performed. It is found that the Ru-liner based systems show a higher overall Cu-resistivity. We show that this increase can be compensated by combining Ru with a thinner Mn-based barrier, which increases the effective Cu-area at a particular trench width. The EM performance reveals that the Ru-liner systems have a better EM lifetime compared to the Co-liner based systems. More interestingly, in a comparison of the maximum current density Jmax a significant improvement is found for the scaled Mn-based/Ru system, making it therefore a serious candidate to extend the Cu metallization.", "title": "" }, { "docid": "1c3b044d572509e14b11d2ec7cb6a566", "text": "Animal models point towards a key role of brain-derived neurotrophic factor (BDNF), insulin-like growth factor-I (IGF-I) and vascular endothelial growth factor (VEGF) in mediating exercise-induced structural and functional changes in the hippocampus. Recently, also platelet derived growth factor-C (PDGF-C) has been shown to promote blood vessel growth and neuronal survival. Moreover, reductions of these neurotrophic and angiogenic factors in old age have been related to hippocampal atrophy, decreased vascularization and cognitive decline. In a 3-month aerobic exercise study, forty healthy older humans (60 to 77years) were pseudo-randomly assigned to either an aerobic exercise group (indoor treadmill, n=21) or to a control group (indoor progressive-muscle relaxation/stretching, n=19). As reported recently, we found evidence for fitness-related perfusion changes of the aged human hippocampus that were closely linked to changes in episodic memory function. Here, we test whether peripheral levels of BDNF, IGF-I, VEGF or PDGF-C are related to changes in hippocampal blood flow, volume and memory performance. Growth factor levels were not significantly affected by exercise, and their changes were not related to changes in fitness or perfusion. 
However, changes in IGF-I levels were positively correlated with hippocampal volume changes (derived by manual volumetry and voxel-based morphometry) and late verbal recall performance, a relationship that seemed to be independent of fitness, perfusion or their changes over time. These preliminary findings link IGF-I levels to hippocampal volume changes and putatively hippocampus-dependent memory changes that seem to occur over time independently of exercise. We discuss methodological shortcomings of our study and potential differences in the temporal dynamics of how IGF-1, VEGF and BDNF may be affected by exercise and to what extent these differences may have led to the negative findings reported here.", "title": "" }, { "docid": "b06f1e94f0ba22828044030c3a1fe691", "text": "BACKGROUND\nThe use of opioids for chronic non-cancer pain has increased in the United States since state laws were relaxed in the late 1990s. These policy changes occurred despite scanty scientific evidence that chronic use of opioids was safe and effective.\n\n\nMETHODS\nWe examined opiate prescriptions and dosing patterns (from computerized databases, 1996 to 2002), and accidental poisoning deaths attributable to opioid use (from death certificates, 1995 to 2002), in the Washington State workers' compensation system.\n\n\nRESULTS\nOpioid prescriptions increased only modestly between 1996 and 2002. However, prescriptions for the most potent opioids (Schedule II), as a percentage of all scheduled opioid prescriptions (II, III, and IV), increased from 19.3% in 1996 to 37.2% in 2002. Among long-acting opioids, the average daily morphine equivalent dose increased by 50%, to 132 mg/day. Thirty-two deaths were definitely or probably related to accidental overdose of opioids. The majority of deaths involved men (84%) and smokers (69%).\n\n\nCONCLUSIONS\nThe reasons for escalating doses of the most potent opioids are unknown, but it is possible that tolerance or opioid-induced abnormal pain sensitivity may be occurring in some workers who use opioids for chronic pain. Opioid-related deaths in this population may be preventable through use of prudent guidelines regarding opioid use for chronic pain.", "title": "" }, { "docid": "9e8a1a70af4e52de46d773cec02f99a7", "text": "In this paper, we build a corpus of tweets from Twitter annotated with keywords using crowdsourcing methods. We identify key differences between this domain and the work performed on other domains, such as news, which makes existing approaches for automatic keyword extraction not generalize well on Twitter datasets. These datasets include the small amount of content in each tweet, the frequent usage of lexical variants and the high variance of the cardinality of keywords present in each tweet. We propose methods for addressing these issues, which leads to solid improvements on this dataset for this task.", "title": "" }, { "docid": "a30de4a213fe05c606fb16d204b9b170", "text": "– The recent work on cross-country regressions can be compared to looking at “a black cat in a dark room”. Whether or not all this work has accomplished anything on the substantive economic issues is a moot question. But the search for “a black cat ” has led to some progress on the econometric front. The purpose of this paper is to comment on this progress. We discuss the problems with the use of cross-country panel data in the context of two problems: The analysis of economic growth and that of the purchasing power parity (PPP) theory. 
On the use of panel methods with cross-country data. ABSTRACT. – Recent work using cross-country regressions can be compared to the search for “a black cat in a dark room”. Whether this work has contributed anything significant to economic knowledge is quite controversial. But the search for the “black cat” has led to some progress in econometrics. The purpose of this article is to discuss that progress. The problems raised by the use of country panels are discussed in two contexts: that of economic growth and convergence on the one hand, and that of purchasing power parity theory on the other.", "title": "" }, { "docid": "e0021ce3472bd0cc87bbddef0dc24a07", "text": "A complex signal demodulation technique is proposed to eliminate the null detection point problem in non-contact vital sign detection. This technique is robust against DC offset in a direct conversion system. Based on the complex signal demodulation, a random body movement cancellation technique is developed to cancel out strong noise caused by random body movement in non-contact vital sign monitoring. Multiple transceivers and antennas with polarization and frequency multiplexing are used to detect signals from different body orientations. The noise due to random body movement is cancelled out based on different patterns of the desired and undesired signals. Experiments by highly compact 5–6 GHz portable radar systems have been performed to verify these two techniques.", "title": "" }, { "docid": "032b91a8aeb2fcbde191d22c414b2129", "text": "In many personalized recommendation problems, the available data consists only of positive interactions (implicit feedback) between users and items. This problem is also known as One-Class Collaborative Filtering (OC-CF). Linear models usually achieve state-of-the-art performances on OC-CF problems and many efforts have been devoted to building more expressive and complex representations able to improve the recommendations. Recent analyses show that collaborative filtering (CF) datasets have peculiar characteristics such as high sparsity and a long tailed distribution of the ratings. In this paper we propose a boolean kernel, called the Disjunctive kernel, which is less expressive than the linear one but is able to alleviate the sparsity issue in CF contexts. The embedding of this kernel is composed of all the combinations of a certain arity d of the input variables, and these combined features are semantically interpreted as disjunctions of the input variables. Experiments on several CF datasets show the effectiveness and the efficiency of the proposed kernel.", "title": "" } ]
scidocsrr
3c3ffbccc82e6197e4175c0534608233
on Fault-Tolerant Control for Unmanned Aerial Vehicles (UAVs)
[ { "docid": "c999bd0903b53285c053c76f9fcc668f", "text": "In this paper, a bibliographical review on reconfigurable (active) fault-tolerant control systems (FTCS) is presented. The existing approaches to fault detection and diagnosis (FDD) and fault-tolerant control (FTC) in a general framework of active fault-tolerant control systems (AFTCS) are considered and classified according to different criteria such as design methodologies and applications. A comparison of different approaches is briefly carried out. Focuses in the field on the current research are also addressed with emphasis on the practical application of the techniques. In total, 376 references in the open literature, dating back to 1971, are compiled to provide an overall picture of historical, current, and future developments in this area. # 2008 Elsevier Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "d349cf385434027b4532080819d5745f", "text": "Although not commonly used, correlation filters can track complex objects through rotations, occlusions and other distractions at over 20 times the rate of current state-of-the-art techniques. The oldest and simplest correlation filters use simple templates and generally fail when applied to tracking. More modern approaches such as ASEF and UMACE perform better, but their training needs are poorly suited to tracking. Visual tracking requires robust filters to be trained from a single frame and dynamically adapted as the appearance of the target object changes. This paper presents a new type of correlation filter, a Minimum Output Sum of Squared Error (MOSSE) filter, which produces stable correlation filters when initialized using a single frame. A tracker based upon MOSSE filters is robust to variations in lighting, scale, pose, and nonrigid deformations while operating at 669 frames per second. Occlusion is detected based upon the peak-to-sidelobe ratio, which enables the tracker to pause and resume where it left off when the object reappears.", "title": "" }, { "docid": "2aa492360133f8020abc3d02ec328a4a", "text": "This paper conducts a performance analysis of two popular private blockchain platforms, Hyperledger Fabric and Ethereum (private deployment), to assess the performance and limitations of these state-of-the-art platforms. Blockchain, a decentralized transaction and data management technology, is said to be the technology that will have similar impacts as the Internet had on people's lives. Many industries have become interested in adopting blockchain in their IT systems, but scalability is an often- cited concern of current blockchain technology. Therefore, the goals of this preliminary performance analysis are twofold. First, a methodology for evaluating a blockchain platform is developed. Second, the analysis results are presented to inform practitioners in making decisions regarding adoption of blockchain technology in their IT systems. The experimental results, based on varying number of transactions, show that Hyperledger Fabric consistently outperforms Ethereum across all evaluation metrics which are execution time, latency and throughput. Additionally, both platforms are still not competitive with current database systems in term of performances in high workload scenarios.", "title": "" }, { "docid": "65ddfd636299f556117e53b5deb7c7e5", "text": "BACKGROUND\nMobile phone use is near ubiquitous in teenagers. Paralleling the rise in mobile phone use is an equally rapid decline in the amount of time teenagers are spending asleep at night. Prior research indicates that there might be a relationship between daytime sleepiness and nocturnal mobile phone use in teenagers in a variety of countries. As such, the aim of this study was to see if there was an association between mobile phone use, especially at night, and sleepiness in a group of U.S. teenagers.\n\n\nMETHODS\nA questionnaire containing an Epworth Sleepiness Scale (ESS) modified for use in teens and questions about qualitative and quantitative use of the mobile phone was completed by students attending Mountain View High School in Mountain View, California (n = 211).\n\n\nRESULTS\nMultivariate regression analysis indicated that ESS score was significantly associated with being female, feeling a need to be accessible by mobile phone all of the time, and a past attempt to reduce mobile phone use. 
The number of daily texts or phone calls was not directly associated with ESS. Those individuals who felt they needed to be accessible and those who had attempted to reduce mobile phone use were also ones who stayed up later to use the mobile phone and were awakened more often at night by the mobile phone.\n\n\nCONCLUSIONS\nThe relationship between daytime sleepiness and mobile phone use was not directly related to the volume of texting but may be related to the temporal pattern of mobile phone use.", "title": "" }, { "docid": "6b03b9e8fdc1b5d9f01b3a9426e0ab3a", "text": "We consider the problem of weakly supervised object localization. For an object of interest (e.g. “car”), an image is weakly labeled when its label only indicates the presence/absence of this object, but not the exact location of the object in the image. Given a collection of weakly labeled images for an object, our goal is to localize the object of interest in each image. We propose a novel architecture called the regularized attention network for this problem. Our work builds upon the attention network proposed in [1]. We extend the standard attention network by incorporating a regularization term that encourages the attention scores of object proposals to mimic the scoring distribution of a strong fully supervised object detector. Despite of the simplicity of our approach, our proposed architecture achieves the state-of-the-art results on several benchmark datasets.", "title": "" }, { "docid": "e97f494b2eed2b14e2d4c0fd80e38170", "text": "We present a stochastic gradient descent optimisation method for image registration with adaptive step size prediction. The method is based on the theoretical work by Plakhov and Cruz (J. Math. Sci. 120(1):964–973, 2004). Our main methodological contribution is the derivation of an image-driven mechanism to select proper values for the most important free parameters of the method. The selection mechanism employs general characteristics of the cost functions that commonly occur in intensity-based image registration. Also, the theoretical convergence conditions of the optimisation method are taken into account. The proposed adaptive stochastic gradient descent (ASGD) method is compared to a standard, non-adaptive Robbins-Monro (RM) algorithm. Both ASGD and RM employ a stochastic subsampling technique to accelerate the optimisation process. Registration experiments were performed on 3D CT and MR data of the head, lungs, and prostate, using various similarity measures and transformation models. The results indicate that ASGD is robust to these variations in the registration framework and is less sensitive to the settings of the user-defined parameters than RM. The main disadvantage of RM is the need for a predetermined step size function. The ASGD method provides a solution for that issue.", "title": "" }, { "docid": "42bb7c29f66d10d3529fa87c275069de", "text": "Smart grids (SG) energy management system and electric vehicle (EV) have gained considerable reputation in recent years. This has been enabled by the high growth of EVs on roads; however, this may lead to a significant impact on the power grids. In order to keep EVs far from causing peaks in power demand and to manage building energy during the day, it is important to perform an intelligent scheduling for EVs charging and discharging service and buildings areas by including different metrics, such as real-time price and demand–supply curve. 
In this paper, we propose a real-time dynamic pricing model for EVs charging and discharging service and building energy management, in order to reduce the peak loads. Our proposed approach uses a decentralized cloud computing architecture based on software define networking (SDN) technology and network function virtualization (NFV). We aim to schedule user's requests in a real-time way and to supervise communications between microgrids controllers, SG and user entities (i.e., EVs, electric vehicles public supply stations, advance metering infrastructure, smart meters, etc.). We formulate the problem as a linear optimization problem for EV and a global optimization problem for all microgrids. We solve the problems by using different decentralized decision algorithms. To the best of our knowledge, this is the first paper that proposes a pricing model based on decentralized Cloud-SDN architecture in order to solve all the aforementioned issues. The extensive simulations and comparisons with related works proved that our proposed pricing model optimizes the energy load during peak hours, maximizes EVs utility, and maintains the microgrid stability. The simulation is based on real electric load of the city of Toronto.", "title": "" }, { "docid": "962858b6cbb3ae5c95d0018075fd0060", "text": "By 2010, the worldwide annual production of plastics will surpass 300 million tons. Plastics are indispensable materials in modern society, and many products manufactured from plastics are a boon to public health (e.g., disposable syringes, intravenous bags). However, plastics also pose health risks. Of principal concern are endocrine-disrupting properties, as triggered for example by bisphenol A and di-(2-ethylhexyl) phthalate (DEHP). Opinions on the safety of plastics vary widely, and despite more than five decades of research, scientific consensus on product safety is still elusive. This literature review summarizes information from more than 120 peer-reviewed publications on health effects of plastics and plasticizers in lab animals and humans. It examines problematic exposures of susceptible populations and also briefly summarizes adverse environmental impacts from plastic pollution. Ongoing efforts to steer human society toward resource conservation and sustainable consumption are discussed, including the concept of the 5 Rs--i.e., reduce, reuse, recycle, rethink, restrain--for minimizing pre- and postnatal exposures to potentially harmful components of plastics.", "title": "" }, { "docid": "00c5760f14752e8f455a3c48704b0f9c", "text": "Secure and efficient lightweight user authentication protocol for mobile cloud computing becomes a paramount concern due to the data sharing using Internet among the end users and mobile devices. Mutual authentication of a mobile user and cloud service provider is necessary for accessing of any cloud services. However, resource constraint nature of mobile devices makes this task more challenging. In this paper, we propose a new secure and lightweight mobile user authentication scheme for mobile cloud computing, based on cryptographic hash, bitwise XOR, and fuzzy extractor functions. Through informal security analysis and rigorous formal security analysis using random oracle model, it has been demonstrated that the proposed scheme is secure against possible well-known passive and active attacks and also provides user anonymity. Moreover, we provide formal security verification through ProVerif 1.93 simulation for the proposed scheme. 
Also, we have done authentication proof of our proposed scheme using the Burrows-Abadi-Needham logic. Since the proposed scheme does not exploit any resource constrained cryptosystem, it has the lowest computation cost in compare to existing related schemes. Furthermore, the proposed scheme does not involve registration center in the authentication process, for which it is having lowest communication cost compared with existing related schemes.", "title": "" }, { "docid": "dbc8564d588199436686bf234514a20f", "text": "1. MOTIVATION AND SUMMARY Traditional Database Management Systems (DBMS) software is built on the concept of persistent data sets, that are stored reliably in stable storage and queried/updated several times throughout their lifetime. For several emerging application domains, however, data arrives and needs to be processed on a continuous ( ) basis, without the benefit of several passes over a static, persistent data image. Such continuous data streams arise naturally, for example, in the network installations of large Telecom and Internet service providers where detailed usage information (Call-Detail-Records (CDRs), SNMP/RMON packet-flow data, etc.) from different parts of the underlying network needs to be continuously collected and analyzed for interesting trends. Other applications that generate rapid, continuous and large volumes of stream data include transactions in retail chains, ATM and credit card operations in banks, financial tickers, Web server log records, etc. In most such applications, the data stream is actually accumulated and archived in the DBMS of a (perhaps, off-site) data warehouse, often making access to the archived data prohibitively expensive. Further, the ability to make decisions and infer interesting patterns on-line (i.e., as the data stream arrives) is crucial for several mission-critical tasks that can have significant dollar value for a large corporation (e.g., telecom fraud detection). As a result, recent years have witnessed an increasing interest in designing data-processing algorithms that work over continuous data streams, i.e., algorithms that provide results to user queries while looking at the relevant data items only once and in a fixed order (determined by the stream-arrival pattern). Two key parameters for query processing over continuous datastreams are (1) the amount of memory made available to the online algorithm, and (2) the per-item processing time required by the query processor. The former constitutes an important constraint on the design of stream processing algorithms, since in a typical streaming environment, only limited memory resources are available to the query-processing algorithms. In these situations, we need algorithms that can summarize the data stream(s) involved in a concise, but reasonably accurate, synopsis that can be stored in the allotted (small) amount of memory and can be used to provide approximate answers to user queries along with some reasonable guarantees on the quality of the approximation. Such approx-", "title": "" }, { "docid": "725e92f13cc7c03b890b5d2e7380b321", "text": "Developing algorithms for solving high-dimensional partial differential equations (PDEs) has been an exceedingly difficult task for a long time, due to the notoriously difficult problem known as “the curse of dimensionality”. This paper presents a deep learning-based approach that can handle general high-dimensional parabolic PDEs. 
To this end, the PDEs are reformulated as a control theory problem and the gradient of the unknown solution is approximated by neural networks, very much in the spirit of deep reinforcement learning with the gradient acting as the policy function. Numerical results on examples including the nonlinear Black-Scholes equation, the Hamilton-Jacobi-Bellman equation, and the Allen-Cahn equation suggest that the proposed algorithm is quite effective in high dimensions, in terms of both accuracy and speed. This opens up new possibilities in economics, finance, operational research, and physics, by considering all participating agents, assets, resources, or particles together at the same time, instead of making ad hoc assumptions on their inter-relationships.", "title": "" }, { "docid": "d35736158d3f38503f0f2090c4e47811", "text": "This study examines the role of the decision environment in how well business intelligence (BI) capabilities are leveraged to achieve BI success. We examine the decision environment in terms of the types of decisions made and the information processing needs of the organization. Our findings suggest that technological capabilities such as data quality, user access and the integration of BI with other systems are necessary for BI success, regardless of the decision environment. However, the decision environment does influence the relationship between BI success and capabilities, such as the extent to which BI supports flexibility and risk in decision making. 2013 Elsevier B.V. All rights reserved. * Corresponding author. Tel.: +32 16248854. E-mail addresses: oyku.isik@vlerick.com (Ö. Işık), mary.jones@unt.edu (M.C. Jones), anna.sidorova@unt.edu (A. Sidorova).", "title": "" }, { "docid": "5f6b9fd58c633bf1de0158f0356bda80", "text": "Crop diseases are a major threat to food security, but their rapid identification remains difficult in many parts of the world due to the lack of the necessary infrastructure. The combination of increasing global smartphone penetration and recent advances in computer vision made possible by deep learning has paved the way for smartphone-assisted disease diagnosis. Using a public dataset of 54,306 images of diseased and healthy plant leaves collected under controlled conditions, we train a deep convolutional neural network to identify 14 crop species and 26 diseases (or absence thereof). The trained model achieves an accuracy of 99.35% on a held-out test set, demonstrating the feasibility of this approach. Overall, the approach of training deep learning models on increasingly large and publicly available image datasets presents a clear path toward smartphone-assisted crop disease diagnosis on a massive global scale.", "title": "" }, { "docid": "f50f7daeac03fbd41f91ff48c054955b", "text": "Neuronal signalling and communication underpin virtually all aspects of brain activity and function. Network science approaches to modelling and analysing the dynamics of communication on networks have proved useful for simulating functional brain connectivity and predicting emergent network states. This Review surveys important aspects of communication dynamics in brain networks. We begin by sketching a conceptual framework that views communication dynamics as a necessary link between the empirical domains of structural and functional connectivity. 
We then consider how different local and global topological attributes of structural networks support potential patterns of network communication, and how the interactions between network topology and dynamic models can provide additional insights and constraints. We end by proposing that communication dynamics may act as potential generative models of effective connectivity and can offer insight into the mechanisms by which brain networks transform and process information.", "title": "" }, { "docid": "d6d0a5d1ddffaefe6d2f0944e50b3b70", "text": "We present a generalization of the scalar importance function employed by Metropolis Light Transport (MLT) and related Markov chain rendering algorithms. Although MLT is known for its user-designable mutation rules, we demonstrate that its scalar contribution function is similarly programmable in an unbiased manner. Normally, MLT samples light paths with a tendency proportional to their brightness. For a range of scenes, we demonstrate that this importance function is undesirable and leads to poor sampling behaviour. Instead, we argue that simple user-designable importance functions can concentrate work in transport effects of interest and increase estimator efficiency. Unlike mutation rules, these functions are not encumbered with the calculation of transitional probabilities. We introduce alternative importance functions, which encourage the Markov chain to aggressively pursue sampling goals of interest to the user. In addition, we prove that these importance functions may adapt over the course of a render in an unbiased fashion. To that end, we introduce multi-stage MLT, a general rendering setting for creating such adaptive functions. This allows us to create a noise-sensitive MLT renderer whose importance function explicitly targets noise. Finally, we demonstrate that our techniques are compatible with existing Markov chain rendering algorithms and significantly improve their visual efficiency.", "title": "" }, { "docid": "940c6c0cd05498a95eb486c8f592474f", "text": "We present familiar principles involving objects and classes (of objects), pairing (on objects), choice (selecting elements from classes), positive classes (elements of an ultrafilter), and definable classes (definable using the preceding notions). We also postulate the existence of a divine object in the formalized sense that it lies in every definable positive class. ZFC (even extended with certain hypotheses just shy of the existence of a measurable cardinal) is interpretable in the resulting system. This establishes the consistency of mathematics relative to the consistency of these systems. Measurable cardinals are used to interpret and prove the consistency of the system. Positive classes and various kinds of divine objects have played significant roles in theology. 1. T1: Objects, classes, pairing. 2. T2: Extensionality, choice operator. 3. T3: Positive classes. 4. T4: Definable classes. 5. T5: Divine objects. 6. Interpreting ZFC in T5. 7. Interpreting a strong extension of ZFC in T5. 8. Without Extensionality.", "title": "" }, { "docid": "ad398514cd152e85ffccc4da522a7155", "text": "We introduce a framework for unsupervised learning of structured predictors with overlapping, global features. Each input’s latent representation is predicted conditional on the observed data using a feature-rich conditional random field (CRF). Then a reconstruction of the input is (re)generated, conditional on the latent structure, using a generative model which factorizes similarly to the CRF. 
The autoencoder formulation enables efficient exact inference without resorting to unrealistic independence assumptions or restricting the kinds of features that can be used. We illustrate connections to traditional autoencoders, posterior regularization, and multi-view learning. We then show competitive results with instantiations of the framework for two canonical tasks in natural language processing: part-of-speech induction and bitext word alignment, and show that training the proposed model can be substantially more efficient than a comparable feature-rich baseline.", "title": "" }, { "docid": "205e03f589758316987e3eaacee13430", "text": "Motivated by the technology evolutions and the corresponding changes in user-consumer behavioral patterns, this study applies a Location Based Services (LBS) environmental determinants’ integrated theoretical framework by investigating its role on classifying, profiling and predicting user-consumer behavior. For that purpose, a laboratory LBS application was developed and tested with 110 subjects within the context of a field trial setting in the entertainment industry. Users are clustered into two main types having the “physical” and the “social density” determinants to best discriminate between the resulting clusters. Also, the two clusters differ in terms of their spatial and verbal ability and attitude towards the LBS environment. Similarly, attitude is predicted by the “location”, the “device” and the “mobile connection” LBS environmental determinants for the “walkers in place” (cluster #1) and by all LBS environmental determinants (i.e. those determinants of cluster #1 plus the “digital” and the “social environment” ones) for the “walkers in space” (cluster #2). Finally, the attitude of both clusters’ participants towards the LBS environment affects their behavioral intentions towards using LBS applications, with limited, however, predicting power observed in this relationship.", "title": "" }, { "docid": "66638a2a66f6829f5b9ac72e4ace79ed", "text": "The Theory of Waste Management is a unified body of knowledge about waste and waste management, and it is founded on the expectation that waste management is to prevent waste to cause harm to human health and the environment and promote resource use optimization. Waste Management Theory is to be constructed under the paradigm of Industrial Ecology as Industrial Ecology is equally adaptable to incorporate waste minimization and/or resource use optimization goals and values.", "title": "" }, { "docid": "08d8e372c5ae4eef9848552ee87fbd64", "text": "What chiefly distinguishes cerebral cortex from other parts of the central nervous system is the great diversity of its cell types and inter-connexions. It would be astonishing if such a structure did not profoundly modify the response patterns of fibres coming into it. In the cat's visual cortex, the receptive field arrangements of single cells suggest that there is indeed a degree of complexity far exceeding anything yet seen at lower levels in the visual system. In a previous paper we described receptive fields of single cortical cells, observing responses to spots of light shone on one or both retinas (Hubel & Wiesel, 1959). In the present work this method is used to examine receptive fields of a more complex type (Part I) and to make additional observations on binocular interaction (Part II). 
This approach is necessary in order to understand the behaviour of individual cells, but it fails to deal with the problem of the relationship of one cell to its neighbours. In the past, the technique of recording evoked slow waves has been used with great success in studies of functional anatomy. It was employed by Talbot & Marshall (1941) and by Thompson, Woolsey & Talbot (1950) for mapping out the visual cortex in the rabbit, cat, and monkey. Daniel & Whitteiidge (1959) have recently extended this work in the primate. Most of our present knowledge of retinotopic projections, binocular overlap, and the second visual area is based on these investigations. Yet the method of evoked potentials is valuable mainly for detecting behaviour common to large populations of neighbouring cells; it cannot differentiate functionally between areas of cortex smaller than about 1 mm2. To overcome this difficulty a method has in recent years been developed for studying cells separately or in small groups during long micro-electrode penetrations through nervous tissue. Responses are correlated with cell location by reconstructing the electrode tracks from histological material. These techniques have been applied to CAT VISUAL CORTEX 107 the somatic sensory cortex of the cat and monkey in a remarkable series of studies by Mountcastle (1957) and Powell & Mountcastle (1959). Their results show that the approach is a powerful one, capable of revealing systems of organization not hinted at by the known morphology. In Part III of the present paper we use this method in studying the functional architecture of the visual cortex. It helped us attempt to explain on anatomical …", "title": "" } ]
scidocsrr
903654ea037f0a4bd83d6718a37da9b6
Semantic Mapping with Simultaneous Object Detection and Localization
[ { "docid": "019c27341b9811a7347467490cea6a72", "text": "For intelligent robots to interact in meaningful ways with their environment, they must understand both the geometric and semantic properties of the scene surrounding them. The majority of research to date has addressed these mapping challenges separately, focusing on either geometric or semantic mapping. In this paper we address the problem of building environmental maps that include both semantically meaningful, object-level entities and point- or mesh-based geometrical representations. We simultaneously build geometric point cloud models of previously unseen instances of known object classes and create a map that contains these object models as central entities. Our system leverages sparse, feature-based RGB-D SLAM, image-based deep-learning object detection and 3D unsupervised segmentation.", "title": "" }, { "docid": "091c57447d5a3c97d3ff1afb57ebb4e3", "text": "We present ORB-SLAM2, a complete simultaneous localization and mapping (SLAM) system for monocular, stereo and RGB-D cameras, including map reuse, loop closing, and relocalization capabilities. The system works in real time on standard central processing units in a wide variety of environments from small hand-held indoors sequences, to drones flying in industrial environments and cars driving around a city. Our back-end, based on bundle adjustment with monocular and stereo observations, allows for accurate trajectory estimation with metric scale. Our system includes a lightweight localization mode that leverages visual odometry tracks for unmapped regions and matches with map points that allow for zero-drift localization. The evaluation on 29 popular public sequences shows that our method achieves state-of-the-art accuracy, being in most cases the most accurate SLAM solution. We publish the source code, not only for the benefit of the SLAM community, but with the aim of being an out-of-the-box SLAM solution for researchers in other fields.", "title": "" } ]
[ { "docid": "ede29bc41058b246ceb451d5605cce2c", "text": "Knowledge graphs have challenged the existing embedding-based approaches for representing their multifacetedness. To address some of the issues, we have investigated some novel approaches that (i) capture the multilingual transitions on different language-specific versions of knowledge, and (ii) encode the commonly existing monolingual knowledge with important relational properties and hierarchies. In addition, we propose the use of our approaches in a wide spectrum of NLP tasks that have not been well explored by related works.", "title": "" }, { "docid": "79a16052e5e6a44ca6f9fef8ebac3c2d", "text": "Plants are among the earth's most useful and beautiful products of nature. Plants have been crucial to mankind's survival. The urgent need is that many plants are at the risk of extinction. About 50% of ayurvedic medicines are prepared using plant leaves and many of these plant species belong to the endanger group. So it is indispensable to set up a database for plant protection. We believe that the first step is to teach a computer how to classify plants. Leaf /plant identification has been a challenge for many researchers. Several researchers have proposed various techniques. In this paper we have proposed a novel framework for recognizing and identifying plants using shape, vein, color, texture features which are combined with Zernike movements. Radial basis probabilistic neural network (RBPNN) has been used as a classifier. To train RBPNN we use a dual stage training algorithm which significantly enhances the performance of the classifier. Simulation results on the Flavia leaf dataset indicates that the proposed method for leaf recognition yields an accuracy rate of 93.82%", "title": "" }, { "docid": "252da9d23f94d9f0bc468d9b093352f1", "text": "3D context has been shown to be extremely important for scene understanding, yet very little research has been done on integrating context information with deep neural network architectures. This paper presents an approach to embed 3D context into the topology of a neural network trained to perform holistic scene understanding. Given a depth image depicting a 3D scene, our network aligns the observed scene with a predefined 3D scene template, and then reasons about the existence and location of each object within the scene template. In doing so, our model recognizes multiple objects in a single forward pass of a 3D convolutional neural network, capturing both global scene and local object information simultaneously. To create training data for this 3D network, we generate partially synthetic depth images which are rendered by replacing real objects with a repository of CAD models of the same object category1. Extensive experiments demonstrate the effectiveness of our algorithm compared to the state of the art.", "title": "" }, { "docid": "6e79df8b9db8bd81774d72b8ef672760", "text": "Concepts of sexuality and gender identity are undergoing re-examination in society. Recent media attention has intensified interest in the area, although reliable information is sometimes lacking. Gender dysphoria, and its extreme form, transsexualism, frequently brings sufferers into contact with psychiatric, social, and mental health professionals, and surgical caregivers. Treatment of these patients often represents a challenge on many levels. 
Some guidelines for this care are outlined.", "title": "" }, { "docid": "25c815f5fc0cf87bdef5e069cbee23a8", "text": "This paper presents a 9-bit subrange analog-to-digital converter (ADC) consisting of a 3.5-bit flash coarse ADC, a 6-bit successive-approximation-register (SAR) fine ADC, and a differential segmented capacitive digital-to-analog converter (DAC). The flash ADC controls the thermometer coarse capacitors of the DAC and the SAR ADC controls the binary fine ones. Both theoretical analysis and behavioral simulations show that the differential non-linearity (DNL) of a SAR ADC with a segmented DAC is better than that of a binary ADC. The merged switching of the coarse capacitors significantly enhances overall operation speed. At 150 MS/s, the ADC consumes 1.53 mW from a 1.2-V supply. The effective number of bits (ENOB) is 8.69 bits and the effective resolution bandwidth (ERBW) is 100 MHz. With a 1.3-V supply voltage, the sampling rate is 200 MS/s with 2.2-mW power consumption. The ENOB is 8.66 bits and the ERBW is 100 MHz. The FOMs at 1.3 V and 200 MS/s, 1.2 V and 150 MS/s and 1 V and 100 MS/s are 27.2, 24.7, and 17.7 fJ/conversion-step, respectively.", "title": "" }, { "docid": "c158fbbcf592ff372d0d317494f79537", "text": "The concept of no- or minimal-preparation veneers is more than 25 years old, yet there is no classification system categorizing the extent of preparation for different veneer treatments. The lack of veneer preparation classifications creates misunderstanding and miscommunication with patients and within the dental profession. Such a system could be indicated in various clinical scenarios and would benefit dentists and patients, providing a guide for conservatively preparing and placing veneers. A classification system is proposed to divide preparation and veneering into reduction--referred to as space requirement, working thickness, or material room--volume of enamel remaining, and percentage of dentin exposed. Using this type of metric provides an accurate measurement system to quantify tooth structure removal, with preferably no reduction, on a case-by-case basis, dissolve uncertainty, and aid with multiple aspects of treatment planning and communication.", "title": "" }, { "docid": "da129ff6527c7b8af0f34a910051e5ef", "text": "A compact ultra-wideband (UWB) bandpass filter is proposed based on the coplanar-waveguide (CPW) split-mode resonator. By suitably introducing a short-circuited stub to implement the shunt inductance between two quarter wavelength CPW stepped-impedance resonators, a strong magnetic coupling may be realized so that a CPW split-mode resonator may be constructed. Moreover, by properly designing the dual-metal-plane structure, one may accomplish a microstrip-to-CPW feeding mechanism to provide strong enough capacitive coupling for bandwidth enhancement and also introduce an extra electric coupling between input and output ports so that two transmission zeros may be created for selectivity improvement. The implemented UWB filter shows a fractional bandwidth of 116% and two transmission zeros at 1.705 and 11.39 GHz. Good agreement between simulated and measured responses is observed.", "title": "" }, { "docid": "5c95665b5608a40d1dc2499c6fd6d21e", "text": "Camber is one of the most significant defects in the first stages of hot rolling of steel plates. This kind of defect may cause the clogging of finishing mills, but it is also the visible effect of alterations in the process. 
In this paper we describe the design and implementation of a computer vision system for real-time measurement of camber in a hot rolling mill. Our goal is to provide valuable feedback information to improve AGC operation. As ground truth values are almost impossible to obtain, we have analyzed the relationship among measured camber and other process variables in order to validate our results. The system has proved to be robust, and at the same time there is a strong relationship between known problems in the mill and system readings.", "title": "" }, { "docid": "ba4faa0390c2c75aab79822a1e523e71", "text": "The number of linked data sources and the size of the linked open data graph keep growing every day. As a consequence, semantic RDF services are more and more confronted to various “big data” problems. Query processing is one of them and needs to be efficiently addressed with executions over scalable, highly available and fault tolerant frameworks. Data management systems requiring these properties are rarely built from scratch but are rather designed on top of an existing cluster computing engine. In this work, we consider the processing of SPARQL queries with Apache Spark. We propose and compare five different query processing approaches based on different join execution models and Spark components. A detailed experimentation, on real-world and synthetic data sets, emphasizes that two approaches tailored for the RDF data model outperform the other ones on all major query shapes, i.e., star, snowflake, chain and hybrid.", "title": "" }, { "docid": "1623cdb614ad63675d982e8396e4ff01", "text": "Recognizing textual entailment is a fundamental task in a variety of text mining or natural language processing applications. This paper proposes a simple neural model for RTE problem. It first matches each word in the hypothesis with its most-similar word in the premise, producing an augmented representation of the hypothesis conditioned on the premise as a sequence of word pairs. The LSTM model is then used to model this augmented sequence, and the final output from the LSTM is fed into a softmax layer to make the prediction. Besides the base model, in order to enhance its performance, we also proposed three techniques: the integration of multiple word-embedding library, bi-way integration, and ensemble based on model averaging. Experimental results on the SNLI dataset have shown that the three techniques are effective in boosting the predicative accuracy and that our method outperforms several state-of-the-state ones.", "title": "" }, { "docid": "1211ff582838af121f181608cca43766", "text": "We consider oblivious storage systems hiding both the contents of the data as well as access patterns from an untrusted cloud provider. We target a scenario where multiple users from a trusted group (e.g., corporate employees) asynchronously access and edit potentially overlapping data sets through a trusted proxy mediating client-cloud communication. The main contribution of our paper is twofold. Foremost, we initiate the first formal study of asynchronicity in oblivious storage systems. We provide security definitions for scenarios where both client requests and network communication are asynchronous (and in fact, even adversarially scheduled). 
While security issues in ObliviStore (Stefanov and Shi, S&P 2013) have recently been surfaced, our treatment shows that also CURIOUS (Bindschaedler at al., CCS 2015), proposed with the exact goal of preventing these attacks, is insecure under asynchronous scheduling of network communication. Second, we develop and evaluate a new oblivious storage system, called Tree-based Asynchronous Oblivious Store, or TaoStore for short, which we prove secure in asynchronous environments. TaoStore is built on top of a new tree-based ORAM scheme that processes client requests concurrently and asynchronously in a non-blocking fashion. This results in a substantial gain in throughput, simplicity, and flexibility over previous systems.", "title": "" }, { "docid": "68b1e52ae7298648563941bf64c683e3", "text": "The recent concept of ‘‘Health Insurance Marketplace’’ introduced to facilitate the purchase of health insurance by comparing different insurance plans in terms of price, coverage benefits, and quality designates a key role to the health insurance providers. Currently, the web based tools available to search for health insurance plans are deficient in offering personalized recommendations based on the coverage benefits and cost. Therefore, anticipating the users’ needs we propose a cloud based framework that offers personalized recommendations about the health insurance plans.We use theMulti-attribute Utility Theory (MAUT) to help users compare different health insurance plans based on coverage and cost criteria, such as: (a) premium, (b) co-pay, (c) deductibles, (d) co-insurance, and (e) maximum benefit offered by a plan. To overcome the issues arising possibly due to the heterogeneous data formats and different plan representations across the providers, we present a standardized representation for the health insurance plans. The plan information of each of the providers is retrieved using the Data as a Service (DaaS). The framework is implemented as Software as a Service (SaaS) to offer customized recommendations by applying a ranking technique for the identified plans according to the user specified criteria. © 2014 Published by Elsevier B.V.", "title": "" }, { "docid": "83355cf4228e84a718bffba06250520a", "text": "Fabric defect detection is now an active area of research for identifying and resolving problems of textile industry, to enhance the performance and also to maintain the quality of fabric. The traditional system of visual inspection by human beings is extremely time consuming, high on costs as well as not reliable since it is highly error prone. Defect detection & classification are the major challenges in defect inspection. Hence in order to overcome these drawbacks, faster and cost effective automatic defect detection is very necessary. Considering these necessities, this paper proposes wavelet filter method. It also explains in detail its various techniques of getting final output like preprocessing, decomposition, thresholding, and noise eliminating.", "title": "" }, { "docid": "5a077d1d4d6c212b7f817cc115bf31bd", "text": "Focus group interviews are widely used in health research to explore phenomena and are accepted as a legitimate qualitative methodology. They are used to draw out interaction data from discussions among participants; researchers running these groups need to be skilled in interviewing and in managing groups, group dynamics and group discussions. 
This article follows Doody et al's (2013) article on the theory of focus group research; it addresses the preparation for focus groups relating to the research environment, interview process, duration, participation of group members and the role of the moderator. The article aims to assist researchers to prepare and plan for focus groups and to develop an understanding of them, so information from the groups can be used for academic studies or as part of a research proposal.", "title": "" }, { "docid": "705ba6bc49669ba22ff2408a3f9a984c", "text": "Clinicians spend a significant amount of time inputting free-form textual notes into Electronic Health Records (EHR) systems. Much of this documentation work is seen as a burden, reducing time spent with patients and contributing to clinician burnout. With the aspiration of AI-assisted note-writing, we propose a new language modeling task predicting the content of notes conditioned on past data from a patient’s medical record, including patient demographics, labs, medications, and past notes. We train generative models using the public, de-identified MIMIC-III dataset and compare generated notes with those in the dataset on multiple measures. We find that much of the content can be predicted, and that many common templates found in notes can be learned. We discuss how such models can be useful in supporting assistive note-writing features such as error-detection and auto-complete.", "title": "" }, { "docid": "48cfb0c1b3b2ce7ce00aa972a3e599e7", "text": "This paper discusses some relevant work of emotion detection from text which is a main field in affecting computing and artificial intelligence field. Artificial intelligence is not only the ability for a machine to think or interact with end user smartly but also to act humanly or rationally so emotion detection from text plays a key role in human-computer interaction. It has attracted the attention of many researchers due to the great revolution of emotional data available on social and web applications of computers and much more in mobile devices. This survey mainly collects history of unsupervised emotion detection from text.", "title": "" }, { "docid": "d5b5600e6b6a696b88ced8e725ddf2c5", "text": "Nipah virus (NiV, Henipavirus) is a highly lethal emergent zoonotic paramyxovirus responsible for repeated human outbreaks of encephalitis in South East Asia. There are no approved vaccines or treatments, thus improved understanding of NiV biology is imperative. NiV matrix protein recruits a plethora of cellular machinery to scaffold and coordinate virion budding. Intriguingly, matrix also hijacks cellular trafficking and ubiquitination pathways to facilitate transient nuclear localization. While the biological significance of matrix nuclear localization for an otherwise cytoplasmic virus remains enigmatic, the molecular details have begun to be characterized, and are conserved among matrix proteins from divergent paramyxoviruses. Matrix protein appropriation of cellular machinery will be discussed in terms of its early nuclear targeting and later role in virion assembly.", "title": "" }, { "docid": "a27ffbf7428fb863c30612342c61d757", "text": "Social media provide a platform for users to express their opinions and share information. 
Understanding public health opinions on social media, such as Twitter, offers a unique approach to characterizing common health issues such as diabetes, diet, exercise, and obesity (DDEO); however, collecting and analyzing a large scale conversational public health data set is a challenging research task. The goal of this research is to analyze the characteristics of the general public’s opinions in regard to diabetes, diet, exercise and obesity (DDEO) as expressed on Twitter. A multi-component semantic and linguistic framework was developed to collect Twitter data, discover topics of interest about DDEO, and analyze the topics. From the extracted 4.5 million tweets, 8% of tweets discussed diabetes, 23.7% diet, 16.6% exercise, and 51.7% obesity. The strongest correlation among the topics was determined between exercise and obesity (p < .0002). Other notable correlations were: diabetes and obesity (p < .0005), and diet and obesity (p < .001). DDEO terms were also identified as subtopics of each of the DDEO topics. The frequent subtopics discussed along with “Diabetes”, excluding the DDEO terms themselves, were blood pressure, heart attack, yoga, and Alzheimer. The non-DDEO subtopics for “Diet” included vegetarian, pregnancy, celebrities, weight loss, religious, and mental health, while subtopics for “Exercise” included computer games, brain, fitness, and daily plan. Non-DDEO subtopics for “Obesity” included Alzheimer, cancer, and children. With 2.67 billion social media users in 2016, publicly available data such as Twitter posts can be utilized to support clinical providers, public health experts, and social scientists in better understanding common public opinions in regard to diabetes, diet, exercise, and obesity.", "title": "" }, { "docid": "f62b7597dd84e4bb18a32fc1e5713394", "text": "Automated personality prediction from social media is gaining increasing attention in natural language processing and social sciences communities. However, due to high labeling costs and privacy issues, the few publicly available datasets are of limited size and low topic diversity. We address this problem by introducing a large-scale dataset derived from Reddit, a source so far overlooked for personality prediction. The dataset is labeled with Myers-Briggs Type Indicators (MBTI) and comes with a rich set of features for more than 9k users. We carry out a preliminary feature analysis, revealing marked differences between the MBTI dimensions and poles. Furthermore, we use the dataset to train and evaluate benchmark personality prediction models, achieving macro F1-scores between 67% and 82% on the individual dimensions and 82% accuracy for exact or one-off accurate type prediction. These results are encouraging and comparable with the reliability of standardized tests.", "title": "" }, { "docid": "0b17b0e7141b287bdd8c8467d6581748", "text": "Two new wideband four-way out-of-phase slotline power dividers are proposed in this paper. The half-wavelength slotlines are employed to construct the presented compact power dividers. Based on the proposed power-dividing circuit, a four-way power divider is implemented with compact size and simple structure. To obtain high isolation among the four output ports and good output impedance matching, another four-way out-of-phase slotline power divider with improved isolation performance is designed by introducing an air-bridge resistor and two slotlines with isolation resistors. 
The simulated and measured results of the proposed power dividers demonstrate reasonable performance of impedance matching, insertion loss, amplitude balancing, and isolation among the output ports.", "title": "" } ]
scidocsrr
676244e97f2fcfc0e1f7a87e965a01fb
Model-Free reinforcement learning with continuous action in practice
[ { "docid": "cae4703a50910c7718284c6f8230a4bc", "text": "Autonomous helicopter flight is widely regarded to be a highly challenging control problem. Despite this fact, human experts can reliably fly helicopters through a wide range of maneuvers, including aerobatic maneuvers at the edge of the helicopter’s capabilities. We present apprenticeship learning algorithms, which leverage expert demonstrations to efficiently learn good controllers for tasks being demonstrated by an expert. These apprenticeship learning algorithms have enabled us to significantly extend the state of the art in autonomous helicopter aerobatics. Our experimental results include the first autonomous execution of a wide range of maneuvers, including but not limited to in-place flips, in-place rolls, loops and hurricanes, and even auto-rotation landings, chaos and tic-tocs, which only exceptional human pilots can perform. Our results also include complete airshows, which require autonomous transitions between many of these maneuvers. Our controllers perform as well as, and often even better than, our expert pilot.", "title": "" } ]
[ { "docid": "17b2adeaa934fe769ae3f3460e87b5cc", "text": "We aim to improve on the design of procedurally generated game levels. We propose a method which empowers game designers to author and control level generators, by expressing gameplay-related design constraints. Following a survey conducted on recent procedural level generation methods, we argue that gameplay-based control is currently the most natural control mechanism available for generative methods. Our method uses graph grammars, the result of the designer-expressed constraints, to generate sequences of desired player actions. These action graphs are used as the basis for the spatial structure and content of game levels; they guide the layout process and indicate the required content related to such actions. We showcase our approach with a case study on a 3D dungeon crawler game. Results allow us to conclude that our control mechanisms are both expressive and powerful, effectively supporting designers to procedurally generate game", "title": "" }, { "docid": "198967b505c9ded9255bff7b82fb2781", "text": "Generative adversarial nets (GANs) have been successfully applied to the artificial generation of image data. In terms of text data, much has been done on the artificial generation of natural language from a single corpus. We consider multiple text corpora as the input data, for which there can be two applications of GANs: (1) the creation of consistent cross-corpus word embeddings given different word embeddings per corpus; (2) the generation of robust bag-of-words document embeddings for each corpora. We demonstrate our GAN models on real-world text data sets from different corpora, and show that embeddings from both models lead to improvements in supervised learning problems.", "title": "" }, { "docid": "7974d0299ffcca73bb425fb72f463429", "text": "The development of human gut microbiota begins as soon as the neonate leaves the protective environment of the uterus (or maybe in-utero) and is exposed to innumerable microorganisms from the mother as well as the surrounding environment. Concurrently, the host responses to these microbes during early life manifest during the development of an otherwise hitherto immature immune system. The human gut microbiome, which comprises an extremely diverse and complex community of microorganisms inhabiting the intestinal tract, keeps on fluctuating during different stages of life. While these deviations are largely natural, inevitable and benign, recent studies show that unsolicited perturbations in gut microbiota configuration could have strong impact on several features of host health and disease. Our microbiota undergoes the most prominent deviations during infancy and old age and, interestingly, our immune health is also in its weakest and most unstable state during these two critical stages of life, indicating that our microbiota and health develop and age hand-in-hand. However, the mechanisms underlying these interactions are only now beginning to be revealed. 
The present review summarizes the evidences related to the age-associated changes in intestinal microbiota and vice-versa, mechanisms involved in this bi-directional relationship, and the prospective for development of microbiota-based interventions such as probiotics for healthy aging.", "title": "" }, { "docid": "196ddcefb2c3fcb6edd5e8d108f7e219", "text": "This paper may be considered as a practical reference for those who wish to add (now sufficiently matured) Agent Based modeling to their analysis toolkit and may or may not have some System Dynamics or Discrete Event modeling background. We focus on systems that contain large numbers of active objects (people, business units, animals, vehicles, or even things like projects, stocks, products, etc. that have timing, event ordering or other kind of individual behavior associated with them). We compare the three major paradigms in simulation modeling: System Dynamics, Discrete Event and Agent Based Modeling with respect to how they approach such systems. We show in detail how an Agent Based model can be built from an existing System Dynamics or a Discrete Event model and then show how easily it can be further enhanced to capture much more complicated behavior, dependencies and interactions thus providing for deeper insight in the system being modeled. Commonly understood examples are used throughout the paper; all models are specified in the visual language supported by AnyLogic tool. We view and present Agent Based modeling not as a substitution to older modeling paradigms but as a useful add-on that can be efficiently combined with System Dynamics and Discrete Event modeling. Several multi-paradigm model architectures are suggested.", "title": "" }, { "docid": "391b2716b952c1613d964fe58d70ee5f", "text": "BACKGROUND\nDue to an increasing number of norovirus infections in the last years rapid, specific, and sensitive diagnostic tools are needed. Reverse transcriptase-polymerase chain reactions (RT-PCR) have become the methods of choice. To minimize the working time and the risk of carryover contamination during the multi-step procedure of PCR the multiplex real-time RT-PCR for the simultaneous detection of genogroup I (GI) and II (GII) offers advantages for the handling of large amounts of clinical specimens.\n\n\nMETHODS\nWe have developed and evaluated a multiplex one-tube RT-PCR using a combination of optimized GI and GII specific primers located in the junction between ORF1 and ORF2 of the norovirus genome. For the detection of GI samples, a 3'-minor groove binder-DNA probe (GI-MGB-probe) were designed and used for the multiplex real-time PCR.\n\n\nRESULTS\nComparable results to those of our in-house nested PCR and monoplex real-time-PCR were only obtained using the GI specific MGB-probe. The MGB-probe forms extremely stable duplexes with single-stranded DNA targets, which enabled us to design a shorter probe (length 15 nucleotides) hybridizing to a more conserved part of the GI sequences. 97% of 100 previously norovirus positive specimens (tested by nested PCR and/or monoplex real-time PCR) were detected by the multiplex real-time PCR. 
A broad dynamic range from 2 x 10(1) to 2 x 10(7) genomic equivalents per assay using plasmid DNA standards for GI and GII were obtained and viral loads between 2.5 x 10(2) and 2 x 10(12) copies per ml stool suspension were detected.\n\n\nCONCLUSION\nThe one-tube multiplex RT real-time PCR using a minor groove binder-DNA probe for GI is a fast, specific, sensitive and cost-effective tool for the detection of norovirus infections in both mass outbreaks and sporadic cases and may have also applications in food and environmental testing.", "title": "" }, { "docid": "e425bba0f3ab24c226ab8881f3fe0780", "text": "We present a new method for solving total variation (TV) minimization problems in image restoration. The main idea is to remove some of the singularity caused by the nondifferentiability of the quantity |∇u| in the definition of the TV-norm before we apply a linearization technique such as Newton’s method. This is accomplished by introducing an additional variable for the flux quantity appearing in the gradient of the objective function, which can be interpreted as the normal vector to the level sets of the image u. Our method can be viewed as a primal-dual method as proposed by Conn and Overton [A Primal-Dual Interior Point Method for Minimizing a Sum of Euclidean Norms, preprint, 1994] and Andersen [Ph.D. thesis, Odense University, Denmark, 1995] for the minimization of a sum of Euclidean norms. In addition to possessing local quadratic convergence, experimental results show that the new method seems to be globally convergent.", "title": "" }, { "docid": "da28960f4a5daeb80aa5c344db326c8d", "text": "Adaptive traffic signal control, which adjusts traffic signal timing according to real-time traffic, has been shown to be an effective method to reduce traffic congestion. Available works on adaptive traffic signal control make responsive traffic signal control decisions based on human-crafted features (e.g. vehicle queue length). However, human-crafted features are abstractions of raw traffic data (e.g., position and speed of vehicles), which ignore some useful traffic information and lead to suboptimal traffic signal controls. In this paper, we propose a deep reinforcement learning algorithm that automatically extracts all useful features (machine-crafted features) from raw real-time traffic data and learns the optimal policy for adaptive traffic signal control. To improve algorithm stability, we adopt experience replay and target network mechanisms. Simulation results show that our algorithm reduces vehicle delay by up to 47% and 86% when compared to another two popular traffic signal control algorithms, longest queue first algorithm and fixed time control algorithm, respectively.", "title": "" }, { "docid": "8c0a8816028e8c50ebccbd812ee3a4e5", "text": "Songs are representation of audio signal and musical instruments. An audio signal separation system should be able to identify different audio signals such as speech, background noise and music. In a song the singing voice provides useful information regarding pitch range, music content, music tempo and rhythm. An automatic singing voice separation system is used for attenuating or removing the music accompaniment. The paper presents survey of the various algorithm and method for separating singing voice from musical background. 
From the survey it is observed that most of researchers used Robust Principal Component Analysis method for separation of singing voice from music background, by taking into account the rank of music accompaniment and the sparsity of singing voices.", "title": "" }, { "docid": "b812a26706cdd8b145422578a2b2d7b7", "text": "Currently successful execution of many human-in-the-loop manipulation tasks directly depends on the operator’s skill or a programmer’s knowledge of the presumed environment in which the task will be performed. Computer mediation of human inputs can augment this process to: (i) permit easy and rapid incorporation of local sensory information to augment performance, (ii) provide variable performance (precisionand power-) assist for output motions/forces, and (iii) hierarchical distribution of control and graceful degradation. Such mediated control has enormous potential to both reduce operator error and permit incorporation of greater autonomy into human/robot interaction. We chose to examine two sets of tasks to help enhance a remote operator’s performance: one involving precision in setting the forward velocity in the presence of variable loads/disturbances and the other improves safety by assisting the operator to avoid obstacles by carefully mediating the operator’s joystick inputs. The overall goal is to endow a set of local reflexes that use local sensory information to override the user’s input in order to enhance security, safety, and performance. In particular, we implement and evaluate the paradigm of mediated control for remotely driving a mobile robot system (a 1/10 scale remote control truck equipped with Basic Stamp 2 microprocessors) that will serve as our scaled inexpensive testbed. We discuss various aspects of the design and implementation of the Smart Car mobile platform. Beginning with the process of selection of the mechanical platform, we will discuss our motivation and reasoning for making several of the design choices necessary to create the testbed in the subsequent sections. In addition, we discuss the issues pertaining to implementation of two controllers. One for obstacle detection and the other for Wheel RPM PID control. We present the motivation and detail of the actual implementation. We use commercial off the shelf (COTS) hardware and integrated the whole system. Validation and calibration is a very critical step in the whole process. While an ad-hoc solution approach would also suffice for general demonstration purposes, we are interested in using the Smart Car as a scaled testbed. In particular, we are also interested in obtaining quantitative data and hence we spent considerable time validating our system. We first calibrate the independent subsystems, which include the transmitter/receiver, sensors, and then consider the calibration of the controller’s interaction with the entire system. A test setup with an external reference tachometer was created to validate open and closed loop response of the various controllers. Such mediated control has considerable significance and application in a wide range of applications ranging from “Smart Highway Systems” to semiautonomous exploratory rovers.", "title": "" }, { "docid": "fb885dabf20238e719a1a4ec9d6964d6", "text": "This paper investigates the comparative role of several factors, including information technology (IT), predicting the level of cooperation between two independent organizations. 
Drawing upon multiple theoretical perspectives, we develop five hypotheses about the impact on interorganizational cooperation of three sets of factors: (1) the characteristics of the environment withm which the relationship operates, (2) the characteristics of the relationship itself, and (3) the characteristics of how IT is used within the relationship. Each of these conceptual constructs is operationalized and measured within the specific context of buyer-supplier relationships in the automobile industry. The hypotheses are tested across two national settings (the US and Japan) using multiple regression analyzes conducted on a data set of 447 distinct relationships. The results indicate that the use of IT and the characteristics of the environment do not play the same role in explaining interorganizational cooperation in the two country settings, while in both countries the characteristics of the relationship significantly contribute to change in 112.", "title": "" }, { "docid": "b6515fb596ec0013be8815a42996e596", "text": "Correcting speech recognition errors on a mobile touchscreen device is an unavoidable but time-consuming task that requires a lot of user effort. To reduce this user effort, we previously proposed an error correction method using long context match with Web N-gram, which we combined with a simple gesture-based user interface. This method automatically replaces an error word with its corresponding correct word. However, it was evaluated only substitution errors in sentences, each of which involves only one error. In this paper, we extend this method to be used for more general cases when a sentence has more than one error. It recovers not only substitution errors but also deletion errors and insertion errors. For recovering deletion errors, it predicts a deleted word based on the phonemes and the part-of-speech tags of its surrounding words. Our experimental results show that the proposed method recovered the errors more accurately with less user effort than the conventional Word Confusion Network based error correction interface.", "title": "" }, { "docid": "c2802496761276ddc99949f8c5667bbc", "text": "A long-term goal of AI is to produce agents that can learn a diversity of skills throughout their lifetimes and continuously improve those skills via experience. A longstanding obstacle towards that goal is catastrophic forgetting, which is when learning new information erases previously learned information. Catastrophic forgetting occurs in artificial neural networks (ANNs), which have fueled most recent advances in AI. A recent paper proposed that catastrophic forgetting in ANNs can be reduced by promoting modularity, which can limit forgetting by isolating task information to specific clusters of nodes and connections (functional modules). While the prior work did show that modular ANNs suffered less from catastrophic forgetting, it was not able to produce ANNs that possessed task-specific functional modules, thereby leaving the main theory regarding modularity and forgetting untested. We introduce diffusion-based neuromodulation, which simulates the release of diffusing, neuromodulatory chemicals within an ANN that can modulate (i.e. up or down regulate) learning in a spatial region. 
On the simple diagnostic problem from the prior work, diffusion-based neuromodulation 1) induces task-specific learning in groups of nodes and connections (task-specific localized learning), which 2) produces functional modules for each subtask, and 3) yields higher performance by eliminating catastrophic forgetting. Overall, our results suggest that diffusion-based neuromodulation promotes task-specific localized learning and functional modularity, which can help solve the challenging, but important problem of catastrophic forgetting.", "title": "" }, { "docid": "549c0bacda4b663cb518003883d09f2c", "text": "The key to building an evolvable dialogue system in real-world scenarios is to ensure an affordable on-line dialogue policy learning, which requires the on-line learning process to be safe, efficient and economical. But in reality, due to the scarcity of real interaction data, the dialogue system usually grows slowly. Besides, the poor initial dialogue policy easily leads to bad user experience and incurs a failure of attracting users to contribute training data, so that the learning process is unsustainable. To accurately depict this, two quantitative metrics are proposed to assess safety and efficiency issues. For solving the unsustainable learning problem, we proposed a complete companion teaching framework incorporating the guidance from the human teacher. Since the human teaching is expensive, we compared various teaching schemes answering the question how and when to teach, to economically utilize teaching budget, so that make the online learning process affordable.", "title": "" }, { "docid": "27a28b74cd2c42c19fcb31c7e3c4ac67", "text": "The backpropagation of error algorithm (BP) is impossible to implement in a real brain. The recent success of deep networks in machine learning and AI, however, has inspired proposals for understanding how the brain might learn across multiple layers, and hence how it might approximate BP. As of yet, none of these proposals have been rigorously evaluated on tasks where BP-guided deep learning has proved critical, or in architectures more structured than simple fully-connected networks. Here we present results on scaling up biologically motivated models of deep learning on datasets which need deep networks with appropriate architectures to achieve good performance. We present results on the MNIST, CIFAR-10, and ImageNet datasets and explore variants of target-propagation (TP) and feedback alignment (FA) algorithms, and explore performance in both fullyand locally-connected architectures. We also introduce weight-transport-free variants of difference target propagation (DTP) modified to remove backpropagation from the penultimate layer. Many of these algorithms perform well for MNIST, but for CIFAR and ImageNet we find that TP and FA variants perform significantly worse than BP, especially for networks composed of locally connected units, opening questions about whether new architectures and algorithms are required to scale these approaches. Our results and implementation details help establish baselines for biologically motivated deep learning schemes going forward.", "title": "" }, { "docid": "66370e97fba315711708b13e0a1c9600", "text": "Cloud Computing is the long dreamed vision of computing as a utility, where users can remotely store their data into the cloud so as to enjoy the on-demand high quality applications and services from a shared pool of configurable computing resources. 
By data outsourcing, users can be relieved from the burden of local data storage and maintenance. However, the fact that users no longer have physical possession of the possibly large size of outsourced data makes the data integrity protection in Cloud Computing a very challenging and potentially formidable task, especially for users with constrained computing resources and capabilities. Thus, enabling public auditability for cloud data storage security is of critical importance so that users can resort to an external audit party to check the integrity of outsourced data when needed. To securely introduce an effective third party auditor (TPA), the following two fundamental requirements have to be met: 1) TPA should be able to efficiently audit the cloud data storage without demanding the local copy of data, and introduce no additional on-line burden to the cloud user; 2) The third party auditing process should bring in no new vulnerabilities towards user data privacy. In this paper, we utilize and uniquely combine the public key based homomorphic authenticator with random masking to achieve the privacy-preserving public cloud data auditing system, which meets all above requirements. To support efficient handling of multiple auditing tasks, we further explore the technique of bilinear aggregate signature to extend our main result into a multi-user setting, where TPA can perform multiple auditing tasks simultaneously. Extensive security and performance analysis shows the proposed schemes are provably secure and highly efficient.", "title": "" }, { "docid": "4124c4c838d0c876f527c021a2c58358", "text": "Early disease detection is a major challenge in agriculture field. Hence proper measures has to be taken to fight bioagressors of crops while minimizing the use of pesticides. The techniques of machine vision are extensively applied to agricultural science, and it has great perspective especially in the plant protection field,which ultimately leads to crops management. Our goal is early detection of bioagressors. The paper describes a software prototype system for pest detection on the infected images of different leaves. Images of the infected leaf are captured by digital camera and processed using image growing, image segmentation techniques to detect infected parts of the particular plants. Then the detected part is been processed for futher feature extraction which gives general idea about pests. This proposes automatic detection and calculating area of infection on leaves of a whitefly (Trialeurodes vaporariorum Westwood) at a mature stage.", "title": "" }, { "docid": "adfc4c28f616428d21332596cffe7c9b", "text": "The high increase in the number of companies competing in mature markets makes customer retention an important factor for any company to survive. Thus, many methodologies (e.g., data mining and statistics) have been proposed to analyse and study customer retention. The validity of such methods is not yet proved though. This paper tries to fill this gap by empirically comparing two techniques: Customer churn decision tree and logistic regression models. The paper proves the superiority of decision tree technique and stresses the needs for more advanced methods to churn modelling.", "title": "" }, { "docid": "820f67fa3521ee4af7da0e022a8d0be3", "text": "The visual appearance of rain is highly complex. Unlike the particles that cause other weather conditions such as haze and fog, rain drops are large and visible to the naked eye. 
Each drop refracts and reflects both scene radiance and environmental illumination towards an observer. As a result, a spatially distributed ensemble of drops moving at high velocities (rain) produces complex spatial and temporal intensity fluctuations in images and videos. To analyze the effects of rain, it is essential to understand the visual appearance of a single rain drop. In this paper, we develop geometric and photometric models for the refraction through, and reflection (both specular and internal) from, a rain drop. Our geometric and photometric models show that each rain drop behaves like a wide-angle lens that redirects light from a large field of view towards the observer. From this, we observe that in spite of being a transparent object, the brightness of the drop does not depend strongly on the brightness of the background. Our models provide the fundamental tools to analyze the complex effects of rain. Thus, we believe our work has implications for vision in bad weather as well as for efficient rendering of rain in computer graphics.", "title": "" }, { "docid": "ce0f21b03d669b72dd954352e2c35ab1", "text": "In this letter, a new technique is proposed for the design of a compact high-power low-pass rectangular waveguide filter with a wide spurious-free frequency behavior. Specifically, the new filter is intended for the suppression of the fundamental mode over a wide band in much higher power applications than the classical corrugated filter with the same frequency specifications. Moreover, the filter length is dramatically reduced when compared to alternative techniques previously considered.", "title": "" }, { "docid": "aed97de827b675d3ddb3e04274f73428", "text": "In paid search advertising on Internet search engines, advertisers bid for specific keywords, e.g. “Rental Cars LAX,” to display a text ad in the sponsored section of the search results page. The advertiser is charged when a user clicks on the ad. Many of the keywords in paid search campaigns generate few, if any, sales conversions – even over several months. This sparseness makes it difficult to assess the profit performance of individual keywords and has led to the practice of managing large groups of keywords together or relying on easy-to-calculate heuristics such as click-through rate (CTR). The authors develop a model of individual keyword conversion that addresses the sparseness problem. Conversion rates are estimated using a hierarchical Bayes binary choice model. This enables conversion to be based on both word-level covariates and shrinkage across keywords. The model is applied to keyword-level paid search data containing daily information on impressions, clicks and reservations for a major lodging chain. The results show that including keyword-level covariates and heterogeneity significantly improves conversion estimates. A holdout comparison suggests that campaign management based on the model, i.e., estimated costper-sale on a keyword level, would outperform existing managerial strategies.", "title": "" } ]
scidocsrr
ca19f476db37f63307a639f43f54dc81
Measuring classifier performance: a coherent alternative to the area under the ROC curve
[ { "docid": "a9bc9d9098fe852d13c3355ab6f81edb", "text": "The area under the ROC curve, or the equivalent Gini index, is a widely used measure of performance of supervised classification rules. It has the attractive property that it side-steps the need to specify the costs of the different kinds of misclassification. However, the simple form is only applicable to the case of two classes. We extend the definition to the case of more than two classes by averaging pairwise comparisons. This measure reduces to the standard form in the two class case. We compare its properties with the standard measure of proportion correct and an alternative definition of proportion correct based on pairwise comparison of classes for a simple artificial case and illustrate its application on eight data sets. On the data sets we examined, the measures produced similar, but not identical results, reflecting the different aspects of performance that they were measuring. Like the area under the ROC curve, the measure we propose is useful in those many situations where it is impossible to give costs for the different kinds of misclassification.", "title": "" } ]
[ { "docid": "b7171ab55a7539d54a4781dacebbfd49", "text": "This paper proposes an image processing technique for the detection of glaucoma which mainly affects the optic disc by increasing the cup size. During early stages it was difficult to detect Glaucoma, which is in fact second leading cause of blindness. In this paper glaucoma is categorized through extraction of features from retinal fundus images. The features include (i) Cup to Disc Ratio (CDR), which is one of the primary physiological parameter for the diagnosis of glaucoma and (ii) Ratio of Neuroretinal Rim in inferior, superior, temporal and nasal quadrants i.e. (ISNT quadrants) for verification of the ISNT rule. The novel technique is implemented on 80 retinal images and an accuracy of 97.5% is achieved taking an average computational time of 0.8141 seconds.", "title": "" }, { "docid": "fad2000af9be8c099c0fd88dc341d974", "text": "The computer technology has emerged as a necessity in our day to day life to deal with various aspects like education, banking, communication, entertainment etc. Computer system’s security is threatened by weapons named as malware to accomplish malicious intention of its writers. Various solutions are available to detect these threats like AV Scanners, Intrusion Detection System, and Firewalls etc. These solutions of malware detection traditionally use signatures of malware to detect their presence in our system. But these methods are also evaded due to some obfuscation techniques employed by malware authors. This survey paper highlights the existing detection and analysis methodologies used for these obfuscated malicious code.", "title": "" }, { "docid": "ea84c28e02a38caff14683681ea264d7", "text": "This paper presents a hierarchical framework for detecting local and global anomalies via hierarchical feature representation and Gaussian process regression. While local anomaly is typically detected as a 3D pattern matching problem, we are more interested in global anomaly that involves multiple normal events interacting in an unusual manner such as car accident. To simultaneously detect local and global anomalies, we formulate the extraction of normal interactions from training video as the problem of efficiently finding the frequent geometric relations of the nearby sparse spatio-temporal interest points. A codebook of interaction templates is then constructed and modeled using Gaussian process regression. A novel inference method for computing the likelihood of an observed interaction is also proposed. As such, our model is robust to slight topological deformations and can handle the noise and data unbalance problems in the training data. Simulations show that our system outperforms the main state-of-the-art methods on this topic and achieves at least 80% detection rates based on three challenging datasets.", "title": "" }, { "docid": "353bbc5e68ec1d53b3cd0f7c352ee699", "text": "• A submitted manuscript is the author's version of the article upon submission and before peer-review. There can be important differences between the submitted version and the official published version of record. People interested in the research are advised to contact the author for the final version of the publication, or visit the DOI to the publisher's website. • The final author version and the galley proof are versions of the publication after peer review. 
• The final published version features the final layout of the paper including the volume, issue and page numbers.", "title": "" }, { "docid": "a8ff2ea9e15569de375c34ef252d0dad", "text": "BIM (Building Information Modeling) has been recently implemented by many Architecture, Engineering, and Construction firms due to its productivity gains and long term benefits. This paper presents the development and implementation of a sustainability assessment framework for an architectural design using BIM technology in extracting data from the digital building model needed for determining the level of sustainability. The sustainability assessment is based on the LEED (Leadership in Energy and Environmental Design) Green Building Rating System, a widely accepted national standards for sustainable building design in the United States. The architectural design of a hotel project is used as a case study to verify the applicability of the framework.", "title": "" }, { "docid": "a701b681b5fb570cf8c0668fe691ee15", "text": "Coagulation-flocculation is a relatively simple physical-chemical technique in treatment of old and stabilized leachate which has been practiced using a variety of conventional coagulants. Polymeric forms of metal coagulants which are increasingly applied in water treatment are not well documented in leachate treatment. In this research, capability of poly-aluminum chloride (PAC) in the treatment of stabilized leachate from Pulau Burung Landfill Site (PBLS), Penang, Malaysia was studied. The removal efficiencies for chemical oxygen demand (COD), turbidity, color and total suspended solid (TSS) obtained using PAC were compared with those obtained using alum as a conventional coagulant. Central composite design (CCD) and response surface method (RSM) were applied to optimize the operating variables viz. coagulant dosage and pH. Quadratic models developed for the four responses (COD, turbidity, color and TSS) studied indicated the optimum conditions to be PAC dosage of 2g/L at pH 7.5 and alum dosage of 9.5 g/L at pH 7. The experimental data and model predictions agreed well. COD, turbidity, color and TSS removal efficiencies of 43.1, 94.0, 90.7, and 92.2% for PAC, and 62.8, 88.4, 86.4, and 90.1% for alum were demonstrated.", "title": "" }, { "docid": "1a9086eb63bffa5a36fde268fb74c7a6", "text": "This brief presents a simple reference circuit with channel-length modulation compensation to generate a reference voltage of 221 mV using subthreshold of MOSFETs at supply voltage of 0.85 V with power consumption of 3.3 muW at room temperature using TSMC 0.18-mum technology. The proposed circuit occupied in less than 0.0238 mm 2 achieves the reference voltage variation of 2 mV/V for supply voltage from 0.9 to 2.5V and about 6 mV of temperature variation in the range from -20degC to 120 degC. The agreement of simulation and measurement data is demonstrated", "title": "" }, { "docid": "fc4ea7391c1500851ec0d37beed4cd90", "text": "As a crucial operation, routing plays an important role in various communication networks. In the context of data and sensor networks, routing strategies such as shortest-path, multi-path and potential-based (“all-path”) routing have been developed. Existing results in the literature show that the shortest path and all-path routing can be obtained from L1 and L2 flow optimization, respectively. 
Based on this connection between routing and flow optimization in a network, in this paper we develop a unifying theoretical framework by considering flow optimization with mixed (weighted) L1/L2-norms. We obtain a surprising result: as we vary the trade-off parameter θ, the routing graphs induced by the optimal flow solutions span from shortest-path to multi-path to all-path routing-this entire sequence of routing graphs is referred to as the routing continuum. We also develop an efficient iterative algorithm for computing the entire routing continuum. Several generalizations are also considered, with applications to traffic engineering, wireless sensor networks, and network robustness analysis.", "title": "" }, { "docid": "0b087e7e36bef7a6d92b8e44bd22047a", "text": "We investigated whether the dynamics of head and facial movements apart from specific facial expressions communicate affect in infants. Age-appropriate tasks were used to elicit positive and negative affect in 28 ethnically diverse 12-month-old infants. 3D head and facial movements were tracked from 2D video. Strong effects were found for both head and facial movements. For head movement, angular velocity and angular acceleration of pitch, yaw, and roll were higher during negative relative to positive affect. For facial movement, displacement, velocity, and acceleration also increased during negative relative to positive affect. Our results suggest that the dynamics of head and facial movements communicate affect at ages as young as 12 months. These findings deepen our understanding of emotion communication and provide a basis for studying individual differences in emotion in socio-emotional development.", "title": "" }, { "docid": "34de4277ca9d6ce808cc5d566857eb22", "text": "Attention-based neural abstractive summarization systems equipped with copy mechanisms have shown promising results. Despite this success, it has been noticed that such a system generates a summary by mostly, if not entirely, copying over phrases, sentences, and sometimes multiple consecutive sentences from an input paragraph, effectively performing extractive summarization. In this paper, we verify this behavior using the latest neural abstractive summarization system a pointergenerator network (See et al., 2017). We propose a simple baseline method that allows us to control the amount of copying without retraining. Experiments indicate that the method provides a strong baseline for abstractive systems looking to obtain high ROUGE scores while minimizing overlap with the source article, substantially reducing the n-gram overlap with the original article while keeping within 2 points of the original model’s ROUGE score.", "title": "" }, { "docid": "2e15c9702b0a17a1196ca272d16bd8fa", "text": "In this review, we evaluate four topics in the study of personality development where discernible progress has been made since 1995 (the last time the area of personality development was reviewed in this series). 
We (a) evaluate research about the structure of personality in childhood and in adulthood, with special attention to possible developmental changes in the lower-order components of broad traits; (b) summarize new directions in behavioral genetic studies of personality; (c) synthesize evidence from longitudinal studies to pinpoint where and when in the life course personality change is most likely to occur; and (d) document which personality traits influence social relationships, status attainment, and health, and the mechanisms by which these personality effects come about. In each of these four areas, we note gaps and identify priorities for further research.", "title": "" }, { "docid": "d08612dc15372f2c78ac00f426500d5f", "text": "In this chapter an extensive review of algorithmic methods that automatically detect patterns in dermoscopic images of pigmented lesions is presented. Pattern Analysis seeks to identify specific patterns, which may be global and local. It is the method most commonly used for providing diagnostic accuracy for cutaneous melanoma. In this chapter, a description of global and local patterns identified by pattern analysis is presented as well as a brief explanation of algorithmic methods that carry out the detection and classification of these patterns. Although the 7-Point Checklist method corresponds to a different diagnostic technique than pattern analysis, it can be considered as a simplification of it as it classifies seven features related with local patterns. For this reason, the main techniques to automatically assess the 7-Point Checklist are briefly explained in this chapter.", "title": "" }, { "docid": "58b931bff4be9af5dcbc5188a0bf095f", "text": "Near-field antenna measurements combined with a near-field far-field transformation are an established antenna characterization technique. The approach avoids far-field measurements and offers a wide area of post-processing possibilities including radiation pattern determination and diagnostic methods. In this paper, a near-field far-field transformation algorithm employing plane wave expansion is presented and applied to the case of spherical near-field measurements. Compared to existing algorithms, this approach exploits the benefits of diagonalized translation operators, known from fast multipole methods. Due to the plane wave based field representation, a probe correction, using directly the probe's far-field pattern can easily be integrated into the transformation. Hence, it is possible to perform a full probe correction for arbitrary field probes with almost no additional effort. In contrast to other plane wave techniques, like holographic projections, which are suitable for highly directive antennas, the presented approach is applicable for arbitrary radiating structures. Major advantages are low computational effort with respect to the coupling matrix elements owing to the use of diagonalized translation operators and the efficient correction of arbitrary field probes. Also, irregular measurement grids can be handled with little additional effort.", "title": "" }, { "docid": "5b9baa6587bc70c17da2b0512545c268", "text": "Credit scoring models have been widely studied in the areas of statistics, machine learning, and artificial intelligence (AI). Many novel approaches such as artificial neural networks (ANNs), rough sets, or decision trees have been proposed to increase the accuracy of credit scoring models. 
Since an improvement in accuracy of a fraction of a percent might translate into significant savings, a more sophisticated model should be proposed to significantly improving the accuracy of the credit scoring mode. In this paper, genetic programming (GP) is used to build credit scoring models. Two numerical examples will be employed here to compare the error rate to other credit scoring models including the ANN, decision trees, rough sets, and logistic regression. On the basis of the results, we can conclude that GP can provide better performance than other models. q 2005 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "61b7c35516b8a3f2a387526ef2541434", "text": "Understanding and quantifying dependence is at the core of all modelling efforts in financial econometrics. The linear correlation coefficient, which is the far most used measure to test dependence in the financial community and also elsewhere, is only a measure of linear dependence. This means that it is a meaningful measure of dependence if asset returns are well represented by an elliptical distribution. Outside the world of elliptical distributions, however, using the linear correlation coefficient as a measure of dependence may lead to misleading conclusions. Hence, alternative methods for capturing co-dependency should be considered. One class of alternatives are copula-based dependence measures. In this survey we consider two parametric families of copulas; the copulas of normal mixture distributions and Archimedean copulas.", "title": "" }, { "docid": "0ec7538bef6a3ad982b8935f6124127d", "text": "New technology has been seen as a way for many businesses in the tourism industry to stay competitive and enhance their marketing campaign in various ways. AR has evolved as the buzzword of modern information technology and is gaining increasing attention in the media as well as through a variety of use cases. This trend is highly fostered across mobile applications as well as the hype of wearable computing triggered by Google’s Glass project to be launched in 2014. However, although research on AR has been conducted in various fields including the Urban Tourism industry, the majority of studies focus on technical aspects of AR, while others are tailored to specific applications. Therefore, this paper aims to examine the current implementation of AR in the Urban Tourism context and identifies areas of research and development that is required to guide the early stages of AR implementation in a purposeful way to enhance the tourist experience. The paper provides an overview of AR and examines the impacts AR has made on the economy. Hence, AR applications in Urban Tourism are identified and benefits of AR are discussed. Please cite this article as: Jung, T. and Han, D. (2014). Augmented Reality (AR) in Urban Heritage Tourism. e-Review of Tourism Research. (ISSN: 1941-5842) Augmented Reality (AR) in Urban Heritage Tourism Timothy Jung and Dai-In Han Department of Food and Tourism Management Manchester\t\r Metropolitan\t\r University,\t\r United\t\r Kingdom t.jung@mmu.ac.uk,\t\r d.han@mmu.ac.uk", "title": "" }, { "docid": "335e92a896c6cce646f3ae81c5d9a02c", "text": "Vulnerabilities in web applications allow malicious users to obtain unrestricted access to private and confidential information. SQL injection attacks rank at the top of the list of threats directed at any database-driven application written for the Web. 
An attacker can take advantages of web application programming security flaws and pass unexpected malicious SQL statements through a web application for execution by the back-end database. This paper proposes a novel specification-based methodology for the detection of exploitations of SQL injection vulnerabilities. The new approach on the one hand utilizes specifications that define the intended syntactic structure of SQL queries that are produced and executed by the web application and on the other hand monitors the application for executing queries that are in violation of the specification.\n The three most important advantages of the new approach against existing analogous mechanisms are that, first, it prevents all forms of SQL injection attacks; second, its effectiveness is independent of any particular target system, application environment, or DBMS; and, third, there is no need to modify the source code of existing web applications to apply the new protection scheme to them.\n We developed a prototype SQL injection detection system (SQL-IDS) that implements the proposed algorithm. The system monitors Java-based applications and detects SQL injection attacks in real time. We report some preliminary experimental results over several SQL injection attacks that show that the proposed query-specific detection allows the system to perform focused analysis at negligible computational overhead without producing false positives or false negatives. Therefore, the new approach is very efficient in practice.", "title": "" }, { "docid": "b24772af47f76db0f19ee281cccaa03f", "text": "We describe a method for assessing the visualization literacy (VL) of a user. Assessing how well people understand visualizations has great value for research (e. g., to avoid confounds), for design (e. g., to best determine the capabilities of an audience), for teaching (e. g., to assess the level of new students), and for recruiting (e. g., to assess the level of interviewees). This paper proposes a method for assessing VL based on Item Response Theory. It describes the design and evaluation of two VL tests for line graphs, and presents the extension of the method to bar charts and scatterplots. Finally, it discusses the reimplementation of these tests for fast, effective, and scalable web-based use.", "title": "" }, { "docid": "43c91f3491daceb76906a48fba3663dc", "text": "A noninverting buck-boost dc-dc converter can work in buck, boost, or buck-boost mode. Hence, it provides a good solution when the input voltage may be higher or lower than the output voltage. However, a buck-boost converter requires four power transistors, rather than two. Therefore, its efficiency decreases, due to the conduction and switching losses of the two extra power transistors. Another issue of a buck-boost converter is how to smoothly switch its operational mode, when its input voltage approaches its output voltage. A hysteretic-current-mode noninverting buck-boost converter with high efficiency and smooth mode transition is proposed, and it was designed and fabricated using TSMC 0.35-μm CMOS 2P4 M 3.3 V/5V mixed-signal polycide process. The input voltage may range from 2.5 to 5 V, the output voltage is 3.3 V, and the maximal load current is 400 mA. 
According to the measured results, the maximal efficiency reaches 98.1%, and the efficiencies measured in the entire input voltage and loading ranges are all above 80%.", "title": "" }, { "docid": "724b049bd1ba662ebc29cc9eddad4a82", "text": "The way people look in terms of facial attributes (ethnicity, hair color, facial hair, etc.) and the clothes or accessories they wear (sunglasses, hat, hoodies, etc.) is highly dependent on geo-location and weather condition, respectively. This work explores, for the first time, the use of this contextual information, as people with wearable cameras walk across different neighborhoods of a city, in order to learn a rich feature representation for facial attribute classification, without the costly manual annotation required by previous methods. By tracking the faces of casual walkers on more than 40 hours of egocentric video, we are able to cover tens of thousands of different identities and automatically extract nearly 5 million pairs of images connected by or from different face tracks, along with their weather and location context, under pose and lighting variations. These image pairs are then fed into a deep network that preserves similarity of images connected by the same track, in order to capture identity-related attribute features, and optimizes for location and weather prediction to capture additional facial attribute features. Finally, the network is fine-tuned with manually annotated samples. We perform an extensive experimental analysis on wearable data and two standard benchmark datasets based on web images (LFWA and CelebA). Our method outperforms by a large margin a network trained from scratch. Moreover, even without using manually annotated identity labels for pre-training as in previous methods, our approach achieves results that are better than the state of the art.", "title": "" } ]
scidocsrr
065e418647d7343acda7cb1216986f79
The impact of emotionality and self-disclosure on online dating versus traditional dating
[ { "docid": "817471946fbe8b23d195c4fea8967549", "text": "The purpose of this research project was to investigate possible sex, ethnicity, and age group differences involving the information placed in Internet dating ads, and to contrast the findings with predictions from evolutionary theory (e.g., women being more selective than men) and with findings from previous studies involving heterosexual dating ads placed in newspapers and magazines. Of particular interest were the types and number of characteristics sought in a dating partner. Results generally supported predictions from evolutionary theory. Women listed more desired characteristics for a partner than did men. Women focused more on non-physical attributes such as ambition and character than did men, and men focused more on youth and attractiveness than did women. There was; however, considerable similarity in terms of the five most desired attributes listed by both men and women. Women listed the following desired characteristics in men most often: humor, honesty, caring, openness, and personality. Men desired the following: affection, humor, honesty, openness, and attractive women. These desired characteristics were also significantly different from those found in recent studies which looked at dating ads placed in newspapers.", "title": "" } ]
[ { "docid": "5f41bc81a483dd4deb5e70272d32ac77", "text": "In this paper, we present the design and evaluation of a novel soft cable-driven exosuit that can apply forces to the body to assist walking. Unlike traditional exoskeletons which contain rigid framing elements, the soft exosuit is worn like clothing, yet can generate moments at the ankle and hip with magnitudes of 18% and 30% of those naturally generated by the body during walking, respectively. Our design uses geared motors to pull on Bowden cables connected to the suit near the ankle. The suit has the advantages over a traditional exoskeleton in that the wearer's joints are unconstrained by external rigid structures, and the worn part of the suit is extremely light, which minimizes the suit's unintentional interference with the body's natural biomechanics. However, a soft suit presents challenges related to actuation force transfer and control, since the body is compliant and cannot support large pressures comfortably. We discuss the design of the suit and actuation system, including principles by which soft suits can transfer force to the body effectively and the biological inspiration for the design. For a soft exosuit, an important design parameter is the combined effective stiffness of the suit and its interface to the wearer. We characterize the exosuit's effective stiffness, and present preliminary results from it generating assistive torques to a subject during walking. We envision such an exosuit having broad applicability for assisting healthy individuals as well as those with muscle weakness.", "title": "" }, { "docid": "6554f662f667b8b53ad7b75abfa6f36f", "text": "present paper introduces an innovative approach to automatically grade the disease on plant leaves. The system effectively inculcates Information and Communication Technology (ICT) in agriculture and hence contributes to Precision Agriculture. Presently, plant pathologists mainly rely on naked eye prediction and a disease scoring scale to grade the disease. This manual grading is not only time consuming but also not feasible. Hence the current paper proposes an image processing based approach to automatically grade the disease spread on plant leaves by employing Fuzzy Logic. The results are proved to be accurate and satisfactory in contrast with manual grading. Keywordscolor image segmentation, disease spot extraction, percent-infection, fuzzy logic, disease grade. INTRODUCTION The sole area that serves the food needs of the entire human race is the Agriculture sector. It has played a key role in the development of human civilization. Plants exist everywhere we live, as well as places without us. Plant disease is one of the crucial causes that reduces quantity and degrades quality of the agricultural products. Plant Pathology is the scientific study of plant diseases caused by pathogens (infectious diseases) and environmental conditions (physiological factors). It involves the study of pathogen identification, disease etiology, disease cycles, economic impact, plant disease epidemiology, plant disease resistance, pathosystem genetics and management of plant diseases. Disease is impairment to the normal state of the plant that modifies or interrupts its vital functions such as photosynthesis, transpiration, pollination, fertilization, germination etc. Plant diseases have turned into a nightmare as it can cause significant reduction in both quality and quantity of agricultural products [2]. 
Information and Communication Technology (ICT) application is going to be implemented as a solution in improving the status of the agriculture sector [3]. Due to the manifestation and developments in the fields of sensor networks, robotics, GPS technology, communication systems etc, precision agriculture started emerging [10]. The objectives of precision agriculture are profit maximization, agricultural input rationalization and environmental damage reduction by adjusting the agricultural practices to the site demands. In the area of disease management, grade of the disease is determined to provide an accurate and precision treatment advisory. EXISTING SYSTEM: MANUAL GRADING Presently the plant pathologists mainly rely on the naked eye prediction and a disease scoring scale to grade the disease on leaves. There are some problems associated with this manual grading. Diseases are inevitable in plants. When a plant gets affected by the disease, a treatment advisory is required to cure the Arun Kumar R et al, Int. J. Comp. Tech. Appl., Vol 2 (5), 1709-1716 IJCTA | SEPT-OCT 2011 Available online@www.ijcta.com 1709 ISSN:2229-6093", "title": "" }, { "docid": "66f17513486e4d25c9be36e71aecbbf8", "text": "Fuzz testing is an active testing technique which consists in automatically generating and sending malicious inputs to an application in order to hopefully trigger a vulnerability. Fuzzing entails such questions as: Where to fuzz? Which parameter to fuzz? What kind of anomaly to introduce? Where to observe its effects? etc. Different test contexts depending on the degree of knowledge assumed about the target: recompiling the application (white-box), interacting only at the target interface (blackbox), dynamically instrumenting a binary (grey-box). In this paper, we focus on black-box test contest, and specifically address the questions: How to obtain a notion of coverage on unstructured inputs? How to capture human testers intuitions and use it for the fuzzing? How to drive the search in various directions? We specifically address the problems of detecting Memory Corruption in PDF interpreters and Cross Site Scripting (XSS) in web applications. We detail our approaches which use genetic algorithm, inference and anti-random testing. We empirically evaluate our implementations of XSS fuzzer KameleonFuzz and of PDF fuzzer ShiftMonkey.", "title": "" }, { "docid": "fd4bd9edcaff84867b6e667401aa3124", "text": "We give suggestions for the presentation of research results from frequentist, information-theoretic, and Bayesian analysis paradigms, followed by several general suggestions. The information-theoretic and Bayesian methods offer alternative approaches to data analysis and inference compared to traditionally used methods. Guidance is lacking on the presentation of results under these alternative procedures and on nontesting aspects of classical frequentist methods of statistical analysis. Null hypothesis testing has come under intense criticism. We recommend less reporting of the results of statistical tests of null hypotheses in cases where the null is surely false anyway, or where the null hypothesis is of little interest to science or management. JOURNAL OF WILDLIFE MANAGEMENT 65(3):373-378", "title": "" }, { "docid": "9b2b04acbbf5c847885c37c448fb99c8", "text": "We address the problem of substring searchable encryption. A single user produces a big stream of data and later on wants to learn the positions in the string that some patterns occur. 
Although current techniques exploit auxiliary data structures to achieve efficient substring search on the server side, the cost at the user side may be prohibitive. We revisit the work of substring searchable encryption in order to reduce the storage cost of auxiliary data structures. Our solution entails a suffix array based index design, which allows optimal storage cost $O(n)$ with small hidden factor at the size of the string n. Moreover, we implemented our scheme and the state of the art protocol \\citeChase to demonstrate the performance advantage of our solution with precise benchmark results.", "title": "" }, { "docid": "8a56b4d4f69466aee0d5eff0c09cd514", "text": "This paper explores how a robot’s physical presence affects human judgments of the robot as a social partner. For this experiment, participants collaborated on simple book-moving tasks with a humanoid robot that was either physically present or displayed via a live video feed. Multiple tasks individually examined the following aspects of social interaction: greetings, cooperation, trust, and personal space. Participants readily greeted and cooperated with the robot whether present physically or in live video display. However, participants were more likely both to fulfill an unusual request and to afford greater personal space to the robot when it was physically present, than when it was shown on live video. The same was true when the live video displayed robot’s gestures were augmented with disambiguating 3-D information. Questionnaire data support these behavioral findings and also show that participants had an overall more positive interaction with the physically present", "title": "" }, { "docid": "82d4b2aa3e3d3ec10425c6250268861c", "text": "Deep Neural Networks (DNNs) are typically trained by backpropagation in a batch learning setting, which requires the entire training data to be made available prior to the learning task. This is not scalable for many real-world scenarios where new data arrives sequentially in a stream form. We aim to address an open challenge of “Online Deep Learning” (ODL) for learning DNNs on the fly in an online setting. Unlike traditional online learning that often optimizes some convex objective function with respect to a shallow model (e.g., a linear/kernel-based hypothesis), ODL is significantly more challenging since the optimization of the DNN objective function is non-convex, and regular backpropagation does not work well in practice, especially for online learning settings. In this paper, we present a new online deep learning framework that attempts to tackle the challenges by learning DNN models of adaptive depth from a sequence of training data in an online learning setting. In particular, we propose a novel Hedge Backpropagation (HBP) method for online updating the parameters of DNN effectively, and validate the efficacy of our method on large-scale data sets, including both stationary and concept drifting scenarios.", "title": "" }, { "docid": "0d0eb6ed5dff220bc46ffbf87f90ee59", "text": "Objectives. The aim of this review was to investigate whether alternating hot–cold water treatment is a legitimate training tool for enhancing athlete recovery. A number of mechanisms are discussed to justify its merits and future research directions are reported. Alternating hot–cold water treatment has been used in the clinical setting to assist in acute sporting injuries and rehabilitation purposes. 
However, there is overwhelming anecdotal evidence for it’s inclusion as a method for post exercise recovery. Many coaches, athletes and trainers are using alternating hot–cold water treatment as a means for post exercise recovery. Design. A literature search was performed using SportDiscus, Medline and Web of Science using the key words recovery, muscle fatigue, cryotherapy, thermotherapy, hydrotherapy, contrast water immersion and training. Results. The physiologic effects of hot–cold water contrast baths for injury treatment have been well documented, but its physiological rationale for enhancing recovery is less known. Most experimental evidence suggests that hot–cold water immersion helps to reduce injury in the acute stages of injury, through vasodilation and vasoconstriction thereby stimulating blood flow thus reducing swelling. This shunting action of the blood caused by vasodilation and vasoconstriction may be one of the mechanisms to removing metabolites, repairing the exercised muscle and slowing the metabolic process down. Conclusion. To date there are very few studies that have focussed on the effectiveness of hot–cold water immersion for post exercise treatment. More research is needed before conclusions can be drawn on whether alternating hot–cold water immersion improves recuperation and influences the physiological changes that characterises post exercise recovery. q 2003 Published by Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "a05a953097e5081670f26e85c4b8e397", "text": "In European science and technology policy, various styles have been developed and institutionalised to govern the ethical challenges of science and technology innovations. In this paper, we give an account of the most dominant styles of the past 30 years, particularly in Europe, seeking to show their specific merits and problems. We focus on three styles of governance: a technocratic style, an applied ethics style, and a public participation style. We discuss their merits and deficits, and use this analysis to assess the potential of the recently established governance approach of 'Responsible Research and Innovation' (RRI). Based on this analysis, we reflect on the current shaping of RRI in terms of 'doing governance'.", "title": "" }, { "docid": "6f0283efa932663c83cc2c63d19fd6cf", "text": "Most research that explores the emotional state of users of spoken dialog systems does not fully utilize the contextual nature that the dialog structure provides. This paper reports results of machine learning experiments designed to automatically classify the emotional state of user turns using a corpus of 5,690 dialogs collected with the “How May I Help You” spoken dialog system. We show that augmenting standard lexical and prosodic features with contextual features that exploit the structure of spoken dialog and track user state increases classification accuracy by 2.6%.", "title": "" }, { "docid": "17ec5256082713e85c819bb0a0dd3453", "text": "Scholarly documents contain multiple figures representing experimental findings. These figures are generated from data which is not reported anywhere else in the paper. We propose a modular architecture for analyzing such figures. Our architecture consists of the following modules: 1. An extractor for figures and associated metadata (figure captions and mentions) from PDF documents; 2. A Search engine on the extracted figures and metadata; 3. An image processing module for automated data extraction from the figures and 4. 
A natural language processing module to understand the semantics of the figure. We discuss the challenges in each step, report an extractor algorithm to extract vector graphics from scholarly documents and a classification algorithm for figures. Our extractor algorithm improves the state of the art by more than 10% and the classification process is very scalable, yet achieves 85\\% accuracy. We also describe a semi-automatic system for data extraction from figures which is integrated with our search engine to improve user experience.", "title": "" }, { "docid": "99fa507d3b36e1a42f0dbda5420e329a", "text": "Reference Points and Effort Provision A key open question for theories of reference-dependent preferences is what determines the reference point. One candidate is expectations: what people expect could affect how they feel about what actually occurs. In a real-effort experiment, we manipulate the rational expectations of subjects and check whether this manipulation influences their effort provision. We find that effort provision is significantly different between treatments in the way predicted by models of expectation-based reference-dependent preferences: if expectations are high, subjects work longer and earn more money than if expectations are low. JEL Classification: C91, D01, D84, J22", "title": "" }, { "docid": "d593b96d11dd8a3516816d85fce5c7a0", "text": "This paper presents an approach for the integration of Virtual Reality (VR) and Computer-Aided Design (CAD). Our general goal is to develop a VR–CAD framework making possible intuitive and direct 3D edition on CAD objects within Virtual Environments (VE). Such a framework can be applied to collaborative part design activities and to immersive project reviews. The cornerstone of our approach is a model that manages implicit editing of CAD objects. This model uses a naming technique of B-Rep components and a set of logical rules to provide straight access to the operators of Construction History Graphs (CHG). Another set of logical rules and the replay capacities of CHG make it possible to modify in real-time the parameters of these operators according to the user's 3D interactions. A demonstrator of our model has been developed on the OpenCASCADE geometric kernel, but we explain how it can be applied to more standard CAD systems such as CATIA. We combined our VR–CAD framework with multimodal immersive interaction (using 6 DoF tracking, speech and gesture recognition systems) to gain direct and intuitive deformation of the objects' shapes within a VE, thus avoiding explicit interactions with the CHG within a classical WIMP interface. In addition, we present several haptic paradigms specially conceptualized and evaluated to provide an accurate perception of B-Rep components and to help the user during his/her 3D interactions. Finally, we conclude on some issues for future researches in the field of VR–CAD integration.", "title": "" }, { "docid": "d16a787399db6309ab4563f4265e91b9", "text": "The real-time information on news sites, blogs and social networking sites changes dynamically and spreads rapidly through the Web. Developing methods for handling such information at a massive scale requires that we think about how information content varies over time, how it is transmitted, and how it mutates as it spreads.\n We describe the News Information Flow Tracking, Yay! (NIFTY) system for large scale real-time tracking of \"memes\" - short textual phrases that travel and mutate through the Web. 
NIFTY is based on a novel highly-scalable incremental meme-clustering algorithm that efficiently extracts and identifies mutational variants of a single meme. NIFTY runs orders of magnitude faster than our previous Memetracker system, while also maintaining better consistency and quality of extracted memes.\n We demonstrate the effectiveness of our approach by processing a 20 terabyte dataset of 6.1 billion blog posts and news articles that we have been continuously collecting for the last four years. NIFTY extracted 2.9 billion unique textual phrases and identified more than 9 million memes. Our meme-tracking algorithm was able to process the entire dataset in less than five days using a single machine. Furthermore, we also provide a live deployment of the NIFTY system that allows users to explore the dynamics of online news in near real-time.", "title": "" }, { "docid": "9bf951269881138b9fae1d345be5b2e8", "text": "A biofuel from any biodegradable formation process such as a food waste bio-digester plant is a mixture of several gases such as methane (CH4), carbon dioxide (CO2), hydrogen sulfide (H2S), ammonia (NH3) and impurities like water and dust particles. The results are reported of a parametric study of the process of separation of methane, which is the most important gas in the mixture and usable as a biofuel, from particles and H2S. A cyclone, which is a conventional, economic and simple device for gas-solid separation, is considered based on the modification of three Texas A&M cyclone designs (1D2D, 2D2D and 1D3D) by the inclusion of an air inlet tube. A parametric sizing is performed of the cyclone for biogas purification, accounting for the separation of hydrogen sulfide (H2S) and dust particles from the biofuel. The stochiometric oxidation of H2S to form elemental sulphur is considered a useful cyclone design criterion. The proposed design includes geometric parameters and several criteria for quantifying the performance of cyclone separators such as the Lapple Model for minimum particle diameter collected, collection efficiency and pressure drop. For biogas volumetric flow rates between 0 and 1 m/s and inlet flow velocities of 12 m/s, 15 m/s and 18 m/s for the 1D2D, 2D2D and 1D3D cyclones, respectively, it is observed that the 2D2D configuration is most economic in terms of sizing (total height and diameter of cyclone). The 1D2D configuration experiences the lowest pressure drop. A design algorithm coupled with a user-friendly graphics interface is developed on the MATLAB platform, providing a tool for sizing and designing suitable cyclones.", "title": "" }, { "docid": "3eb50289c3b28d2ce88052199d40bf8d", "text": "Transportation Problem is an important aspect which has been widely studied in Operations Research domain. It has been studied to simulate different real life problems. In particular, application of this Problem in NPHard Problems has a remarkable significance. In this Paper, we present a comparative study of Transportation Problem through Probabilistic and Fuzzy Uncertainties. Fuzzy Logic is a computational paradigm that generalizes classical two-valued logic for reasoning under uncertainty. In order to achieve this, the notation of membership in a set needs to become a matter of degree. By doing this we accomplish two things viz., (i) ease of describing human knowledge involving vague concepts and (ii) enhanced ability to develop cost-effective solution to real-world problem. The multi-valued nature of Fuzzy Sets allows handling uncertain and vague information. 
It is a model-less approach and a clever disguise of Probability Theory. We give comparative simulation results of both approaches and discuss the Computational Complexity. To the best of our knowledge, this is the first work on comparative study of Transportation Problem using Probabilistic and Fuzzy Uncertainties.", "title": "" }, { "docid": "95fe3badecc7fa92af6b6aa49b6ff3b2", "text": "As low-resolution position sensors, a high placement accuracy of Hall-effect sensors is hard to achieve. Accordingly, a commutation angle error is generated. The commutation angle error will inevitably increase the loss of the low inductance motor and even cause serious consequence, which is the abnormal conduction of a freewheeling diode in the unexcited phase especially at high speed. In this paper, the influence of the commutation angle error on the power loss for the high-speed brushless dc motor with low inductance and nonideal back electromotive force in a magnetically suspended control moment gyro (MSCMG) is analyzed in detail. In order to achieve low steady-state loss of an MSCMG for space application, a straightforward method of self-compensation of commutation angle based on dc-link current is proposed. Both simulation and experimental results confirm the feasibility and effectiveness of the proposed method.", "title": "" }, { "docid": "17ab4797666afed3a37a8761fcbb0d1e", "text": "In this paper, we propose a CPW fed triple band notch UWB antenna array with EBG structure. The major consideration in the antenna array design is the mutual coupling effect that exists within the elements. The use of Electromagnetic Band Gap structures in the antenna arrays can limit the coupling by suppresssing the surface waves. The triple band notch antenna consists of three slots which act as notch resonators for a specific band of frequencies, the C shape slot at the main radiator (WiMax-3.5GHz), a pair of CSRR structures at the ground plane(WLAN-5.8GHz) and an inverted U shaped slot in the center of the patch (Satellite Service bands-8.2GHz). The main objective is to reduce mutual coupling which in turn improves the peak realized gain, directivity.", "title": "" }, { "docid": "bb01b5e24d7472ab52079dcb8a65358d", "text": "There are plenty of classification methods that perform well when training and testing data are drawn from the same distribution. However, in real applications, this condition may be violated, which causes degradation of classification accuracy. Domain adaptation is an effective approach to address this problem. In this paper, we propose a general domain adaptation framework from the perspective of prediction reweighting, from which a novel approach is derived. Different from the major domain adaptation methods, our idea is to reweight predictions of the training classifier on testing data according to their signed distance to the domain separator, which is a classifier that distinguishes training data (from source domain) and testing data (from target domain). We then propagate the labels of target instances with larger weights to ones with smaller weights by introducing a manifold regularization method. It can be proved that our reweighting scheme effectively brings the source and target domains closer to each other in an appropriate sense, such that classification in target domain becomes easier. The proposed method can be implemented efficiently by a simple two-stage algorithm, and the target classifier has a closed-form solution. 
The effectiveness of our approach is verified by the experiments on artificial datasets and two standard benchmarks, a visual object recognition task and a cross-domain sentiment analysis of text. Experimental results demonstrate that our method is competitive with the state-of-the-art domain adaptation algorithms.", "title": "" }, { "docid": "ff41327bad272a6d80d4daba25b6472f", "text": "The dense very deep submicron (VDSM) system on chips (SoC) face a serious limitation in performance due to reverse scaling of global interconnects. Interconnection techniques which decrease delay, delay variation and ensure signal integrity, play an important role in the growth of the semiconductor industry into future generations. Current-mode low-swing interconnection techniques provide an attractive alternative to conventional full-swing voltage mode signaling in terms of delay, power and noise immunity. In this paper, we present a new current-mode low-swing interconnection technique which reduces the delay and delay variations in global interconnects. Extensive simulations for performance of our circuit under crosstalk, supply voltage, process and temperature variations were performed. The results indicate significant savings in power, reduction in delay and increase in noise immunity compared to other techniques.", "title": "" } ]
scidocsrr
0741929f7a9d81f2f88a235f1c9e7ca0
IRIT at e-Risk
[ { "docid": "8c072981fd0b949f54a39c043dfb75ce", "text": "Several studies in the literature have shown that the words people use are indicative of their psychological states. In particular, depression was found to be associated with distinctive linguistic patterns. In this talk, I will describe our first steps to try to identify as early as possible if the writer is developing a depressive state. I will detail the methodology we have adopted to build and make publicly available a test collection on depression and language use. The resulting corpus includes a series of textual interactions written by different subjects. The new collection not only encourages research on differences in language between depressed and non-depressed individuals, but also on the evolution of the language use of depressed individuals. I will also present the new CLEF lab that we will run next year on this topic (eRisk 2017), that includes a novel depression detection task and the proposal of effectiveness measure to systematically compare early detection algorithms and baseline results. Bio : Fabio Crestani est titulaire d'un diplôme en statistiques de l'Université de Padoue (Italie) et d'une maîtrise et d'un doctorat en sciences informatiques de l'Université de Glasgow (Royaume-Uni). Ses principaux domaines de recherche sont la recherche d'information, la fouille de textes et les bibliothèques numériques. Il a co-édité 10 livres et publié plus de 160 publications dans ces domaines de recherche. Il a été rédacteur en chef de Information Processing and Management (Elsevier) jusqu'en 2015 et membre du comité de rédaction de plusieurs revues. Ses travaux sur les réseaux sociaux sont particulièrement en phase avec les recherches menées par plusieurs équipes de l'IRIT.", "title": "" }, { "docid": "cfbf63d92dfafe4ac0243acdff6cf562", "text": "In this paper we present a linguistic resource for the lexical representation of affective knowledge. This resource (named W ORDNETAFFECT) was developed starting from W ORDNET, through a selection and tagging of a subset of synsets representing the affective", "title": "" } ]
[ { "docid": "942be0aa4dab5904139919351d6d63d4", "text": "Since Hinton and Salakhutdinov published their landmark science paper in 2006 ending the previous neural-network winter, research in neural networks has increased dramatically. Researchers have applied neural networks seemingly successfully to various topics in the field of computer science. However, there is a risk that we overlook other methods. Therefore, we take a recent end-to-end neural-network-based work (Dhingra et al., 2018) as a starting point and contrast this work with more classical techniques. This prior work focuses on the LAMBADA word prediction task, where broad context is used to predict the last word of a sentence. It is often assumed that neural networks are good at such tasks where feature extraction is important. We show that with simpler syntactic and semantic features (e.g. Across Sentence Boundary (ASB) N-grams) a state-ofthe-art neural network can be outperformed. Our discriminative language-model-based approach improves the word prediction accuracy from 55.6% to 58.9% on the LAMBADA task. As a next step, we plan to extend this work to other language modeling tasks.", "title": "" }, { "docid": "4aa17982590e86fea90267e4386e2ef1", "text": "There are many promising psychological interventions on the horizon, but there is no clear methodology for preparing them to be scaled up. Drawing on design thinking, the present research formalizes a methodology for redesigning and tailoring initial interventions. We test the methodology using the case of fixed versus growth mindsets during the transition to high school. Qualitative inquiry and rapid, iterative, randomized \"A/B\" experiments were conducted with ~3,000 participants to inform intervention revisions for this population. Next, two experimental evaluations showed that the revised growth mindset intervention was an improvement over previous versions in terms of short-term proxy outcomes (Study 1, N=7,501), and it improved 9th grade core-course GPA and reduced D/F GPAs for lower achieving students when delivered via the Internet under routine conditions with ~95% of students at 10 schools (Study 2, N=3,676). Although the intervention could still be improved even further, the current research provides a model for how to improve and scale interventions that begin to address pressing educational problems. It also provides insight into how to teach a growth mindset more effectively.", "title": "" }, { "docid": "441d603c72f2d3e609a043b203f3144b", "text": "Empowering academic librarians for effective e-services: an assessment of Web 2.0 competency levels Lilian Ingutia Oyieke Archie L Dick Article information: To cite this document: Lilian Ingutia Oyieke Archie L Dick , (2017),\" Empowering academic librarians for effective e-services: an assessment of Web 2.0 competency levels \", The Electronic Library , Vol. 35 Iss 2 pp. Permanent link to this document: http://dx.doi.org/10.1108/EL-10-2015-0200", "title": "" }, { "docid": "b9f0d1d80ba7f8c304a601d179730951", "text": "A critical part of developing a reliable software system is testing its recovery code. This code is traditionally difficult to test in the lab, and, in the field, it rarely gets to run; yet, when it does run, it must execute flawlessly in order to recover the system from failure. In this article, we present a library-level fault injection engine that enables the productive use of fault injection for software testing. 
We describe automated techniques for reliably identifying errors that applications may encounter when interacting with their environment, for automatically identifying high-value injection targets in program binaries, and for producing efficient injection test scenarios. We present a framework for writing precise triggers that inject desired faults, in the form of error return codes and corresponding side effects, at the boundary between applications and libraries. These techniques are embodied in LFI, a new fault injection engine we are distributing http://lfi.epfl.ch. This article includes a report of our initial experience using LFI. Most notably, LFI found 12 serious, previously unreported bugs in the MySQL database server, Git version control system, BIND name server, Pidgin IM client, and PBFT replication system with no developer assistance and no access to source code. LFI also increased recovery-code coverage from virtually zero up to 60% entirely automatically without requiring new tests or human involvement.", "title": "" }, { "docid": "fb00601b60bcd1f7a112e34d93d55d01", "text": "Long Short-Term Memory (LSTM) has achieved state-of-the-art performances on a wide range of tasks. Its outstanding performance is guaranteed by the long-term memory ability which matches the sequential data perfectly and the gating structure controlling the information flow. However, LSTMs are prone to be memory-bandwidth limited in realistic applications and need an unbearable period of training and inference time as the model size is ever-increasing. To tackle this problem, various efficient model compression methods have been proposed. Most of them need a big and expensive pre-trained model which is a nightmare for resource-limited devices where the memory budget is strictly limited. To remedy this situation, in this paper, we incorporate the Sparse Evolutionary Training (SET) procedure into LSTM, proposing a novel model dubbed SET-LSTM. Rather than starting with a fully-connected architecture, SET-LSTM has a sparse topology and dramatically fewer parameters in both phases, training and inference. Considering the specific architecture of LSTMs, we replace the LSTM cells and embedding layers with sparse structures and further on, use an evolutionary strategy to adapt the sparse connectivity to the data. Additionally, we find that SET-LSTM can provide many different good combinations of sparse connectivity to substitute the overparameterized optimization problem of dense neural networks. Evaluated on four sentiment analysis classification datasets, the results demonstrate that our proposed model is able to achieve usually better performance than its fully connected counterpart while having less than 4% of its parameters. Department of Mathematics and Computer Science, Eindhoven University of Technology, Netherlands. Correspondence to: Shiwei Liu <s.liu3@tue.nl>.", "title": "" }, { "docid": "3da8d37fd9f9f512e8c07b465dd14d38", "text": "Neural network-based methods for image processing are becoming widely used in practical applications. Modern neural networks are computationally expensive and require specialized hardware, such as graphics processing units. Since such hardware is not always available in real life applications, there is a compelling need for the design of neural networks for mobile devices. Mobile neural networks typically have reduced number of parameters and require a relatively small number of arithmetic operations. 
However, they are usually still executed at the software level and use floating-point calculations. The use of mobile networks without further optimization may not provide sufficient performance when high processing speed is required, for example, in real-time video processing (30 frames per second). In this study, we suggest optimizations to speed up computations in order to efficiently use already trained neural networks on a mobile device. Specifically, we propose an approach for speeding up neural networks by moving computation from software to hardware and by using fixed-point calculations instead of floating-point. We propose a number of methods for neural network architecture design to improve the performance with fixed-point calculations. We also show an example of how existing datasets can be modified and adapted for the recognition task at hand. Finally, we present the design and the implementation of a field-programmable gate array (FPGA)-based device to solve the practical problem of real-time handwritten digit classification from mobile camera video feed.", "title": "" }, { "docid": "6be3470d014aac14b6af9d343539b4b8", "text": "In this paper, we discuss logic circuit designs using the circuit model of three-state quantum dot gate field effect transistors (QDGFETs). QDGFETs produce one intermediate state between the two normal stable ON and OFF states due to a change in the threshold voltage over this range. We have developed a simplified circuit model that accounts for this intermediate state. Interesting logic can be implemented using QDGFETs. In this paper, we discuss the designs of various two-input three-state QDGFET gates, including NAND- and NOR-like operations and their application in different combinational circuits like decoders, multipliers, adders, and so on. The increased number of states in three-state QDGFETs will increase the bit-handling capability of this device and will help us to handle more bits at a time with fewer circuit elements.", "title": "" }, { "docid": "c843f4ba35aee9ef2ac7e852a1d489c4", "text": "We investigate the effect of a corporate culture of sustainability on multiple facets of corporate behavior and performance outcomes. Using a matched sample of 180 companies, we find that corporations that voluntarily adopted environmental and social policies many years ago (termed High Sustainability companies) exhibit fundamentally different characteristics from a matched sample of firms that adopted almost none of these policies (termed Low Sustainability companies). In particular, we find that the boards of directors of these companies are more likely to be responsible for sustainability and top executive incentives are more likely to be a function of sustainability metrics. Moreover, they are more likely to have organized procedures for stakeholder engagement, to be more long-term oriented, and to exhibit more measurement and disclosure of nonfinancial information. Finally, we provide evidence that High Sustainability companies significantly outperform their counterparts over the long term, both in terms of stock market and accounting performance. The outperformance is stronger in sectors where the customers are individual consumers instead of companies, companies compete on the basis of brands and reputations, and products significantly depend upon extracting large amounts of natural resources. Robert G. Eccles is a Professor of Management Practice at Harvard Business School.
Ioannis Ioannou is an Assistant Professor of Strategic and International Management at London Business School. George Serafeim is an Assistant Professor of Business Administration at Harvard Business School, contact email: gserafeim@hbs.edu. Robert Eccles and George Serafeim gratefully acknowledge financial support from the Division of Faculty Research and Development of the Harvard Business School. We would like to thank Christopher Greenwald for supplying us with the ASSET4 data. Moreover, we would like to thank Cecile Churet and Iordanis Chatziprodromou from Sustainable Asset Management for giving us access to their proprietary data. We are grateful to Chris Allen, Jeff Cronin, Christine Rivera, and James Zeitler for research assistance. We thank Ben Esty, Joshua Margolis, Costas Markides, Catherine Thomas and seminar participants at Boston College for helpful comments. We are solely responsible for any errors in this manuscript.", "title": "" }, { "docid": "8db37f6f495a68da176e1ed411ce37a7", "text": "We present Bolt, a data management system for an emerging class of applications—those that manipulate data from connected devices in the home. It abstracts this data as a stream of time-tag-value records, with arbitrary, application-defined tags. For reliable sharing among applications, some of which may be running outside the home, Bolt uses untrusted cloud storage as seamless extension of local storage. It organizes data into chunks that contains multiple records and are individually compressed and encrypted. While chunking enables efficient transfer and storage, it also implies that data is retrieved at the granularity of chunks, instead of records. We show that the resulting overhead, however, is small because applications in this domain frequently query for multiple proximate records. We develop three diverse applications on top of Bolt and find that the performance needs of each are easily met. We also find that compared to OpenTSDB, a popular time-series database system, Bolt is up to 40 times faster than OpenTSDB while requiring 3–5 times less storage space.", "title": "" }, { "docid": "1c167e508a03587076531a4a14c822f2", "text": "We develop a new computational model for representing the fine-grained meanings of near-synonyms and the differences between them. We also develop a lexical-choice process that can decide which of several near-synonyms is most appropriate in a particular situation. This research has direct applications in machine translation and text generation. We first identify the problems of representing near-synonyms in a computational lexicon and show that no previous model adequately accounts for near-synonymy. We then propose a preliminary theory to account for near-synonymy, relying crucially on the notion of granularity of representation, in which the meaning of a word arises out of a context-dependent combination of a context-independent core meaning and a set of explicit differences to its near-synonyms. That is, near-synonyms cluster together. We then develop a clustered model of lexical knowledge, derived from the conventional ontological model. The model cuts off the ontology at a coarse grain, thus avoiding an awkward proliferation of language-dependent concepts in the ontology, yet maintaining the advantages of efficient computation and reasoning. The model groups near-synonyms into subconceptual clusters that are linked to the ontology. 
A cluster differentiates near-synonyms in terms of fine-grained aspects of denotation, implication, expressed attitude, and style. The model is general enough to account for other types of variation, for instance, in collocational behavior. An efficient, robust, and flexible fine-grained lexical-choice process is a consequence of a clustered model of lexical knowledge. To make it work, we formalize criteria for lexical choice as preferences to express certain concepts with varying indirectness, to express attitudes, and to establish certain styles. The lexical-choice process itself works on two tiers: between clusters and between near-synonyns of clusters. We describe our prototype implementation of the system, called I-Saurus.", "title": "" }, { "docid": "b525081979bebe54e2262086170cbb31", "text": " Activity recognition strategies assume large amounts of labeled training data which require tedious human labor to label.  They also use hand engineered features, which are not best for all applications, hence required to be done separately for each application.  Several recognition strategies have benefited from deep learning for unsupervised feature selection, which has two important property – fine tuning and incremental update. Question! Can deep learning be leveraged upon for continuous learning of activity models from streaming videos? Contributions", "title": "" }, { "docid": "60de343325a305b08dfa46336f2617b5", "text": "On Friday, May 12, 2017 a large cyber-attack was launched using WannaCry (or WannaCrypt). In a few days, this ransomware virus targeting Microsoft Windows systems infected more than 230,000 computers in 150 countries. Once activated, the virus demanded ransom payments in order to unlock the infected system. The widespread attack affected endless sectors – energy, transportation, shipping, telecommunications, and of course health care. Britain’s National Health Service (NHS) reported that computers, MRI scanners, blood-storage refrigerators and operating room equipment may have all been impacted. Patient care was reportedly hindered and at the height of the attack, NHS was unable to care for non-critical emergencies and resorted to diversion of care from impacted facilities. While daunting to recover from, the entire situation was entirely preventable. A Bcritical^ patch had been released by Microsoft on March 14, 2017. Once applied, this patch removed any vulnerability to the virus. However, hundreds of organizations running thousands of systems had failed to apply the patch in the first 59 days it had been released. This entire situation highlights a critical need to reexamine how we maintain our health information systems. Equally important is a need to rethink how organizations sunset older, unsupported operating systems, to ensure that security risks are minimized. For example, in 2016, the NHS was reported to have thousands of computers still running Windows XP – a version no longer supported or maintained by Microsoft. There is no question that this will happen again. However, health organizations can mitigate future risk by ensuring best security practices are adhered to.", "title": "" }, { "docid": "7d1a11732afff03a1be9677949e9806b", "text": "In this paper we survey the basics of Reinforcement Learning and (Evolutionary) Game Theory, applied to the field of Multi-Agent Systems. This paper contains three parts. We start with an overview on the fundamentals of Reinforcement Learning. Next we summarize the most important aspects of Evolutionary Game Theory. 
Finally, we discuss the state-of-the-art of Multi-Agent Reinforcement Learning and the mathematical connection with Evolutionary Game Theory.", "title": "" }, { "docid": "7b6c93b9e787ab0ba512cc8aaff185af", "text": "INTRODUCTION The field of second (or foreign) language teaching has undergone many fluctuations and dramatic shifts over the years. As opposed to physics or chemistry, where progress is more or less steady until a major discovery causes a radical theoretical revision (Kuhn, 1970), language teaching is a field where fads and heroes have come and gone in a manner fairly consistent with the kinds of changes that occur in youth culture. I believe that one reason for the frequent changes that have been taking place until recently is the fact that very few language teachers have even the vaguest sense of history about their profession and are unclear concerning the historical bases of the many methodological options they currently have at their disposal. It is hoped that this brief and necessarily oversimplified survey will encourage many language teachers to learn more about the origins of their profession. Such knowledge will give some healthy perspective in evaluating the socalled innovations or new approaches to methodology that will continue to emerge over time.", "title": "" }, { "docid": "c2a9a54b230e3c3a98f07c7619cc6ebe", "text": "H.264/AVC significantly outperforms previous video coding standards with many new coding tools. However, the better performance comes at the price of the extraordinarily huge computational complexity and memory access requirement, which makes it difficult to design a hardwired encoder for real-time applications. In addition, due to the complex, sequential, and highly data-dependent characteristics of the essential algorithms in H.264/AVC, both the pipelining and the parallel processing techniques are constrained to be employed. The hardware utilization and throughput are also decreased because of the block/MB/frame-level reconstruction loops. In this paper, we describe our techniques to design the H.264/AVC video encoder for HDTV applications. On the system design level, in consideration of the characteristics of the key components and the reconstruction loops, the four-stage macroblock pipelined system architecture is first proposed with an efficient scheduling and memory hierarchy. On the module design level, the design considerations of the significant modules are addressed followed by the hardware architectures, including low-bandwidth integer motion estimation, parallel fractional motion estimation, reconfigurable intrapredictor generator, dual-buffer block-pipelined entropy coder, and deblocking filter. With these techniques, the prototype chip of the efficient H.264/AVC encoder is implemented with 922.8 K logic gates and 34.72-KB SRAM at 108-MHz operation frequency.", "title": "" }, { "docid": "120ae3067b1f0027faefeee84e6bd296", "text": "We review arguments and empirical evidence found in the comparative literature that bear on the differences in the survival rates of parliamentary and presidential democracies. Most of these arguments focus on the fact that presidential democracies are based on the separation of executive and legislative powers, while parliamentary democracies are based on the fusion of these powers. From this basic distinction several implications are derived which would lead to radically different behavior and outcomes under each regime. 
We argue that this perspective is misguided and that we cannot deduce the functioning of the political system from the way governments are formed. There are other provisions, constitutional or otherwise, that also affect the way parliamentary and presidential democracies operate and that may counteract some of the tendencies that we would expect to observe if we were to derive the regime’s performance from its basic constitutional principle.", "title": "" }, { "docid": "29af3fa5673624831be3ba4a64a078d6", "text": "A low-profile single-fed, wideband, circularly-polarized slot antenna is proposed. The antenna comprises a square slot fed by a U-shaped microstrip line which provides a wide impedance bandwidth. Wideband circular polarization is obtained by incorporating a metasurface consisting of a 9 × 9 lattice of periodic metal plates. It is shown that this metasurface generates additional resonances, lowers the axial ratio (AR) of the radiating structure, and enhances the radiation pattern stability at higher frequencies. The overall size of the antenna is only 28 mm × 28 mm (0.3 λo × 0.3 λo). The proposed antenna shows an impedance bandwidth from 2.6 GHz to 9 GHz (110.3%) for |S11| < −10 dB, and an axial ratio bandwidth from 3.5 GHz to 6.1 GHz (54.1%) for AR < 3 dB. The antenna has a stable radiation pattern and a gain of greater than 3 dBi over the entire frequency band.", "title": "" }, { "docid": "87c09def017d5e32f06a887e5d06b0ff", "text": "A blade element momentum theory propeller model is coupled with a commercial RANS solver. This allows the fully appended self-propulsion of the autonomous underwater vehicle Autosub 3 to be considered. The quasi-steady propeller model has been developed to allow for circumferential and radial variations in axial and tangential inflow. The non-uniform inflow is due to control surface deflections and the bow-down pitch of the vehicle in cruise condition. The influence of propeller blade Reynolds number is included through the use of appropriate sectional lift and drag coefficients. Simulations have been performed over the vehicle's operational speed range (Re = 6.8× 10 to 13.5× 10). A workstation is used for the calculations with mesh sizes up to 2x10 elements. Grid uncertainty is calculated to be 3.07% for the wake fraction. The initial comparisons with in-service data show that the coupled RANS-BEMT simulation under-predicts the drag of the vehicle and consequently the required propeller rpm. However, when an appropriate correction is made for the effect on resistance of various protruding sensors, the predicted propulsor rpm matches well with that of in-service rpm measurements for vessel speeds (1 m/s to 2 m/s). The developed analysis captures the important influence of the propeller blade and hull Reynolds number on overall system efficiency.", "title": "" }, { "docid": "2f17bb51bca7972726c25994333aa827", "text": "We present new approximation algorithms for the k-median and k-means clustering problems. To this end, we obtain small coresets for k-median and k-means clustering in general metric spaces and in Euclidean spaces. In R^d, these coresets are of size with polynomial dependency on the dimension d. This leads to (1 + ε)-approximation algorithms to the optimal k-median and k-means clustering in R^d, with running time O(ndk + 2^((k/ε)^O(1)) d^2 log n), where n is the number of points. This improves over previous results.
We use those coresets to maintain a (1 + ε)-approximate k-median and k-means clustering of a stream of points in R^d, using O(d^2 k^2 ε^−2 log n) space. These are the first streaming algorithms, for those problems, that have space complexity with polynomial dependency on the dimension.", "title": "" }, { "docid": "52e36a3910d9782f60cd8fcb3dc54c60", "text": "INTRODUCTION\nCognitive behavioural therapy (CBT) with trauma focus is the most evidence supported psychotherapeutic treatment of PTSD, but few CBT treatments for traumatized refugees have been described in detail.\n\n\nPURPOSE\nTo describe and evaluate a manualized cognitive behavioral therapy for traumatized refugees incorporating exposure therapy, mindfulness and acceptance and commitment therapy.\n\n\nMATERIAL AND METHODS\n85 patients received six months' treatment at a Copenhagen Trauma Clinic for Refugees and completed self-ratings before and after treatment. The treatment administered to each patient was monitored in detail. The changes in mental state and the treatment components associated with change in state were analyzed statistically.\n\n\nRESULTS\nDespite the low level of functioning and high co-morbidity of patients, 42% received highly structured CBT, which was positively associated with all treatment outcomes. The more methods used and the more time each method was used, the better the outcome. The majority of patients were able to make homework assignments and this was associated with better treatment outcome. Correlation analysis showed no association between severity of symptoms at baseline and the observed change.\n\n\nCONCLUSION\nThe study suggests that CBT treatment incorporating mindfulness and acceptance and commitment therapy is promising for traumatized refugees and punctures the myth that this group of patients are unable to participate fully in structured CBT. However, treatment methods must be adapted to the special needs of refugees and trauma exposure should be further investigated.", "title": "" } ]
scidocsrr
0b0dbb23aab7a430c7e2bdd177bf9a6a
Using a novel computational drug-repositioning approach (DrugPredict) to rapidly identify potent drug candidates for cancer treatment
[ { "docid": "5386fa18e817fd865c0119bc3ee041c3", "text": "DrugBank is a unique bioinformatics/cheminformatics resource that combines detailed drug (i.e. chemical) data with comprehensive drug target (i.e. protein) information. The database contains >4100 drug entries including >800 FDA approved small molecule and biotech drugs as well as >3200 experimental drugs. Additionally, >14,000 protein or drug target sequences are linked to these drug entries. Each DrugCard entry contains >80 data fields with half of the information being devoted to drug/chemical data and the other half devoted to drug target or protein data. Many data fields are hyperlinked to other databases (KEGG, PubChem, ChEBI, PDB, Swiss-Prot and GenBank) and a variety of structure viewing applets. The database is fully searchable supporting extensive text, sequence, chemical structure and relational query searches. Potential applications of DrugBank include in silico drug target discovery, drug design, drug docking or screening, drug metabolism prediction, drug interaction prediction and general pharmaceutical education. DrugBank is available at http://redpoll.pharmacy.ualberta.ca/drugbank/.", "title": "" } ]
[ { "docid": "411b7397d42463bdc5459a401fee4d2b", "text": "Principled development techniques could greatly enhance the understandability of expert systems for both users and system developers. Current systems have limited explanatory capabilities and present maintenance problems because of a failure to explicitly represent the knowledge and reasoning that went into their design. This paper describes a paradigm for constructing expert systems which attempts to identify that tacit knowledge, provide means for capturing it in the knowledge bases of expert systems, and, apply it towards more perspicuous machine-generated explanations and more consistent and maintainable system organization.", "title": "" }, { "docid": "490df7bfea3338d98cbc0bd945463606", "text": "This study examined perceived coping (perceived problem-solving ability and progress in coping with problems) as a mediator between adult attachment (anxiety and avoidance) and psychological distress (depression, hopelessness, anxiety, anger, and interpersonal problems). Survey data from 515 undergraduate students were analyzed using structural equation modeling. Results indicated that perceived coping fully mediated the relationship between attachment anxiety and psychological distress and partially mediated the relationship between attachment avoidance and psychological distress. These findings suggest not only that it is important to consider attachment anxiety or avoidance in understanding distress but also that perceived coping plays an important role in these relationships. Implications for these more complex relations are discussed for both counseling interventions and further research.", "title": "" }, { "docid": "dd952376732c8b202baec3b8455e9a96", "text": "The paper discusses basic principles of different types of working gyroscope based resonators whispering gallery modes.", "title": "" }, { "docid": "c97ecd776c03b222c98fa63910f1f986", "text": "Existing image classification datasets used in computer vision tend to have a uniform distribution of images across object categories. In contrast, the natural world is heavily imbalanced, as some species are more abundant and easier to photograph than others. To encourage further progress in challenging real world conditions we present the iNaturalist species classification and detection dataset, consisting of 859,000 images from over 5,000 different species of plants and animals. It features visually similar species, captured in a wide variety of situations, from all over the world. Images were collected with different camera types, have varying image quality, feature a large class imbalance, and have been verified by multiple citizen scientists. We discuss the collection of the dataset and present extensive baseline experiments using state-of-the-art computer vision classification and detection models. Results show that current non-ensemble based methods achieve only 67% top one classification accuracy, illustrating the difficulty of the dataset. Specifically, we observe poor results for classes with small numbers of training examples suggesting more attention is needed in low-shot learning.", "title": "" }, { "docid": "5c0dea7721a5f63a11fe4df28c60d64f", "text": "INTRODUCTION\nReducing postoperative opioid consumption is a priority given its impact upon recovery, and the efficacy of ketamine as an opioid-sparing agent in children is debated. 
The goal of this study was to update a previous meta-analysis on the postoperative opioid-sparing effect of ketamine, adding trial sequential analysis (TSA) and four new studies.\n\n\nMATERIALS AND METHODS\nA comprehensive literature search was conducted to identify clinical trials that examined ketamine as a perioperative opioid-sparing agent in children and infants. Outcomes measured were postoperative opioid consumption to 48 h (primary outcome: postoperative opioid consumption to 24 h), postoperative pain intensity, postoperative nausea and vomiting and psychotomimetic symptoms. The data were combined to calculate the pooled mean difference, odds ratios or standard mean differences. In addition to this classical meta-analysis approach, a TSA was performed.\n\n\nRESULTS\nEleven articles were identified, with four added to seven from the previous meta-analysis. Ketamine did not exhibit a global postoperative opioid-sparing effect to 48 postoperative hours, nor did it decrease postoperative pain intensity. This result was confirmed using TSA, which found a lack of power to draw any conclusion regarding the primary outcome of this meta-analysis (postoperative opioid consumption to 24 h). Ketamine did not increase the prevalence of either postoperative nausea and vomiting or psychotomimetic complications.\n\n\nCONCLUSIONS\nThis meta-analysis did not find a postoperative opioid-sparing effect of ketamine. According to the TSA, this negative result might involve a lack of power of this meta-analysis. Further studies are needed in order to assess the postoperative opioid-sparing effects of ketamine in children.", "title": "" }, { "docid": "b657aeceeee6c29330cf45dcc40d6198", "text": "A small form-factor 60-GHz SiGe BiCMOS radio with two antennas-in-package is presented. The fully-integrated feature-rich transceiver provides a complete RF solution for mobile WiGig/IEEE 802.11ad applications.", "title": "" }, { "docid": "d3c059d0889fc390a91d58aa82980fcc", "text": "In recent trends industries, organizations and many companies are using personal identification strategies like finger print identification, RFID for tracking attendance and etc. Among of all these personal identification strategies face recognition is most natural, less time taken and high efficient one. It’s has several applications in attendance management systems and security systems. The main strategy involve in this paper is taking attendance in organizations, industries and etc. using face detection and recognition technology. A time period is settled for taking the attendance and after completion of time period attendance will directly stores into storage device mechanically without any human intervention. A message will send to absent student parent mobile using GSM technology. This attendance will be uploaded into web server using Ethernet. This raspberry pi 2 module is used in this system to achieve high speed of operation. Camera is interfaced to one USB port of raspberry pi 2. Eigen faces algorithm is used for face detection and recognition technology. Eigen faces algorithm is less time taken and high effective than other algorithms like viola-jones algorithm etc. the attendance will directly stores in storage device like pen drive that is connected to one of the USB port of raspberry pi 2. 
This system is most effective, easy and less time taken for tracking attendance in organizations with period wise without any human intervention.", "title": "" }, { "docid": "0974e9b09666c8a4f5a6ecd2446c9571", "text": "The aim of this systematic review is to assess the effectiveness and safety of Chinese herbal medicine (CHM) in treatment of anovulation and infertility in women. Eight (8) databases were extensively retrieved. The Chinese electronic databases included VIP Information, CMCC, and CNKI. The English electronic databases included AMED, CINAHL, Cochrane Library, Embase, and MEDLINE(®). Randomized controlled trials using CHM as intervention were included in the study selection. The quality of studies was assessed by the Jadad scale and the criteria referred to Cochrane reviewers' handbook. The efficacy of CHM treatment for infertility with anovulation was evaluated by meta-analysis. There were 692 articles retrieved according to the search strategy, and 1659 participants were involved in the 15 studies that satisfied the selection criteria. All the included trials were done in China. Meta-analysis indicated that CHM significantly increased the pregnancy rate (odds ratio [OR] 3.12, 95% confidence interval [CI] 2.50-3.88) and reduced the miscarriage rate (OR 0.2, 95% CI 0.10-0.41) compared to clomiphene. In addition, CHM also increased the ovulation rate (OR 1.55, 95% CI 1.06-2.25) and improved the cervical mucus score (OR 3.82, 95% CI 1.78-8.21) compared to clomiphene, while there were no significant difference between CHM and clomiphene combined with other medicine. CHM is effective in treating infertility with anovulation. Also, no significant adverse effects were identified for the use of CHM from the studies included in this review. However, owing to the low quality of the studies investigated, more randomized controlled trials are needed before evidence-based recommendation regarding the effectiveness and safety of CHM in the management of infertility with anovulation can be provided.", "title": "" }, { "docid": "9973ae8007662ae54ad272f84c771f69", "text": "Skeletal deficiency in the central midface impacts nasal aesthetics. This lack of lower face projection can be corrected by alloplastic augmentation of the pyriform aperture. Creating convexity in the deficient midface will make the nose seem less prominent. Augmentation of the pyriform aperture is, therefore, often a useful adjunct during the rhinoplasty procedure. Augmenting the skeleton in this area can alter the projection of the nasal base, the nasolabial angle, and the vertical plane of the lip. The implant design and surgical techniques described here are extensions of others' previous efforts to improve paranasal aesthetics.", "title": "" }, { "docid": "4007287be14b0cc732f5c87458f01147", "text": "In view of the importance of molecular sensing in the function of the gastrointestinal (GI) tract, we assessed whether signal transduction proteins that mediate taste signaling are expressed in cells of the human gut. Here, we demonstrated that the alpha-subunit of the taste-specific G protein gustducin (Galpha(gust)) is expressed prominently in cells of the human colon that also contain chromogranin A, an established marker of endocrine cells. Double-labeling immunofluorescence and staining of serial sections demonstrated that Galpha(gust) localized to enteroendocrine L cells that express peptide YY and glucagon-like peptide-1 in the human colonic mucosa. 
We also found expression of transcripts encoding human type 2 receptor (hT2R) family members, hT1R3, and Galpha(gust) in the human colon and in the human intestinal endocrine cell lines (HuTu-80 and NCI-H716 cells). Stimulation of HuTu-80 or NCI-H716 cells with the bitter-tasting compound phenylthiocarbamide, which binds hT2R38, induced a rapid increase in the intracellular Ca2+ concentration in these cells. The identification of Galpha(gust) and chemosensory receptors that perceive chemical components of ingested substances, including drugs and toxins, in open enteroendocrine L cells has important implications for understanding molecular sensing in the human GI tract and for developing novel therapeutic compounds that modify the function of these receptors in the gut.", "title": "" }, { "docid": "4d331769ca3f02e9ec96e172d98f3fab", "text": "This review focuses on the most recent applications of zinc oxide (ZnO) nanostructures for tissue engineering. ZnO is one of the most investigated metal oxides, thanks to its multifunctional properties coupled with the ease of preparing various morphologies, such as nanowires, nanorods, and nanoparticles. Most ZnO applications are based on its semiconducting, catalytic and piezoelectric properties. However, several works have highlighted that ZnO nanostructures may successfully promote the growth, proliferation and differentiation of several cell lines, in combination with the rise of promising antibacterial activities. In particular, osteogenesis and angiogenesis have been effectively demonstrated in numerous cases. Such peculiarities have been observed both for pure nanostructured ZnO scaffolds as well as for three-dimensional ZnO-based hybrid composite scaffolds, fabricated by additive manufacturing technologies. Therefore, all these findings suggest that ZnO nanostructures represent a powerful tool in promoting the acceleration of diverse biological processes, finally leading to the formation of new living tissue useful for organ repair.", "title": "" }, { "docid": "15208617386aeb77f73ca7c2b7bb2656", "text": "Multiplication is the basic building block for several DSP processors, Image processing and many other. Over the years the computational complexities of algorithms used in Digital Signal Processors (DSPs) have gradually increased. This requires a parallel array multiplier to achieve high execution speed or to meet the performance demands. A typical implementation of such an array multiplier is Braun design. Braun multiplier is a type of parallel array multiplier. The architecture of Braun multiplier mainly consists of some Carry Save Adders, array of AND gates and one Ripple Carry Adder. In this research work, a new design of Braun Multiplier is proposed and this proposed design of multiplier uses a very fast parallel prefix adder ( Kogge Stone Adder) in place of Ripple Carry Adder. The architecture of standard Braun Multiplier is modified in this work for reducing the delay due to Ripple Carry Adder and performing faster multiplication of two binary numbers. This research also presents a comparative study of FPGA implementation on Spartan2 and Spartartan2E for new multiplier design and standard braun multiplier. The RTL design of proposed new Braun Multiplier and standard braun multiplier is done using Verilog HDL. The simulation is performed using ModelSim. The Xilinx ISE design tool is used for FPGA implementation. 
Comparative result shows the modified design is effective when compared in terms of delay with the standard design.", "title": "" }, { "docid": "e6ac18a296303d4b113bd8114247bbd3", "text": "Enabling a sustainable mobility is one of primary goals of the so-called Smart Cities vision, and in this perspective, the deployment of intelligent parking systems represents a key aspect. This paper presents a novel IoT-aware Smart Parking System based on the jointly use of different technologies, such as RFID, WSN, NFC, and mobile. It is able to collect, in real time, both environmental parameters and information about the occupancy state of parking spaces. To reduce the overall system costs, the possibility to use a solar RFID tag as cars’ detection system has been analyzed. The system allows drivers to reach the nearest vacant parking spot and to pay for the parking fee, by using a customized mobile application. Furthermore, a software application based on RESTful Java and Google Cloud Messaging technologies has been installed on a CS in order to manage alert events. A proof-of-concept has been defined to demonstrate that the proposed solution is able to satisfy real requirements of an innovative Smart Parking System, while preliminary analysis of solar tag usage investigates the feasibility of the proposed detection solution.", "title": "" }, { "docid": "658f2d045fe005ee1a4016b2de0ae1b1", "text": "Given a partial description like “she opened the hood of the car,” humans can reason about the situation and anticipate what might come next (“then, she examined the engine”). In this paper, we introduce the task of grounded commonsense inference, unifying natural language inference and commonsense reasoning. We present Swag, a new dataset with 113k multiple choice questions about a rich spectrum of grounded situations. To address the recurring challenges of the annotation artifacts and human biases found in many existing datasets, we propose Adversarial Filtering (AF), a novel procedure that constructs a de-biased dataset by iteratively training an ensemble of stylistic classifiers, and using them to filter the data. To account for the aggressive adversarial filtering, we use state-of-theart language models to massively oversample a diverse set of potential counterfactuals. Empirical results demonstrate that while humans can solve the resulting inference problems with high accuracy (88%), various competitive models struggle on our task. We provide comprehensive analysis that indicates significant opportunities for future research.", "title": "" }, { "docid": "70ab1376ca7f1f39770e2d1d9bed6435", "text": "PURPOSE\nTo estimate the overall effect of 1000 ppm F relative to 250 ppm F toothpaste.\n\n\nMETHODS\nExperimental caries increment studies from the dental literature, which compared 1000 ppm with 250 ppm fluoride toothpastes, were summarized using meta-analytic methods.\n\n\nRESULTS\nThe overall caries reduction of 1000 ppm F relative to 250 ppm F paste was estimated to be 0.142 (95%-CL: 0.074-0.210) when applying a fixed effects model and 0.129 (95%-CL: 0.012-0.230) when applying a random effects model.\n\n\nCLINICAL SIGNIFICANCE\nThe present analysis found slightly lower caries increments (14%, 13%) in children using 1000 ppm F toothpastes compared to children using 250 ppm F pastes. On the other hand, the use of 1000 ppm F pastes is associated with dental fluorosis. 
Considering these effects it seems justifiable to the authors to keep the use of 250 ppm F pastes for preschool children in Switzerland.", "title": "" }, { "docid": "ab28314dbecefd6cb55e7c40752a6647", "text": "This paper deals with the exposition of how robotics can be applied to various fields of agriculture. One of the most important occupations in a developing country like India is agriculture. It is very important to improve the efficiency and productivity of agriculture by replacing laborers with intelligent machines like robots using latest technologies. The paper proposes a new strategy to replace humans in various agricultural operations like detection of presence of pests, spraying of pesticides, spraying of fertilizers, etc there by providing safety to the farmers and precision agriculture. The developed system involves designing a prototype which uses simple cost effective equipments like microprocessors, wireless camera, various motors and terminal equipments which is an aid to the farmers in various crop field activities.", "title": "" }, { "docid": "c560dd620b3c9c6718ce717ac33f0c21", "text": "This paper investigates the autocalibration of microelectromechanical systems (MEMS) triaxial accelerometer (TA) based on experimental design (DoE). First, for a special 6-parameter second-degree model, a six-point experimental scheme is proposed, and its G-optimality has been proven based on optimal DoE. Then, a new linearization approach is introduced, by which the TA model for autocalibration can be simplified as the expected second-degree form so that the proposed optimal experimental scheme can be applied. To reliably estimate the model parameter, a convergence-guaranteed iterative algorithm is also proposed, which can significantly reduce the bias caused by linearization. Thereafter, the effectiveness and robustness of the proposed approach have been demonstrated by simulation. Finally, the proposed calibration method has been experimentally verified using two typical types of MEMS TA, and desired experimental results effectively demonstrate the efficiency and accuracy of the proposed calibration approach.", "title": "" }, { "docid": "6a1d1be521a4ac0d838cebe2a779b1a9", "text": "Immunoglobulin (IgM) was isolated from the serum of four fish species, Atlantic salmon (Salmo salar L.), halibut (Hippoglossus hippoglossus L.), haddock (Melanogrammus aeglefinus L.) and cod (Gadus morhua L.) and a comparison made of some physical and biochemical properties. The molecular weight of IgM varied between the different species and between the different analytical methods used. IgM from all four species was tetrameric in serum although a proportion of the molecule was held together by noncovalent forces. Salmon and haddock IgM were composed of two IgM types as regards the overall charge whereas halibut and cod IgM were homogeneous in this respect. The molecular weight of the heavy and light chains was similar for all four species. The oligosaccharide moiety, which was N-linked and associated with the heavy chain varied from 7.8 to 11.4% of the total molecular weight. Lectin analysis indicated variable composition of the carbohydrate moiety between species. The sensitivity to PNGase and trypsin varied between the four species.", "title": "" }, { "docid": "996f1743ca60efa05f5113a4459f8b61", "text": "This paper presents a method for movie genre categorization of movie trailers, based on scene categorization. 
We view our approach as a step forward from using only low-level visual feature cues, towards the eventual goal of high-level semantic understanding of feature films. Our approach decomposes each trailer into a collection of keyframes through shot boundary analysis. From these keyframes, we use state-of-the-art scene detectors and descriptors to extract features, which are then used for shot categorization via unsupervised learning. This allows us to represent trailers using a bag-of-visual-words (bovw) model with shot classes as vocabularies. We approach the genre classification task by mapping bovw temporally structured trailer features to four high-level movie genres: action, comedy, drama or horror films. We have conducted experiments on 1239 annotated trailers. Our experimental results demonstrate that exploiting scene structures improves film genre classification compared to using only low-level visual features.", "title": "" }, { "docid": "2a45fb350731967591487e0b6c9a820c", "text": "In this chapter, we report the first experimental explorations of reinforcement learning in Tourette syndrome, realized by our team in the last few years. This report will be preceded by an introduction aimed to provide the reader with the state of the art of the knowledge concerning the neural bases of reinforcement learning at the moment of these studies and the scientific rationale beyond them. In short, reinforcement learning is learning by trial and error to maximize rewards and minimize punishments. This decision-making and learning process implicates the dopaminergic system projecting to the frontal cortex-basal ganglia circuits. A large body of evidence suggests that the dysfunction of the same neural systems is implicated in the pathophysiology of Tourette syndrome. Our results show that Tourette condition, as well as the most common pharmacological treatments (dopamine antagonists), affects reinforcement learning performance in these patients. Specifically, the results suggest a deficit in negative reinforcement learning, possibly underpinned by a functional hyperdopaminergia, which could explain the persistence of tics, despite their evident inadaptive (negative) value. This idea, together with the implications of these results in Tourette therapy and the future perspectives, is discussed in Section 4 of this chapter.", "title": "" } ]
scidocsrr
f6ce34afe32912d171ab691584da41d0
Homography flow for dense correspondences
[ { "docid": "fbe4aa483a475943408c347210a1f03d", "text": "We present a Generalized Deformable Spatial Pyramid (GDSP) matching algorithm for calculating the dense correspondence between a pair of images with large appearance variations. The main challenges of the problem generally originate in appearance dissimilarities and geometric variations between images. To address these challenges, we improve the existing Deformable Spatial Pyramid (DSP) [10] model by generalizing the search space and devising the spatial smoothness. The former is leveraged by rotations and scales, and the latter simultaneously considers dependencies between high-dimensional labels through the pyramid structure. Our spatial regularization in the high-dimensional space enables our model to effectively preserve the meaningful geometry of objects in the input images while allowing for a wide range of geometry variations such as perspective transform and non-rigid deformation. The experimental results on public datasets and challenging scenarios show that our method outperforms the state-of-the-art methods both qualitatively and quantitatively.", "title": "" } ]
[ { "docid": "d90a6f0b13b42ea44d214b3584fd41d7", "text": "Much work on the demographics of social media platforms such as Twitter has focused on the properties of individuals, such as gender or age. However, because credible detectors for organization accounts do not exist, these and future largescale studies of human behavior on social media can be contaminated by the presence of accounts belonging to organizations. We analyze organizations on Twitter to assess their distinct behavioral characteristics and determine what types of organizations are active. We first create a dataset of manually classified accounts from a representative sample of Twitter and then introduce a classifier to distinguish between organizational and personal accounts. In addition, we find that although organizations make up less than 10% of the accounts, they are significantly more connected, with an order of magnitude more friends and followers.", "title": "" }, { "docid": "e3079a6c47a804498cea0caf804d6f11", "text": "For realtime walking control of a biped robot, we analyze the dynamics of a three-dimensional inverted pendulum whose motions are constrained onto an arbitrarily defined plane. This analysis leads us a simple linear dynamics, the Three-Dimensional Linear Inverted Pendulum Mode (3D-LIPM). Geometric nature of trajectories under the 3D-LIPM is discussed, and an algorithm for walking pattern generation is presented. Experimental results of realtime walking control of a 12 d.o.f. biped robot HRP-2L using an input device such as a game pad are also shown.", "title": "" }, { "docid": "ad88d2e2213624270328be0aa019b5cd", "text": "The traditional decision-making framework for newsvendor models is to assume a distribution of the underlying demand. However, the resulting optimal policy is typically sensitive to the choice of the distribution. A more conservative approach is to assume that the distribution belongs to a set parameterized by a few known moments. An ambiguity-averse newsvendor would choose to maximize the worst-case profit. Most models of this type assume that only the mean and the variance are known, but do not attempt to include asymmetry properties of the distribution. Other recent models address asymmetry by including skewness and kurtosis. However, closed-form expressions on the optimal bounds are difficult to find for such models. In this paper, we propose a framework under which the expectation of a piecewise linear objective function is optimized over a set of distributions with known asymmetry properties. This asymmetry is represented by the first two moments of multiple random variables that result from partitioning the original distribution. In the simplest case, this reduces to semivariance. The optimal bounds can be solved through a second-order cone programming (SOCP) problem. This framework can be applied to the risk-averse and risk-neutral newsvendor problems and option pricing. We provide a closed-form expression for the worst-case newsvendor profit with only mean, variance and semivariance information.", "title": "" }, { "docid": "94b5123af687f4acc7a897da67431809", "text": "Over the last several years, it has become apparent that there are critical problems with the hypothesis that brain dopamine (DA) systems, particularly in the nucleus accumbens, directly mediate the rewarding or primary motivational characteristics of natural stimuli such as food. 
Hypotheses related to DA function are undergoing a substantial restructuring, such that the classic emphasis on hedonia and primary reward is giving way to diverse lines of research that focus on aspects of instrumental learning, reward prediction, incentive motivation, and behavioral activation. The present review discusses dopaminergic involvement in behavioral activation and, in particular, emphasizes the effort-related functions of nucleus accumbens DA and associated forebrain circuitry. The effects of accumbens DA depletions on food-seeking behavior are critically dependent upon the work requirements of the task. Lever pressing schedules that have minimal work requirements are largely unaffected by accumbens DA depletions, whereas reinforcement schedules that have high work (e.g., ratio) requirements are substantially impaired by accumbens DA depletions. Moreover, interference with accumbens DA transmission exerts a powerful influence over effort-related decision making. Rats with accumbens DA depletions reallocate their instrumental behavior away from food-reinforced tasks that have high response requirements, and instead, these rats select a less-effortful type of food-seeking behavior. Along with prefrontal cortex and the amygdala, nucleus accumbens is a component of the brain circuitry regulating effort-related functions. Studies of the brain systems regulating effort-based processes may have implications for understanding drug abuse, as well as energy-related disorders such as psychomotor slowing, fatigue, or anergia in depression.", "title": "" }, { "docid": "19a78b1fc19fe25ec5d29baebfe14feb", "text": "A split-capacitor Vcm-based capacitor-switching scheme is proposed for successive approximation register (SAR) analog-to-digital converters (ADCs) to reduce the capacitor-switching energy. By rearranging the structure and procedure of the capacitive array, the scheme can save the capacitor-switching energy by about 92% than the conventional scheme with better monotonicity. Meanwhile, a two-segment DC offset correction scheme for the comparator is also proposed to meet the speed and accuracy requirements. These techniques are utilized in the design of a 10b 70MS/s SAR ADC in 65nm 1P9M CMOS technology. Measurement results show a peak signal-to-noise-and-distortion ratio (SNDR) of 53.2dB, while consuming 960μW from 1.2V supply. The figure of merit (FoM) is 36.8fJ/Conversion-step and the total active area is 220×220μm2.", "title": "" }, { "docid": "813977850636d545d946503fa09c47ba", "text": "In this paper we discuss the opportunities and challenges of the recently introduced Lean UX software development philosophy. The point of view is product design and development in a software agency. Lean UX philosophy is identified by three ingredients: design thinking, Lean production and Agile development. The major challenge for an agency is the organizational readiness of the client organization to adopt a new way of working. Rather than any special tool or practice, we see that the renewal of user-centered design and development is hindered by existing purchase processes and slow decision making patterns.", "title": "" }, { "docid": "c4a69d651b166503abd2686b02162b72", "text": "Existing asymmetric encryption algorithms require the storage of the secret private key. Stored keys are often protected by poorly selected user passwords that can either be guessed or obtained through brute force attacks. 
This is a weak link in the overall encryption system and can potentially compromise the integrity of sensitive data. Combining biometrics with cryptography is seen as a possible solution but any biometric cryptosystem must be able to overcome small variations present between different acquisitions of the same biometric in order to produce consistent keys. This paper discusses a new method which uses an entropy based feature extraction process coupled with Reed-Solomon error correcting codes that can generate deterministic bit-sequences from the output of an iterative one-way transform. The technique is evaluated using 3D face data and is shown to reliably produce keys of suitable length for 128-bit Advanced Encryption Standard (AES).", "title": "" }, { "docid": "8fb99cd1e2db6b1e4f3f0c2fa1b125bc", "text": "Temptation pervades modern social life, including the temptation to engage in infidelity. The present investigation examines one factor that may put individuals at a greater risk of being unfaithful to their partner: dispositional avoidant attachment style. The authors hypothesize that avoidantly attached people may be less resistant to temptations for infidelity due to lower levels of commitment in romantic relationships. This hypothesis was confirmed in 8 studies. People with high, vs. low, levels of dispositional avoidant attachment had more permissive attitudes toward infidelity (Study 1), showed attentional bias toward attractive alternative partners (Study 2), expressed greater daily interest in meeting alternatives to their current relationship partner (Study 5), perceived alternatives to their current relationship partner more positively (Study 6), and engaged in more infidelity over time (Studies 3, 4, 7, and 8). This effect was mediated by lower levels of commitment (Studies 5-8). Thus, avoidant attachment predicted a broad spectrum of responses indicative of interest in alternatives and propensity to engage in infidelity, which were mediated by low levels of commitment.", "title": "" }, { "docid": "f83a8c7d80085c9428421a69202af206", "text": "The simulation of EM (electromagnetic) wave propagation requires considerable computation time, as it analyzes a large number of propagation paths. To overcome this problem, we propose a GPU (graphics processing unit)-based parallel algorithm for VPL (vertical plane launch)-approximated EM wave propagation. The conventional algorithm computes the gain along propagation paths with irregular memory access, which results in low GPU performance. In our proposed algorithm, a CPU reorders irregular propagation paths to a GPU-suitable linear order on the CPU memory at each receiving point. We hid the reordering time behind CPU-GPU communication and GPU-based computation of gain on the reordered memory. We found that our proposed algorithm with a quad GPU is up to 30 times faster than the conventional algorithm with a 16-threaded dual CPU.", "title": "" }, { "docid": "c7935c949a27be17dcebe047c38538e1", "text": "One of the main challenges online social systems face is the prevalence of antisocial behavior, such as harassment and personal attacks. In this work, we introduce the task of predicting from the very start of a conversation whether it will get out of hand. As opposed to detecting undesirable behavior after the fact, this task aims to enable early, actionable prediction at a time when the conversation might still be salvaged. 
To this end, we develop a framework for capturing pragmatic devices—such as politeness strategies and rhetorical prompts—used to start a conversation, and analyze their relation to its future trajectory. Applying this framework in a controlled setting, we demonstrate the feasibility of detecting early warning signs of antisocial behavior in online discussions.", "title": "" }, { "docid": "54ceed51f750eadda3038b42eb9977a5", "text": "Starting from the revolutionary Retinex by Land and McCann, several further perceptually inspired color correction models have been developed with different aims, e.g. reproduction of color sensation, robust features recognition, enhancement of color images. Such models have a differential, spatially-variant and non-linear nature and they can coarsely be distinguished between white-patch (WP) and gray-world (GW) algorithms. In this paper we show that the combination of a pure WP algorithm (RSR: random spray Retinex) and an essentially GW one (ACE) leads to a more robust and better performing model (RACE). The choice of RSR and ACE follows from the recent identification of a unified spatially-variant approach for both algorithms. Mathematically, the originally distinct non-linear and differential mechanisms of RSR and ACE have been fused using the spray technique and local average operations. The investigation of RACE allowed us to put in evidence a common drawback of differential models: corruption of uniform image areas. To overcome this intrinsic defect, we devised a local and global contrast-based and image-driven regulation mechanism that has a general applicability to perceptually inspired color correction algorithms. Tests, comparisons and discussions are presented.", "title": "" }, { "docid": "5b2a088f0f53b2a960c1ebad0f9e7251", "text": "The detailed balance method for calculating the radiative recombination limit to the performance of solar cells has been extended to include free carrier absorption and Auger recombination in addition to radiative losses. This method has been applied to crystalline silicon solar cells where the limiting efficiency is found to be 29.8 percent under AM1.5, based on the measured optical absorption spectrum and published values of the Auger and free carrier absorption coefficients. The silicon is assumed to be textured for maximum benefit from light-trapping effects.", "title": "" }, { "docid": "88486271f9e455bdba5d02c99dcc19c3", "text": "TextCNN, the convolutional neural network for text, is a useful deep learning algorithm for sentence classification tasks such as sentiment analysis and question classification[2]. However, neural networks have long been known as black boxes because interpreting them is a challenging task. Researchers have developed several tools to understand a CNN for image classification by deep visualization[6], but research about deep TextCNNs is still insufficient. In this paper, we are trying to understand what a TextCNN learns on two classical NLP datasets. Our work focuses on functions of different convolutional kernels and correlations between convolutional kernels.", "title": "" }, { "docid": "50dd728b4157aefb7df35366f5822d0d", "text": "This paper describes iDriver, an iPhone software to remote control “Spirit of Berlin”. “Spirit of Berlin” is a completely autonomous car developed by the Free University of Berlin which is capable of unmanned driving in urban areas. 
iDriver is an iPhone application sending control packets to the car in order to remote control its steering wheel, gas and brake pedal, gear shift and turn signals. Additionally, a video stream from two top-mounted cameras is broadcasted back to the iPhone.", "title": "" }, { "docid": "7fadaf4545b410729d5ab8aeb6b493da", "text": "Designing robust end-effector plays a crucial role in performance of a robot workcell. Design automation of industrial grippers’ fingers/jaws is therefore of the highest interest in the robot industry. This paper systematically reviews the enormous studies performed in relevant research areas for finger design automation. Key processes for successfully achieving automatic finger design are identified and research contributions in each key process are critically reviewed. The proposed approaches in each key process are analyzed, verified and benchmarked. The most promising methods to accomplish finger design automation are highlighted and presented. © 2015 Hosting by Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "b9467dabbac0ef26cd6e56bdae8a66e5", "text": "This paper analyses the relation between economic inequality at the macro-level and the political representation of poor citizens in a comparative perspective. More specifically it addresses the research question: Does the level of economic inequality at the time of the election affect how well citizens belonging to the two lowest quintiles of the income distribution are represented by the party system and governments as compared to richer citizens? Using survey data for citizens’ policy preferences and expert placement of political parties, we find that in economically more unequal societies the party system represents relatively poor citizens worse than in more equal societies. This moderating effect of economic inequality is also found for policy congruence between citizens and governments, albeit slightly less clear-cut. ∗ Jan Rosset is a doctoral student at the Swiss Foundation for Research in Social Sciences, c/o University of Lausanne, CH-1015 Lausanne, jan.rosset@fors.unil.ch. Dr Nathalie Giger is a post-doc research fellow at the Mannheim Centre for European Social Research, University of Mannheim, D68131 Mannheim, nathalie.giger@mzes.uni-mannheim.de. Julian Bernauer is a doctoral student at the Department of Politics and Management, University of Konstanz, D-78457 Konstanz, julian.bernauer@uni-konstanz.de. The authors gratefully acknowledge the financial support provided by the EUROCORES Programme of the European Science Foundation. Julian Bernauer has received support from the Heinrich Böll Foundation. We would like to thank Anna Walsdorf for excellent research assistance.", "title": "" }, { "docid": "c955e63d5c5a30e18c008dcc51d1194b", "text": "We report, for the first time, the identification of fatty acid particles in formulations containing the surfactant polysorbate 20. These fatty acid particles were observed in multiple mAb formulations during their expected shelf life under recommended storage conditions. The fatty acid particles were granular or sand-like in morphology and were several microns in size. They could be identified by distinct IR bands, with additional confirmation from energy-dispersive X-ray spectroscopy analysis. The particles were readily distinguishable from protein particles by these methods. 
In addition, particles containing a mixture of protein and fatty acids were also identified, suggesting that the particulation pathways for the two particle types may not be distinct. The techniques and observations described will be useful for the correct identification of proteinaceous versus nonproteinaceous particles in pharmaceutical products.", "title": "" }, { "docid": "34f83c7dde28c720f82581804accfa71", "text": "The main threats to human health from heavy metals are associated with exposure to lead, cadmium, mercury and arsenic. These metals have been extensively studied and their effects on human health regularly reviewed by international bodies such as the WHO. Heavy metals have been used by humans for thousands of years. Although several adverse health effects of heavy metals have been known for a long time, exposure to heavy metals continues, and is even increasing in some parts of the world, in particular in less developed countries, though emissions have declined in most developed countries over the last 100 years. Cadmium compounds are currently mainly used in re-chargeable nickel-cadmium batteries. Cadmium emissions have increased dramatically during the 20th century, one reason being that cadmium-containing products are rarely re-cycled, but often dumped together with household waste. Cigarette smoking is a major source of cadmium exposure. In non-smokers, food is the most important source of cadmium exposure. Recent data indicate that adverse health effects of cadmium exposure may occur at lower exposure levels than previously anticipated, primarily in the form of kidney damage but possibly also bone effects and fractures. Many individuals in Europe already exceed these exposure levels and the margin is very narrow for large groups. Therefore, measures should be taken to reduce cadmium exposure in the general population in order to minimize the risk of adverse health effects. The general population is primarily exposed to mercury via food, fish being a major source of methyl mercury exposure, and dental amalgam. The general population does not face a significant health risk from methyl mercury, although certain groups with high fish consumption may attain blood levels associated with a low risk of neurological damage to adults. Since there is a risk to the fetus in particular, pregnant women should avoid a high intake of certain fish, such as shark, swordfish and tuna; fish (such as pike, walleye and bass) taken from polluted fresh waters should especially be avoided. There has been a debate on the safety of dental amalgams and claims have been made that mercury from amalgam may cause a variety of diseases. However, there are no studies so far that have been able to show any associations between amalgam fillings and ill health. The general population is exposed to lead from air and food in roughly equal proportions. During the last century, lead emissions to ambient air have caused considerable pollution, mainly due to lead emissions from petrol. Children are particularly susceptible to lead exposure due to high gastrointestinal uptake and the permeable blood-brain barrier. Blood levels in children should be reduced below the levels so far considered acceptable, recent data indicating that there may be neurotoxic effects of lead at lower levels of exposure than previously anticipated. 
Although lead in petrol has dramatically decreased over the last decades, thereby reducing environmental exposure, phasing out any remaining uses of lead additives in motor fuels should be encouraged. The use of lead-based paints should be abandoned, and lead should not be used in food containers. In particular, the public should be aware of glazed food containers, which may leach lead into food. Exposure to arsenic is mainly via intake of food and drinking water, food being the most important source in most populations. Long-term exposure to arsenic in drinking-water is mainly related to increased risks of skin cancer, but also some other cancers, as well as other skin lesions such as hyperkeratosis and pigmentation changes. Occupational exposure to arsenic, primarily by inhalation, is causally associated with lung cancer. Clear exposure-response relationships and high risks have been observed.", "title": "" }, { "docid": "e50ecaca506b294b0d978a1a59571417", "text": "From the Department of Obstetrics and Gynecology, University of Utah Health Sciences Center (R.M.S., D.W.B.), and the Women and Newborns Clinical Program of Intermountain Healthcare (D.W.B.) — both in Salt Lake City. Address reprint requests to Dr. Silver at the Department of Obstetrics and Gynecology, University of Utah Health Sciences Center, 30 N. 1900 East, Rm. 2B200 SOM, Salt Lake City, UT 84132, or at bob . silver@ hsc . utah . edu.", "title": "" }, { "docid": "c44d2d76f66eb09a1cb7b3a8c0f13c45", "text": "We consider grading a fashion outfit for recommendation, where we assume that users have a closet of items and we aim at producing a score for an arbitrary combination of items in the closet. The challenge in outfit grading is that the input to the system is a bag of item pictures that are unordered and vary in size. We build a deep neural network-based system that can take variable-length items and predict a score. We collect a large number of outfits from a popular fashion sharing website, Polyvore, and evaluate the performance of our grading system. We compare our model with a random-choice baseline, both on the traditional classification evaluation and on people's judgment using a crowdsourcing platform. With over 84% in classification accuracy and 91% matching ratio to human annotators, our model can reliably grade the quality of an outfit. We also build an outfit recommender on top of our grader to demonstrate the practical application of our model for a personal closet assistant.", "title": "" } ]
scidocsrr
c9e213aa185cd28376b04d7ea8f4e079
Time constraints and resource sharing in adults' working memory spans.
[ { "docid": "6c8151eee3fcfaec7da724c2a6899e8f", "text": "Classic work on interruptions by Zeigarnik showed that tasks that were interrupted were more likely to be recalled after a delay than tasks that were not interrupted. Much of the literature on interruptions has been devoted to examining this effect, although more recently interruptions have been used to choose between competing designs for interfaces to complex devices. However, none of this work looks at what makes some interruptions disruptive and some not. This series of experiments uses a novel computer-based adventure-game methodology to investigate the effects of the length of the interruption, the similarity of the interruption to the main task, and the complexity of processing demanded by the interruption. It is concluded that subjects make use of some form of nonarticulatory memory which is not affected by the length of the interruption. It is affected by processing similar material however, and by a complex mentalarithmetic task which makes large demands on working memory.", "title": "" } ]
[ { "docid": "91cb5e59cb11f7d5ba3300cf4f00ff5d", "text": "Blockchain is a technology uniquely suited to support massive number of transactions and smart contracts within the Internet of Things (IoT) ecosystem, thanks to the decentralized accounting mechanism. In a blockchain network, the states of the accounts are stored and updated by the validator nodes, interconnected in a peer-to-peer fashion. IoT devices are characterized by relatively low computing capabilities and low power consumption, as well as sporadic and low-bandwidth wireless connectivity. An IoT device connects to one or more validator nodes to observe or modify the state of the accounts. In order to interact with the most recent state of accounts, a device needs to be synchronized with the blockchain copy stored by the validator nodes. In this work, we describe general architectures and synchronization protocols that enable synchronization of the IoT endpoints to the blockchain, with different communication costs and security levels. We model and analytically characterize the traffic generated by the synchronization protocols, and also investigate the power consumption and synchronization trade-off via numerical simulations. To the best of our knowledge, this is the first study that rigorously models the role of wireless connectivity in blockchain-powered IoT systems.", "title": "" }, { "docid": "77c2f29beedae831d3bf771bb0388484", "text": "Two types of 3-CRU translational parallel manipulators (TPM) are compared and investigated in this paper. The two 3-CRU TPMs have identical kinematics, but some differences exist in terms of the chain arrangement, one is in fully symmetrical chain arrangement, called symmetrical 3-CRU TPM, the other one is in asymmetrical chain arrangement, called asymmetrical 3-CRU TPM. This paper focuses on discussing the differences between the two 3-CRU TPMs in kinematics, workspace and performance. This study provides insights into parallel manipulators with identical kinematics arranged differently.", "title": "" }, { "docid": "657f020ce1977882fc80ba9b6c0db4b3", "text": "BACKGROUND\nThe delivery of effective, high-quality patient care is a complex activity. It demands health and social care professionals collaborate in an effective manner. Research continues to suggest that collaboration between these professionals can be problematic. Interprofessional education (IPE) offers a possible way to improve interprofessional collaboration and patient care.\n\n\nOBJECTIVES\nTo assess the effectiveness of IPE interventions compared to separate, profession-specific education interventions; and to assess the effectiveness of IPE interventions compared to no education intervention.\n\n\nSEARCH METHODS\nFor this update we searched the Cochrane Effective Practice and Organisation of Care Group specialised register, MEDLINE and CINAHL, for the years 2006 to 2011. We also handsearched the Journal of Interprofessional Care (2006 to 2011), reference lists of all included studies, the proceedings of leading IPE conferences, and websites of IPE organisations.\n\n\nSELECTION CRITERIA\nRandomised controlled trials (RCTs), controlled before and after (CBA) studies and interrupted time series (ITS) studies of IPE interventions that reported objectively measured or self reported (validated instrument) patient/client or healthcare process outcomes.\n\n\nDATA COLLECTION AND ANALYSIS\nAt least two review authors independently assessed the eligibility of potentially relevant studies. 
For included studies, at least two review authors extracted data and assessed study quality. A meta-analysis of study outcomes was not possible due to heterogeneity in study designs and outcome measures. Consequently, the results are presented in a narrative format.\n\n\nMAIN RESULTS\nThis update located nine new studies, which were added to the six studies from our last update in 2008. This review now includes 15 studies (eight RCTs, five CBA and two ITS studies). All of these studies measured the effectiveness of IPE interventions compared to no educational intervention. Seven studies indicated that IPE produced positive outcomes in the following areas: diabetes care, emergency department culture and patient satisfaction; collaborative team behaviour and reduction of clinical error rates for emergency department teams; collaborative team behaviour in operating rooms; management of care delivered in cases of domestic violence; and mental health practitioner competencies related to the delivery of patient care. In addition, four of the studies reported mixed outcomes (positive and neutral) and four studies reported that the IPE interventions had no impact on either professional practice or patient care.\n\n\nAUTHORS' CONCLUSIONS\nThis updated review reports on 15 studies that met the inclusion criteria (nine studies from this update and six studies from the 2008 update). Although these studies reported some positive outcomes, due to the small number of studies and the heterogeneity of interventions and outcome measures, it is not possible to draw generalisable inferences about the key elements of IPE and its effectiveness. To improve the quality of evidence relating to IPE and patient outcomes or healthcare process outcomes, the following three gaps will need to be filled: first, studies that assess the effectiveness of IPE interventions compared to separate, profession-specific interventions; second, RCT, CBA or ITS studies with qualitative strands examining processes relating to the IPE and practice changes; third, cost-benefit analyses.", "title": "" }, { "docid": "4d42e42469fcead51969f3e642920abc", "text": "In this paper, we present a dual-band antenna for Long Term Evolution (LTE) handsets. The proposed antenna is composed of a meandered monopole operating in the 700 MHz band and a parasitic element which radiates in the 2.5–2.7 GHz band. Two identical antennas are then closely positioned on the same 120×50 mm2 ground plane (Printed Circuit Board) which represents a modern-size PDA-mobile phone. To enhance the port-to-port isolation of the antennas, a neutralization technique is implemented between them. Scattering parameters, radiations patterns and total efficiencies are presented to illustrate the performance of the antenna-system.", "title": "" }, { "docid": "4812ae9ee481b8c4b4f74b4ab01f3e1b", "text": "Recent work has shown how to train Convolutional Neural Networks (CNNs) rapidly on large image datasets [1], then transfer the knowledge gained from these models to a variety of tasks [2]. Following [3], in this work, we demonstrate similar scalability and transfer for Recurrent Neural Networks (RNNs) for Natural Language tasks. By utilizing mixed precision arithmetic and a 32k batch size distributed across 128 NVIDIA Tesla V100 GPUs, we are able to train a character-level 4096-dimension multiplicative LSTM (mLSTM) [4] for unsupervised text reconstruction over 3 epochs of the 40 GB Amazon Reviews dataset [5] in four hours. 
This runtime compares favorably with previous work taking one month to train the same size and configuration for one epoch over the same dataset [3]. Converging large batch RNN models can be challenging. Recent work has suggested scaling the learning rate as a function of batch size, but we find that simply scaling the learning rate as a function of batch size leads either to significantly worse convergence or immediate divergence for this problem. We provide a learning rate schedule that allows our model to converge with a 32k batch size. Since our model converges over the Amazon Reviews dataset in hours, and our compute requirement of 128 Tesla V100 GPUs, while substantial, is commercially available, this work opens up large scale unsupervised NLP training to most commercial applications and deep learning researchers (our code is publicly available at https://github.com/NVIDIA/sentiment-discovery; a model can be trained over most public or private text datasets overnight).", "title": "" }, { "docid": "d1b3534f0b6e721c4835188bc4b3f0b6", "text": "The interaction of multiple autonomous agents gives rise to highly dynamic and nondeterministic environments, contributing to the complexity in applications such as automated financial markets, smart grids, or robotics. Due to the sheer number of situations that may arise, it is not possible to foresee and program the optimal behaviour for all agents beforehand. Consequently, it becomes essential for the success of the system that the agents can learn their optimal behaviour and adapt to new situations or circumstances. The past two decades have seen the emergence of reinforcement learning, both in single and multiagent settings, as a strong, robust and adaptive learning paradigm. Progress has been substantial, and a wide range of algorithms are now available. An important challenge in the domain of multi-agent learning is to gain qualitative insights into the resulting system dynamics. In the past decade, tools and methods from evolutionary game theory have been successfully employed to study multi-agent learning dynamics formally in strategic interactions. This article surveys the dynamical models that have been derived for various multi-agent reinforcement learning algorithms, making it possible to study and compare them qualitatively. Furthermore, new learning algorithms that have been introduced using these evolutionary game theoretic tools are reviewed. The evolutionary models can be used to study complex strategic interactions. Examples of such analysis are given for the domains of automated trading in stock markets and collision avoidance in multi-robot systems. The paper provides a roadmap on the progress that has been achieved in analysing the evolutionary dynamics of multi-agent learning by highlighting the main results and accomplishments.", "title": "" }, { "docid": "015449616e6a0526ea3b1f79420bfb26", "text": "Online fraud, described as dubious business transactions and deceit carried out electronically, has reached an alarming rate worldwide and has become a major challenge to organizations and governments. In the Gulf region, particularly Saudi Arabia, where there is high Internet penetration and many online financial transactions, the need to put effective measures to deter, prevent and detect online fraud, has become imperative. This paper examines how online fraud control measures in financial institutions in Saudi Arabia are organized and managed. 
Through qualitative interviews with experts in Saudi Arabia, the study found that people’s perceptions (from their moral, social, cultural and religious backgrounds) have significant effect on awareness and fraud prevention and detection. It also argues that technological measures alone may not be adequate. Deterrence, prevention, detection and remedy activities, together making General Deterrence Theory (GDT) as an approach for systematically and effectively combatting online fraud in Saudi.", "title": "" }, { "docid": "b35efe68d99331d481e439ae8fbb4a64", "text": "Semantic matching (SM) for textual information can be informally defined as the task of effectively modeling text matching using representations more complex than those based on simple and independent set of surface forms of words or stems (typically indicated as bag-of-words). In this perspective, matching named entities (NEs) implies that the associated model can both overcomes mismatch between different representations of the same entities, e.g., George H. W. Bush vs. George Bush, and carry out entity disambiguation to avoid incorrect matches between different but similar entities, e.g., the entity above with his son George W. Bush. This means that both the context and structure of NEs must be taken into account in the IR model. SM becomes even more complex when attempting to match the shared semantics between two larger pieces of text, e.g., phrases or clauses, as there is currently no theory indicating how words should be semantically composed for deriving the meaning of text. The complexity above has traditionally led to define IR models based on bag-of-word representations in the vector space model (VSM), where (i) the necessary structure is minimally taken into account by considering n-grams or phrases; and (ii) the matching coverage is increased by projecting text in latent semantic spaces or alternatively by applying query expansion. Such methods introduce a considerable amount of noise, which negatively balances the benefit of achieving better coverage in most cases, thus producing no IR system improvement. In the last decade, a new class of semantic matching approaches based on the so-called Kernel Methods (KMs) for structured data (see e.g., [4]) have been proposed. KMs also adopt scalar products (which, in this context, take the names of kernel functions) in VSM. However, KMs introduce two new important aspects: • the scalar product is implicitly computed using smart techniques, which enable the use of huge feature spaces, e.g., all possible skip n-grams; and • KMs are typically applied within supervised algorithms, e.g., SVMs, which, exploiting training data, can filter out irrelevant features and noise. In this talk, we will briefly introduce and summarize, the latest results on kernel methods for semantic matching by focusing on structural kernels. These can be applied to match syntactic and/or semantic representations of text shaped as trees. Several variants are available: the Syntactic Tree Kernels (STK), [2], the String Kernels (SK) [5] and the Partial Tree Kernels (PTK) [4]. Most interestingly, we will present tree kernels exploiting SM between words contained in a text structure, i.e., the Syntactic Semantic Tree Kernels (SSTK) [1] and the Smoothed Partial Tree Kernels (SPTK) [3]. These extend STK and PTK by allowing for soft matching (i.e., via similarity computation) between nodes associated with different but related labels, e.g., synonyms. 
The node similarity can be derived from manually annotated resources, e.g., WordNet or Wikipedia, as well as using corpus-based clustering approaches, e.g., latent semantic analysis (LSA). An example of the use of such kernels for question classification in the question answering domain will illustrate the potentials of their structural similarity approach.", "title": "" }, { "docid": "5087353b4888832c2c801f06c94d3c67", "text": "Many Automatic Question Generation (AQG) approaches have been proposed focusing on reading comprehension support; however, none of them addressed academic writing. We conducted a large-scale case study with 25 supervisors and 36 research students enroled in an Engineering Research Method course. We investigated trigger questions, as a form of feedback, produced by supervisors, and how they support these students’ literature review writing. In this paper, we identified the most frequent question types according to Graesser and Person’s Question Taxonomy and discussed how the human experts generate such questions from the source text. Finally, we proposed a more practical Automatic Question Generation Framework for supporting academic writing in engineering education.", "title": "" }, { "docid": "c002eab17c87343b5d138b34e3be73f3", "text": "Finding semantically rich and computer-understandable representations for textual dialogues, utterances and words is crucial for dialogue systems (or conversational agents), as their performance mostly depends on understanding the context of conversations. In recent research approaches, responses have been generated utilizing a decoder architecture, given the distributed vector representation (embedding) of the current conversation. In this paper, the utilization of embeddings for answer retrieval is explored by using Locality-Sensitive Hashing Forest (LSH Forest), an Approximate Nearest Neighbor (ANN) model, to find similar conversations in a corpus and rank possible candidates. Experimental results on the well-known Ubuntu Corpus (in English) and a customer service chat dataset (in Dutch) show that, in combination with a candidate selection method, retrieval-based approaches outperform generative ones and reveal promising future research directions towards the usability of such a system.", "title": "" }, { "docid": "099947c3bf9595d98f01398727fa413e", "text": "The RoboCup Middle Size League competition is a standard real-world test bed for autonomous multi-robot control, robot vision and other relative research subjects. In the past decade, omnidirectional vision system has become one of the most important sensors for the RoboCup soccer robots, for it can provide a 360° view of the robot's surrounding environment in a single image. The robot can use it for tracking and self-localization which very important for robot's control, strategy, and coordination. This paper will discuss the vision system to detect ball, goals, and calculate the angle and real distance from those objects. Based on the research that has been done, the system can detect the ball and the goal, and calculate the angle and the actual distance with a maximum error distance is 5%.", "title": "" }, { "docid": "27bc95568467efccb3e6cc185e905e42", "text": "Major studios and independent production firms (Indies) often have to select or “greenlight” a portfolio of scripts to turn into movies. Despite the huge financial risk at stake, there is currently no risk management tool they can use to aid their decisions, even though such a tool is sorely needed. 
In this paper, we developed a forecasting and risk management tool, based on movies scripts, to aid movie studios and production firms in their green-lighting decisions. The methodology developed can also assist outside investors if they have access to the scripts. Building upon and extending the previous literature, we extracted three levels of textual information (genre/content, bag-of-words, and semantics) from movie scripts. We then incorporate these textual variables as predictors, together with the contemplated production budget, into a BART-QL (Bayesian Additive Regression Tree for Quasi-Linear) model to obtain the posterior predictive distributions, rather than point forecasts, of the box office revenues for the corresponding movies. We demonstrate how the predictive distributions of box office revenues can potentially be used to help movie producers intelligently select their movie production portfolios based on their risk preferences, and we describe an illustrative analysis performed for an independent production firm.", "title": "" }, { "docid": "40cea15a4fbe7f939a490ea6b6c9a76a", "text": "An application provider leases resources (i.e., virtual machine instances) of variable configurations from a IaaS provider over some lease duration (typically one hour). The application provider (i.e., consumer) would like to minimize their cost while meeting all service level obligations (SLOs). The mechanism of adding and removing resources at runtime is referred to as autoscaling. The process of autoscaling is automated through the use of a management component referred to as an autoscaler. This paper introduces a novel autoscaling approach in which both cloud and application dynamics are modeled in the context of a stochastic, model predictive control problem. The approach exploits trade-off between satisfying performance related objectives for the consumer's application while minimizing their cost. Simulation results are presented demonstrating the efficacy of this new approach.", "title": "" }, { "docid": "81bfa44ec29532d07031fa3b74ba818d", "text": "We propose a recurrent extension of the Ladder networks [22] whose structure is motivated by the inference required in hierarchical latent variable models. We demonstrate that the recurrent Ladder is able to handle a wide variety of complex learning tasks that benefit from iterative inference and temporal modeling. The architecture shows close-to-optimal results on temporal modeling of video data, competitive results on music modeling, and improved perceptual grouping based on higher order abstractions, such as stochastic textures and motion cues. We present results for fully supervised, semi-supervised, and unsupervised tasks. The results suggest that the proposed architecture and principles are powerful tools for learning a hierarchy of abstractions, learning iterative inference and handling temporal information.", "title": "" }, { "docid": "e96dfb8aca4aa06b759b607ae1ffd005", "text": "This paper describes scalable convex optimization methods for phase retrieval. The main characteristics of these methods are the cheap per-iteration complexity and the low-memory footprint. With a variant of the original PhaseLift formulation, we first illustrate how to leverage the scalable Frank-Wolfe (FW) method (also known as the conditional gradient algorithm), which requires a tuning parameter. We demonstrate that we can estimate the tuning parameter of the FW algorithm directly from the measurements, with rigorous theoretical guarantees. 
We then illustrate numerically that recent advances in universal primal-dual convex optimization methods offer significant scalability improvements over the FW method, by recovering full HD resolution color images from their quadratic measurements.", "title": "" }, { "docid": "8aefd572e089cb29c13cefc6e59bdda8", "text": "Different linguistic perspectives cause many diverse segmentation criteria for Chinese word segmentation (CWS). Most existing methods focus on improving the performance for each single criterion. However, it is interesting to exploit these different criteria and mine their common underlying knowledge. In this paper, we propose adversarial multi-criteria learning for CWS by integrating shared knowledge from multiple heterogeneous segmentation criteria. Experiments on eight corpora with heterogeneous segmentation criteria show that the performance of each corpus obtains a significant improvement, compared to single-criterion learning. Source codes of this paper are available on GitHub.", "title": "" }, { "docid": "e82e44e851486b557948a63366486fef", "text": "Combinatorial and algorithmic aspects of identifying codes in graphs Abstract: An identifying code is a set of vertices of a graph such that, on the one hand, each vertex out of the code has a neighbour in the code (the domination property), and, on the other hand, all vertices have a distinct neighbourhood within the code (the separation property). In this thesis, we investigate combinatorial and algorithmic aspects of identifying codes. For the combinatorial part, we first study extremal questions by giving a complete characterization of all finite undirected graphs having their order minus one as the minimum size of an identifying code. We also characterize finite directed graphs, infinite undirected graphs and infinite oriented graphs having their whole vertex set as the unique identifying code. These results answer open questions that were previously studied in the literature. We then study the relationship between the minimum size of an identifying code and the maximum degree of a graph. In particular, we give several upper bounds for this parameter as a function of the order and the maximum degree. These bounds are obtained using two techniques. The first one consists in the construction of independent sets satisfying certain properties, and the second one is the combination of two tools from the probabilistic method: the Lovász local lemma and a Chernoff bound. We also provide constructions of graph families related to this type of upper bounds, and we conjecture that they are optimal up to an additive constant. We also present new lower and upper bounds for the minimum cardinality of an identifying code in specific graph classes. We study graphs of girth at least 5 and of given minimum degree by showing that the combination of these two parameters has a strong influence on the minimum size of an identifying code. We apply these results to random regular graphs. Then, we give lower bounds on the size of a minimum identifying code of interval and unit interval graphs. Finally, we prove several lower and upper bounds for this parameter when considering line graphs. The latter question is tackled using the new notion of an edge-identifying code. For the algorithmic part, it is known that the decision problem associated with the notion of an identifying code is NP-complete, even for restricted graph classes. We extend the known results to other classes such as split graphs, co-bipartite graphs, line graphs or interval graphs. 
To this end, we propose polynomial-time reductions from several classical hard algorithmic problems. These results show that in many graph classes, the identifying code problem is computationally more difficult than related problems (such as the dominating set problem). Furthermore, we extend the knowledge of the approximability of the optimization problem associated to identifying codes. We extend the known result of NP-hardness of approximating this problem within a sub-logarithmic factor (as a function of the instance graph) to bipartite, split and co-bipartite graphs, respectively. We also extend the known result of its APX-hardness for graphs of given maximum degree to a subclass of split graphs, bipartite graphs of maximum degree 4 and line graphs. Finally, we show the existence of a PTAS algorithm for unit interval graphs.", "title": "" }, { "docid": "7143c97b6ea484566f521e36a3fa834e", "text": "To determine the reliability and concurrent validity of a visual analogue scale (VAS) for disability as a single-item instrument measuring disability in chronic pain patients was the objective of the study. For the reliability study a test-retest design and for the validity study a cross-sectional design was used. A general rehabilitation centre and a university rehabilitation centre was the setting for the study. The study population consisted of patients over 18 years of age, suffering from chronic musculoskeletal pain; 52 patients in the reliability study, 344 patients in the validity study. Main outcome measures were as follows. Reliability study: Spearman's correlation coefficients (rho values) of the test and retest data of the VAS for disability; validity study: rho values of the VAS disability scores with the scores on four domains of the Short-Form Health Survey (SF-36) and VAS pain scores, and with Roland-Morris Disability Questionnaire scores in chronic low back pain patients. Results were as follows: in the reliability study rho values varied from 0.60 to 0.77; and in the validity study rho values of VAS disability scores with SF-36 domain scores varied from 0.16 to 0.51, with Roland-Morris Disability Questionnaire scores from 0.38 to 0.43 and with VAS pain scores from 0.76 to 0.84. The conclusion of the study was that the reliability of the VAS for disability is moderate to good. Because of a weak correlation with other disability instruments and a strong correlation with the VAS for pain, however, its validity is questionable.", "title": "" }, { "docid": "95d624c86fcd86377e46738689bb18a8", "text": "EEG desynchronization is a reliable correlate of excited neural structures of activated cortical areas. EEG synchronization within the alpha band may be an electrophysiological correlate of deactivated cortical areas. Such areas are not processing sensory information or motor output and can be considered to be in an idling state. One example of such an idling cortical area is the enhancement of mu rhythms in the primary hand area during visual processing or during foot movement. In both circumstances, the neurons in the hand area are not needed for visual processing or preparation for foot movement. As a result of this, an enhanced hand area mu rhythm can be observed.", "title": "" }, { "docid": "c4332dfb8e8117c3deac7d689b8e259b", "text": "Learning through experience is time-consuming, inefficient and often bad for your cortisol levels. To address this problem, a number of recently proposed teacher-student methods have demonstrated the benefits of private tuition, in which a single model learns from an ensemble of more experienced tutors. Unfortunately, the cost of such supervision restricts good representations to a privileged minority. 
Unsupervised learning can be used to lower tuition fees, but runs the risk of producing networks that require extracurriculum learning to strengthen their CVs and create their own LinkedIn profiles. Inspired by the logo on a promotional stress ball at a local recruitment fair, we make the following three contributions. First, we propose a novel almost no supervision training algorithm that is effective, yet highly scalable in the number of student networks being supervised, ensuring that education remains affordable. Second, we demonstrate our approach on a typical use case: learning to bake, developing a method that tastily surpasses the current state of the art. Finally, we provide a rigorous quantitative analysis of our method, proving that we have access to a calculator. Our work calls into question the long-held dogma that life is the best teacher. Give a student a fish and you feed them for a day, teach a student to gatecrash seminars and you feed them until the day they move to Google.", "title": "" } ]
scidocsrr
0d5feece78418ba9d1245678e4c8f034
Delving Deeper into Convolutional Networks for Learning Video Representations
[ { "docid": "6af09f57f2fcced0117dca9051917a0d", "text": "We present a novel per-dimension learning rate method for gradient descent called ADADELTA. The method dynamically adapts over time using only first order information and has minimal computational overhead beyond vanilla stochastic gradient descent. The method requires no manual tuning of a learning rate and appears robust to noisy gradient information, different model architecture choices, various data modalities and selection of hyperparameters. We show promising results compared to other methods on the MNIST digit classification task using a single machine and on a large scale voice dataset in a distributed cluster environment.", "title": "" }, { "docid": "87b67f9ed23c27a71b6597c94ccd6147", "text": "Recently, deep learning approaches, especially deep Convolutional Neural Networks (ConvNets), have achieved overwhelming accuracy with fast processing speed for image classification. Incorporating temporal structure with deep ConvNets for video representation becomes a fundamental problem for video content analysis. In this paper, we propose a new approach, namely Hierarchical Recurrent Neural Encoder (HRNE), to exploit temporal information of videos. Compared to recent video representation inference approaches, this paper makes the following three contributions. First, our HRNE is able to efficiently exploit video temporal structure in a longer range by reducing the length of input information flow, and compositing multiple consecutive inputs at a higher level. Second, computation operations are significantly lessened while attaining more non-linearity. Third, HRNE is able to uncover temporal transitions between frame chunks with different granularities, i.e. it can model the temporal transitions between frames as well as the transitions between segments. We apply the new method to video captioning where temporal information plays a crucial role. Experiments demonstrate that our method outperforms the state-of-the-art on video captioning benchmarks.", "title": "" } ]
[ { "docid": "fb904fc99acf8228ae7585e29074f96c", "text": "One of the biggest problems in manufacturing is the failure of machine tools due to loss of surface material in cutting operations like drilling and milling. Carrying on the process with a dull tool may damage the workpiece material fabricated. On the other hand, it is unnecessary to change the cutting tool if it is still able to continue cutting operation. Therefore, an effective diagnosis mechanism is necessary for the automation of machining processes so that production loss and downtime can be avoided. This study concerns with the development of a tool wear condition-monitoring technique based on a two-stage fuzzy logic scheme. For this, signals acquired from various sensors were processed to make a decision about the status of the tool. In the first stage of the proposed scheme, statistical parameters derived from thrust force, machine sound (acquired via a very sensitive microphone) and vibration signals were used as inputs to fuzzy process; and the crisp output values of this process were then taken as the input parameters of the second stage. Conclusively, outputs of this stage were taken into a threshold function, the output of which is used to assess the condition of the tool. r 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "0be24a284a7490b709bbbdfea458b211", "text": "This article provides a meta-analytic review of the relationship between the quality of leader-member exchanges (LMX) and citizenship behaviors performed by employees. Results based on 50 independent samples (N = 9,324) indicate a moderately strong, positive relationship between LMX and citizenship behaviors (rho = .37). The results also support the moderating role of the target of the citizenship behaviors on the magnitude of the LMX-citizenship behavior relationship. As expected, LMX predicted individual-targeted behaviors more strongly than it predicted organizational targeted behaviors (rho = .38 vs. rho = .31), and the difference was statistically significant. Whether the LMX and the citizenship behavior ratings were provided by the same source or not also influenced the magnitude of the correlation between the 2 constructs.", "title": "" }, { "docid": "0939a703cb2eeb9396c4e681f95e1e4d", "text": "Learning-based methods for visual segmentation have made progress on particular types of segmentation tasks, but are limited by the necessary supervision, the narrow definitions of fixed tasks, and the lack of control during inference for correcting errors. To remedy the rigidity and annotation burden of standard approaches, we address the problem of few-shot segmentation: given few image and few pixel supervision, segment any images accordingly. We propose guided networks, which extract a latent task representation from any amount of supervision, and optimize our architecture end-to-end for fast, accurate few-shot segmentation. Our method can switch tasks without further optimization and quickly update when given more guidance. We report the first results for segmentation from one pixel per concept and show real-time interactive video segmentation. Our unified approach propagates pixel annotations across space for interactive segmentation, across time for video segmentation, and across scenes for semantic segmentation. Our guided segmentor is state-of-the-art in accuracy for the amount of annotation and time. 
See http://github.com/shelhamer/revolver for code, models, and more details.", "title": "" }, { "docid": "341e0b7d04b333376674dac3c0888f50", "text": "Software archives contain historical information about the development process of a software system. Using data mining techniques rules can be extracted from these archives. In this paper we discuss how standard visualization techniques can be applied to interactively explore these rules. To this end we extended the standard visualization techniques for association rules and sequence rules to also show the hierarchical order of items. Clusters and outliers in the resulting visualizations provide interesting insights into the relation between the temporal development of a system and its static structure. As an example we look at the large software archive of the MOZILLA open source project. Finally we discuss what kind of regularities and anomalies we found and how these can then be leveraged to support software engineers.", "title": "" }, { "docid": "5e858796f025a9e2b91109835d827c68", "text": "Several divergent application protocols have been proposed for Internet of Things (IoT) solutions including CoAP, REST, XMPP, AMQP, MQTT, DDS, and others. Each protocol focuses on a specific aspect of IoT communications. The lack of a protocol that can handle the vertical market requirements of IoT applications including machine-to-machine, machine-to-server, and server-to-server communications has resulted in a fragmented market between many protocols. In turn, this fragmentation is a main hindrance in the development of new services that require the integration of multiple IoT services to unlock new capabilities and provide horizontal integration among services. In this work, after articulating the major shortcomings of the current IoT protocols, we outline a rule-based intelligent gateway that bridges the gap between existing IoT protocols to enable the efficient integration of horizontal IoT services. While this intelligent gateway enhances the gloomy picture of protocol fragmentation in the context of IoT, it does not address the root cause of this fragmentation, which lies in the inability of the current protocols to offer a wide range of QoS guarantees. To offer a solution that stems the root cause of this protocol fragmentation issue, we propose a generic IoT protocol that is flexible enough to address the IoT vertical market requirements. In this regard, we enhance the baseline MQTT protocol by allowing it to support rich QoS features by exploiting a mix of IP multicasting, intelligent broker queuing management, and traffic analytics techniques. Our initial evaluation of the lightweight enhanced MQTT protocol reveals significant improvement over the baseline protocol in terms of the delay performance.", "title": "" }, { "docid": "54260da63de773aa9374ab00917c2977", "text": "A slew rate controlled output driver adopting delay compensation method is implemented using 0.18 µm CMOS process for storage device interface. Phase-Locked Loop is used to generate compensation current and constant delay time. Compensation current reduces the slew rate variation over process, voltage and temperature variation in output driver. To generate constant delay time, the replica of VCO in PLL is used in output driver's slew rate control block. That reduces the slew rate variation over load capacitance variation. That has less 25% variation at slew rate than that of conventional output driver. 
The proposed output driver can satisfy UDMA100 interface which specify load capacitance as 15 ∼ 40pF and slew rate as 0.4 ∼ 1.0[V/ns].", "title": "" }, { "docid": "06ab903f3de4c498e1977d7d0257f8f3", "text": "BACKGROUND\nthe analysis of microbial communities through dna sequencing brings many challenges: the integration of different types of data with methods from ecology, genetics, phylogenetics, multivariate statistics, visualization and testing. With the increased breadth of experimental designs now being pursued, project-specific statistical analyses are often needed, and these analyses are often difficult (or impossible) for peer researchers to independently reproduce. The vast majority of the requisite tools for performing these analyses reproducibly are already implemented in R and its extensions (packages), but with limited support for high throughput microbiome census data.\n\n\nRESULTS\nHere we describe a software project, phyloseq, dedicated to the object-oriented representation and analysis of microbiome census data in R. It supports importing data from a variety of common formats, as well as many analysis techniques. These include calibration, filtering, subsetting, agglomeration, multi-table comparisons, diversity analysis, parallelized Fast UniFrac, ordination methods, and production of publication-quality graphics; all in a manner that is easy to document, share, and modify. We show how to apply functions from other R packages to phyloseq-represented data, illustrating the availability of a large number of open source analysis techniques. We discuss the use of phyloseq with tools for reproducible research, a practice common in other fields but still rare in the analysis of highly parallel microbiome census data. We have made available all of the materials necessary to completely reproduce the analysis and figures included in this article, an example of best practices for reproducible research.\n\n\nCONCLUSIONS\nThe phyloseq project for R is a new open-source software package, freely available on the web from both GitHub and Bioconductor.", "title": "" }, { "docid": "7533347e8c5daf17eb09e64db0fa4394", "text": "Android has become the most popular smartphone operating system. This rapidly increasing adoption of Android has resulted in significant increase in the number of malwares when compared with previous years. There exist lots of antimalware programs which are designed to effectively protect the users’ sensitive data in mobile systems from such attacks. In this paper, our contribution is twofold. Firstly, we have analyzed the Android malwares and their penetration techniques used for attacking the systems and antivirus programs that act against malwares to protect Android systems. We categorize many of the most recent antimalware techniques on the basis of their detection methods. We aim to provide an easy and concise view of the malware detection and protection mechanisms and deduce their benefits and limitations. Secondly, we have forecast Android market trends for the year up to 2018 and provide a unique hybrid security solution and take into account both the static and dynamic analysis an android application. Keywords—Android; Permissions; Signature", "title": "" }, { "docid": "14d8bf0bdf519cf0197098d56e6a0c49", "text": "Overlapped subarray networks produce flat-topped sector patterns with low sidelobes that suppress grating lobes outside of the main beam of the subarray pattern. 
They are typically used in limited scan applications, where it is desired to minimize the number of controls required to steer the beam. However, the architecture of an overlapped subarray antenna includes many signal crossovers and a wide variation in splitting/combining ratios, which make it difficult to maintain required error tolerances. This paper presents the design considerations and results for an overlapped subarray radar antenna, including a custom subarray weighting function and the corresponding circuit design and fabrication. Measured pattern results will be shown for a prototype design compared with desired patterns.", "title": "" }, { "docid": "4981fa05fafb4076bcfdc4ff5bf3aa6c", "text": "Steganography is an important area of research in recent years involving a number of applications. It is the science of embedding information into the cover image viz., text, video,and image (payload) without causing statistically significant modification to the cover image. The modern secure image steganography presents a challenging task of transferring the embedded information to the destination without being detected.This paper deals with hiding text in an image file using Least Significant Bit (LSB) based Steganography, Discrete Cosine Transform (DCT) based Steganography and Discrete Wavelet Transform (DWT) based steganography.The LSB algorithm is implemented in spatial domain in which the payload bits are embedded into the least significant bits of cover image to derive the stegoimage whereas DCT & DWT algorithm are implemented in frequency domain in which the stego-image is transformed from spatial domain to the frequency domain and the payload bits are embedded into the frequency components of the cover image.The performance and comparison of these three techniques is evaluated on the basis of the parameters MSE, PSNR, NC, processing time, Capacity& Robustness.", "title": "" }, { "docid": "8ae21da19b8afabb941bc5bb450434a9", "text": "A 7-month-old child presented with imperforate anus, penoscrotal hypospadias and transposition, and a midline mucosa-lined perineal mass. At surgery the mass was found to be supplied by the median sacral artery. It was excised and the anorectal malformation was repaired by posterior sagittal anorectoplasty. Histologically the mass revealed well-differentiated colonic tissue. The final diagnosis was well-differentiated sacrococcygeal teratoma in association with anorectal malformation.", "title": "" }, { "docid": "d0f71092df2eab53e7f32eff1cb7af2e", "text": "Topic modeling of textual corpora is an important and challenging problem. In most previous work, the “bag-of-words” assumption is usually made which ignores the ordering of words. This assumption simplifies the computation, but it unrealistically loses the ordering information and the semantic of words in the context. In this paper, we present a Gaussian Mixture Neural Topic Model (GMNTM) which incorporates both the ordering of words and the semantic meaning of sentences into topic modeling. Specifically, we represent each topic as a cluster of multi-dimensional vectors and embed the corpus into a collection of vectors generated by the Gaussian mixture model. Each word is affected not only by its topic, but also by the embedding vector of its surrounding words and the context. The Gaussian mixture components and the topic of documents, sentences and words can be learnt jointly. Extensive experiments show that our model can learn better topics and more accurate word distributions for each topic. 
Quantitatively, comparing to state-of-the-art topic modeling approaches, GMNTM obtains significantly better performance in terms of perplexity, retrieval accuracy and classification accuracy.", "title": "" }, { "docid": "31c62f403e6d7f06ff2ab028894346ff", "text": "Automated text summarization is important to for humans to better manage the massive information explosion. Several machine learning approaches could be successfully used to handle the problem. This paper reports the results of our study to compare the performance between neural networks and support vector machines for text summarization. Both models have the ability to discover non-linear data and are effective model when dealing with large datasets.", "title": "" }, { "docid": "bcf27c4f750ab74031b8638a9b38fd87", "text": "δ opioid receptor (DOR) was the first opioid receptor of the G protein‑coupled receptor family to be cloned. Our previous studies demonstrated that DOR is involved in regulating the development and progression of human hepatocellular carcinoma (HCC), and is involved in the regulation of the processes of invasion and metastasis of HCC cells. However, whether DOR is involved in the development and progression of drug resistance in HCC has not been reported and requires further elucidation. The aim of the present study was to investigate the expression levels of DOR in the drug‑resistant HCC BEL‑7402/5‑fluorouracil (BEL/FU) cell line, and its effects on drug resistance, in order to preliminarily elucidate the effects of DOR in HCC drug resistance. The results of the present study demonstrated that DOR was expressed at high levels in the BEL/FU cells, and the expression levels were higher, compared with those in normal liver cells. When the expression of DOR was silenced, the proliferation of the drug‑resistant HCC cells were unaffected. However, when the cells were co‑treated with a therapeutic dose of 5‑FU, the proliferation rate of the BEL/FU cells was significantly inhibited, a large number of cells underwent apoptosis, cell cycle progression was arrested and changes in the expression levels of drug‑resistant proteins were observed. Overall, the expression of DOR was upregulated in the drug‑resistant HCC cells, and its functional status was closely associated with drug resistance in HCC. Therefore, DOR may become a recognized target molecule with important roles in the clinical treatment of drug‑resistant HCC.", "title": "" }, { "docid": "bf2f9a0387de2b2aa3136a2879a07e83", "text": "Rich representations in reinforcement learning have been studied for the purpose of enabling generalization and making learning feasible in large state spaces. We introduce Object-Oriented MDPs (OO-MDPs), a representation based on objects and their interactions, which is a natural way of modeling environments and offers important generalization opportunities. We introduce a learning algorithm for deterministic OO-MDPs and prove a polynomial bound on its sample complexity. We illustrate the performance gains of our representation and algorithm in the well-known Taxi domain, plus a real-life videogame.", "title": "" }, { "docid": "b7c4cf9798325724c4094c617ed0b4f6", "text": "Today’s commodity disk drives, the basic unit of storage for computer systems large and small, are actually small computers, with a processor, memory and a network connection, in addition to the spinning magnetic material that stores the data. 
Large collections of data are becoming larger, and people are beginning to analyze, rather than simply store-and-forget, these masses of data. At the same time, advances in I/O performance have lagged the rapid development of commodity processor and memory technology. This paper describes the use of Active Disks to take advantage of the processing power on individual disk drives to run a carefully chosen portion of a relational database system. Moving a portion of the database processing to execute directly at the disk drives improves performance by: 1) dramatically reducing data traffic; and 2) exploiting the parallelism in large storage systems. It provides a new point of leverage to overcome the I/O bottleneck. This paper discusses how to map all the basic database operations select, project, and join onto an Active Disk system. The changes required are small and the performance gains are dramatic. A prototype based on the Postgres database system demonstrates a factor of 2x performance improvement on a small system using a portion of the TPC-D decision support benchmark, with the promise of larger improvements in more realistically-sized systems. Active Disk Architecture for Databases Erik Riedel1, Christos Faloutsos, David Nagle April 2000 CMU-CS-00-145 This research was sponsored by DARPA/ITO through ARPA Order D306, and issued by Indian Head Division, NSWC under contract N00174-96-0002. Partial funding was provided by the National Science Foundation under grants IRI-9625428, DMS-9873442, IIS9817496, and IIS-9910606. Additional funding was provided by donations from NEC and Intel. We are indebted to generous contributions from the member companies of the Parallel Data Consortium. At the time of this writing, these companies include Hewlett-Packard Laboratories, LSI Logic, Data General, EMC, Compaq, Intel, 3Com, Quantum, IBM, Seagate Technology, Hitachi, Infineon, Novell, and Wind River Systems. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of any supporting organization or the U.S. Government. 1. now with Hewlett-Packard Labs, riedel@hpl.hp.com", "title": "" }, { "docid": "532c53e24f6b1c691dd429b02c83049a", "text": "Endurance exercise training can promote an adaptive muscle fiber transformation and an increase of mitochondrial biogenesis by triggering scripted changes in gene expression. However, no transcription factor has yet been identified that can direct this process. We describe the engineering of a mouse capable of continuous running of up to twice the distance of a wild-type littermate. This was achieved by targeted expression of an activated form of peroxisome proliferator-activated receptor delta (PPARdelta) in skeletal muscle, which induces a switch to form increased numbers of type I muscle fibers. Treatment of wild-type mice with PPARdelta agonist elicits a similar type I fiber gene expression profile in muscle. Moreover, these genetically generated fibers confer resistance to obesity with improved metabolic profiles, even in the absence of exercise. 
These results demonstrate that complex physiologic properties such as fatigue, endurance, and running capacity can be molecularly analyzed and manipulated.", "title": "" }, { "docid": "3686072146ecb0a3972c9debdc73d88d", "text": "This paper exposes the Navigation and Control technology embedded in a recently commercialized micro Unmanned Aerial Vehicle (UAV), the AR.Drone, which cost and performance are unprecedented among any commercial product for mass markets. The system relies on state-of-the-art indoor navigation systems combining low-cost inertial sensors, computer vision techniques, sonar, and accounting for aerodynamics models.", "title": "" }, { "docid": "6090d8c6e8ef8532c5566908baa9a687", "text": "Cardiovascular diseases (CVD) are known to be the most widespread causes to death. Therefore, detecting earlier signs of cardiac anomalies is of prominent importance to ease the treatment of any cardiac complication or take appropriate actions. Electrocardiogram (ECG) is used by doctors as an important diagnosis tool and in most cases, it's recorded and analyzed at hospital after the appearance of first symptoms or recorded by patients using a device named holter ECG and analyzed afterward by doctors. In fact, there is a lack of systems able to capture ECG and analyze it remotely before the onset of severe symptoms. With the development of wearable sensor devices having wireless transmission capabilities, there is a need to develop real time systems able to accurately analyze ECG and detect cardiac abnormalities. In this paper, we propose a new CVD detection system using Wireless Body Area Networks (WBAN) technology. This system processes the captured ECG using filtering and Undecimated Wavelet Transform (UWT) techniques to remove noises and extract nine main ECG diagnosis parameters, then the system uses a Bayesian Network Classifier model to classify ECG based on its parameters into four different classes: Normal, Premature Atrial Contraction (PAC), Premature Ventricular Contraction (PVC) and Myocardial Infarction (MI). The experimental results on ECGs from real patients databases show that the average detection rate (TPR) is 96.1% for an average false alarm rate (FPR) of 1.3%.", "title": "" }, { "docid": "53b38576a378b7680a69bba1ebe971ba", "text": "The detection of symmetry axes through the optimization of a given symmetry measure, computed as a function of the mean-square error between the original and reflected images, is investigated in this paper. A genetic algorithm and an optimization scheme derived from the self-organizing maps theory are presented. The notion of symmetry map is then introduced. This transform allows us to map an object into a symmetry space where its symmetry properties can be analyzed. The locations of the different axes that globally and locally maximize the symmetry value can be obtained. The input data are assumed to be vector-valued, which allow to focus on either shape. color or texture information. Finally, the application to skin cancer diagnosis is illustrated and discussed.", "title": "" } ]
scidocsrr
57b1d04583d357d0389aec4d224e08fc
ConRec: A Software Framework for Context-Aware Recommendation Based on Dynamic and Personalized Context
[ { "docid": "e26c73004a3f29b1abbadd515a0ca748", "text": "The situation in which a choice is made is an important information for recommender systems. Context-aware recommenders take this information into account to make predictions. So far, the best performing method for context-aware rating prediction in terms of predictive accuracy is Multiverse Recommendation based on the Tucker tensor factorization model. However this method has two drawbacks: (1) its model complexity is exponential in the number of context variables and polynomial in the size of the factorization and (2) it only works for categorical context variables. On the other hand there is a large variety of fast but specialized recommender methods which lack the generality of context-aware methods.\n We propose to apply Factorization Machines (FMs) to model contextual information and to provide context-aware rating predictions. This approach results in fast context-aware recommendations because the model equation of FMs can be computed in linear time both in the number of context variables and the factorization size. For learning FMs, we develop an iterative optimization method that analytically finds the least-square solution for one parameter given the other ones. Finally, we show empirically that our approach outperforms Multiverse Recommendation in prediction quality and runtime.", "title": "" } ]
[ { "docid": "17cd41a64a845ba400ee5018eb899d15", "text": "Structured prediction requires searching over a combinatorial number of structures. To tackle it, we introduce SparseMAP: a new method for sparse structured inference, and its natural loss function. SparseMAP automatically selects only a few global structures: it is situated between MAP inference, which picks a single structure, and marginal inference, which assigns nonzero probability to all structures, including implausible ones. SparseMAP can be computed using only calls to a MAP oracle, making it applicable to problems with intractable marginal inference, e.g., linear assignment. Sparsity makes gradient backpropagation efficient regardless of the structure, enabling us to augment deep neural networks with generic and sparse structured hidden layers. Experiments in dependency parsing and natural language inference reveal competitive accuracy, improved interpretability, and the ability to capture natural language ambiguities, which is attractive for pipeline systems.", "title": "" }, { "docid": "cd82eb636078b633060a857a4eb2b47b", "text": "The importance of mobile application specific testing techniques and methods has been attracting much attention of software engineers over the past few years. This is due to the fact that mobile applications are different than traditional web and desktop applications, and more and more they are moving to being used in critical domains. Mobile applications require a different approach to application quality and dependability and require an effective testing approach to build high quality and more reliable software. We performed a systematic mapping study to categorize and to structure the research evidence that has been published in the area of mobile application testing techniques and challenges that they have reported. Seventy nine (79) empirical studies are mapped to a classification schema. Several research gaps are identified and specific key testing issues for practitioners are identified: there is a need for eliciting testing requirements early during development process; the need to conduct research in real-world development environments; specific testing techniques targeting application life-cycle conformance and mobile services testing; and comparative studies for security and usability testing.", "title": "" }, { "docid": "45df307e591eb146c1313686e345dede", "text": "A high-precision CMOS time-to-digital converter IC has been designed. Time interval measurement is based on a counter and two-level interpolation realized with stabilized delay lines. Reference recycling in the delay line improves the integral nonlinearity of the interpolator and enables the use of a low frequency reference clock. Multi-level interpolation reduces the number of delay elements and registers and lowers the power consumption. The load capacitor scaled parallel structure in the delay line permits very high resolution. An INL look-up table reduces the effect of the remaining nonlinearity. The digitizer measures time intervals from 0 to 204 /spl mu/s with 8.1 ps rms single-shot precision. The resolution of 12.2 ps from a 5-MHz external reference clock is divided by means of only 20 delay elements.", "title": "" }, { "docid": "52cde6191c79d085127045a62deacf31", "text": "Deep Reinforcement Learning methods have achieved state of the art performance in learning control policies for the games in the Atari 2600 domain. 
One of the important parameters in the Arcade Learning Environment (ALE, [Bellemare et al., 2013]) is the frame skip rate. It decides the granularity at which agents can control game play. A frame skip value of k allows the agent to repeat a selected action k number of times. The current state of the art architectures like Deep QNetwork (DQN, [Mnih et al., 2015]) and Dueling Network Architectures (DuDQN, [Wang et al., 2015]) consist of a framework with a static frame skip rate, where the action output from the network is repeated for a fixed number of frames regardless of the current state. In this paper, we propose a new architecture, Dynamic Frame skip Deep Q-Network (DFDQN) which makes the frame skip rate a dynamic learnable parameter. This allows us to choose the number of times an action is to be repeated based on the current state. We show empirically that such a setting improves the performance on relatively harder games like Seaquest.", "title": "" }, { "docid": "e7bb48197f48567e28c8657ec15fd6b1", "text": "Data mining is the computer based process of analyzing enormous sets of data and then extracting the meaning of the data. Data mining tools predict future trends, allowing business to make proactive, knowledge-driven decisions. Data mining tools can answer business questions that traditionally taken much time consuming to resolve. The huge amounts of data generated for prediction of heart disease are too complex and voluminous to be processed and analyzed by traditional methods. Data mining provides the methodology and technology to transform these mounds of data into useful information for decision making. By using data mining techniques it takes less time for the prediction of the disease with more accuracy. In this paper we survey different papers in which one or more algorithms of data mining used for the prediction of heart disease. Result from using neural networks is nearly 100% in one paper [10] and in [6]. So that the prediction by using data mining algorithm given efficient results. Applying data mining techniques to heart disease treatment data can provide as reliable performance as that achieved in diagnosing heart disease.", "title": "" }, { "docid": "012da2dd973e4b3fa94c46e417ed8d17", "text": "Sustainable HCI is now a recognized area of human-computer interaction drawing from a variety of disciplinary approaches, including the arts. How might HCI researchers working on sustainability productively understand the discourses and practices of ecologically engaged art as a means of enriching their own activities? We argue that an understanding of both the history of ecologically engaged art, and the art-historical and critical discourses surrounding it, provide a fruitful entry-point into a more critically aware sustainable HCI. We illustrate this through a consideration of frameworks from the arts, looking specifically at how these frameworks act more as generative devices than prescriptive recipes. Taking artistic influences seriously will require a concomitant rethinking of sustainable HCI standpoints - a potentially useful exercise for HCI research in general.", "title": "" }, { "docid": "dab8aa9867fb842c1e37570924a9d81c", "text": "Ellenberg indicator values (EIV) have been widely used to estimate habitat variables from floristic data and to predict vegetation composition based on habitat properties. 
Geographical Information Systems (GIS) and Digital Elevation Models (DEM) are valuable tools for studying the relationships between topographic and ecological characters of river systems. A 3-meter resolution DEM was derived for a. 3-km-long break section of the Szum River (SE Poland) from a 1:10,000 topographic map. Data on the diversity and ecological requirements of the local vascular flora were obtained while making floristic charts for 32 sections of the river valley (each 200 m long) and physical and chemical soil measurements; next, the data were translated into EIV. The correlations of the primary and secondary topographic attributes of the valley, species richness, and EIV (adapted for the Polish vascular flora) were assessed for all species recognized in each valley section. The total area and proportion of a flat area, mean slope, slope curvature, solar radiation (SRAD), and topographic wetness index (TWI) are the most important factors influencing local flora richness and diversity. The highest correlations were found for three ecological indicators, namely light, soil moisture, and soil organic content. The DEM seems to be useful in determination of correlations between topographic and ecological attributes along a minor river valley.", "title": "" }, { "docid": "b4c395b97f0482f3c1224ed6c8623ac2", "text": "The Scientific Computation Language (SCL) was designed mainly for developing computational models in education and research. This paper presents the justification for such a language, its relevant features, and a case study of a computational model implemented with the SCL.\n Development of the SCL language is part of the OOPsim project, which has had partial NSF support (CPATH). One of the goals of this project is to develop tools and approaches for designing and implementing computational models, emphasizing multi-disciplinary teams in the development process.\n A computational model is a computer implementation of the solution to a (scientific) problem for which a mathematical representation has been formulated. Developing a computational model consists of applying Computer Science concepts, principles and methods.\n The language syntax is defined at a higher level of abstraction than C, and includes language statements for improving program readability, debugging, maintenance, and correctness. The language design was influenced by Ada, Pascal, Eiffel, Java, C, and C++.\n The keywords have been added to maintain full compatibility with C. The SCL language translator is an executable program that is implemented as a one-pass language processor that generates C source code. The generated code can be integrated conveniently with any C and/or C++ library, on Linux and Windows (and MacOS). The semantics of SCL is informally defined to be the same C semantics.", "title": "" }, { "docid": "b181d6fd999fdcd8c5e5b52518998175", "text": "Hydrogels are used to create 3D microenvironments with properties that direct cell function. The current study demonstrates the versatility of hyaluronic acid (HA)-based hydrogels with independent control over hydrogel properties such as mechanics, architecture, and the spatial distribution of biological factors. Hydrogels were prepared by reacting furan-modified HA with bis-maleimide-poly(ethylene glycol) in a Diels-Alder click reaction. Biomolecules were photopatterned into the hydrogel by two-photon laser processing, resulting in spatially defined growth factor gradients. 
The Young's modulus was controlled by either changing the hydrogel concentration or the furan substitution on the HA backbone, thereby decoupling the hydrogel concentration from mechanical properties. Porosity was controlled by cryogelation, and the pore size distribution, by the thaw temperature. The addition of galactose further influenced the porosity, pore size, and Young's modulus of the cryogels. These HA-based hydrogels offer a tunable platform with a diversity of properties for directing cell function, with applications in tissue engineering and regenerative medicine.", "title": "" }, { "docid": "c187a6ad17503d269fe4c3a03fc4fd89", "text": "Despite the widespread support for live migration of Virtual Machines (VMs) in current hypervisors, these have significant shortcomings when it comes to migration of certain types of VMs. More specifically, with existing algorithms, there is a high risk of service interruption when migrating VMs with high workloads and/or over low-bandwidth networks. In these cases, VM memory pages are dirtied faster than they can be transferred over the network, which leads to extended migration downtime. In this contribution, we study the application of delta compression during the transfer of memory pages in order to increase migration throughput and thus reduce downtime. The delta compression live migration algorithm is implemented as a modification to the KVM hypervisor. Its performance is evaluated by migrating VMs running different type of workloads and the evaluation demonstrates a significant decrease in migration downtime in all test cases. In a benchmark scenario the downtime is reduced by a factor of 100. In another scenario a streaming video server is live migrated with no perceivable downtime to the clients while the picture is frozen for eight seconds using standard approaches. Finally, in an enterprise application scenario, the delta compression algorithm successfully live migrates a very large system that fails after migration using the standard algorithm. Finally, we discuss some general effects of delta compression on live migration and analyze when it is beneficial to use this technique.", "title": "" }, { "docid": "8419883ea9216cdf1ec65e18be78b182", "text": "Image captioning has attracted ever-increasing research attention in multimedia and computer vision. To encode the visual content, existing approaches typically utilize the off-the-shelf deep Convolutional Neural Network (CNN) model to extract visual features, which are sent to Recurrent Neural Network (RNN) based textual generators to output word sequence. Some methods encode visual objects and scene information with attention mechanism more recently. Despite the promising progress, one distinct disadvantage lies in distinguishing and modeling key semantic entities and their relations, which are in turn widely regarded as the important cues for us to describe image content. In this paper, we propose a novel image captioning model, termed StructCap. It parses a given image into key entities and their relations organized in a visual parsing tree, which is transformed and embedded under an encoder-decoder framework via visual attention. We give an end-to-end formulation to facilitate joint training of visual tree parser, structured semantic attention and RNN-based captioning modules. 
Experimental results on two public benchmarks, Microsoft COCO and Flickr30K, show that the proposed StructCap model outperforms the state-of-the-art approaches under various standard evaluation metrics.", "title": "" }, { "docid": "b999fe9bd7147ef9c555131d106ea43e", "text": "This paper presents the DeepCD framework which learns a pair of complementary descriptors jointly for image patch representation by employing deep learning techniques. It can be achieved by taking any descriptor learning architecture for learning a leading descriptor and augmenting the architecture with an additional network stream for learning a complementary descriptor. To enforce the complementary property, a new network layer, called data-dependent modulation (DDM) layer, is introduced for adaptively learning the augmented network stream with the emphasis on the training data that are not well handled by the leading stream. By optimizing the proposed joint loss function with late fusion, the obtained descriptors are complementary to each other and their fusion improves performance. Experiments on several problems and datasets show that the proposed method1 is simple yet effective, outperforming state-of-the-art methods.", "title": "" }, { "docid": "284587aa1992afe3c90fddc2cf2a8906", "text": "Plant genomes contribute to the structure and function of the plant microbiome, a key determinant of plant health and productivity. High-throughput technologies are revealing interactions between these complex communities and their hosts in unprecedented detail.", "title": "" }, { "docid": "5bb040a8b1efdf69edda2cb6461c28d3", "text": "Health surveillance systems based on online user-generated content often rely on the identification of textual markers that are related to a target disease. Given the high volume of available data, these systems benefit from an automatic feature selection process. This is accomplished either by applying statistical learning techniques, which do not consider the semantic relationship between the selected features and the inference task, or by developing labour-intensive text classifiers. In this paper, we use neural word embeddings, trained on social media content from Twitter, to determine, in an unsupervised manner, how strongly textual features are semantically linked to an underlying health concept. We then refine conventional feature selection methods by a priori operating on textual variables that are sufficiently close to a target concept. Our experiments focus on the supervised learning problem of estimating influenza-like illness rates from Google search queries. A “flu infection” concept is formulated and used to reduce spurious —and potentially confounding— features that were selected by previously applied approaches. In this way, we also address forms of scepticism regarding the appropriateness of the feature space, alleviating potential cases of overfitting. Ultimately, the proposed hybrid feature selection method creates a more reliable model that, according to our empirical analysis, improves the inference performance (Mean Absolute Error) of linear and nonlinear regressors by 12% and 28.7%, respectively.", "title": "" }, { "docid": "fdefc782f0438f4451c91d3b96e27b0b", "text": "Abstract: The roles played by learning and memorization represent an important topic in deep learning research. Recent work on this subject has shown that the optimization behavior of DNNs trained on shuffled labels is qualitatively different from DNNs trained with real labels. 
Here, we propose a novel permutation approach that can differentiate memorization from learning in deep neural networks (DNNs) trained as usual (i.e., using the real labels to guide the learning, rather than shuffled labels). The evaluation of weather the DNN has learned and/or memorized, happens in a separate step where we compare the predictive performance of a shallow classifier trained with the features learned by the DNN, against multiple instances of the same classifier, trained on the same input, but using shuffled labels as outputs. By evaluating these shallow classifiers in validation sets that share structure with the training set, we are able to tell apart learning from memorization. Application of our permutation approach to multi-layer perceptrons and convolutional neural networks trained on image data corroborated many findings from other groups. Most importantly, our illustrations also uncovered interesting dynamic patterns about how DNNs memorize over increasing numbers of training epochs, and support the surprising result that DNNs are still able to learn, rather than only memorize, when trained with pure Gaussian noise as input.", "title": "" }, { "docid": "2ead9e973f2a237b604bf68284e0acf1", "text": "Cognitive radio networks challenge the traditional wireless networking paradigm by introducing concepts firmly stemmed into the Artificial Intelligence (AI) field, i.e., learning and reasoning. This fosters optimal resource usage and management allowing a plethora of potential applications such as secondary spectrum access, cognitive wireless backbones, cognitive machine-to-machine etc. The majority of overview works in the field of cognitive radio networks deal with the notions of observation and adaptations, which are not a distinguished cognitive radio networking aspect. Therefore, this paper provides insight into the mechanisms for obtaining and inferring knowledge that clearly set apart the cognitive radio networks from other wireless solutions.", "title": "" }, { "docid": "d18a636768e6aea2e84c7fc59593ec89", "text": "Enterprise social networking (ESN) techniques have been widely adopted by firms to provide a platform for public communication among employees. This study investigates how the relationships between stressors (i.e., challenge and hindrance stressors) and employee innovation are moderated by task-oriented and relationship-oriented ESN use. Since challenge-hindrance stressors and employee innovation are individual-level variables and task-oriented ESN use and relationship-oriented ESN use are team-level variables, we thus use hierarchical linear model to test this cross-level model. The results of a survey of 191 employees in 50 groups indicate that two ESN use types differentially moderate the relationship between stressors and employee innovation. Specifically, task-oriented ESN use positively moderates the effects of the two stressors on employee innovation, while relationship-oriented ESN use negatively moderates the relationship between the two stressors and employee innovation. In addition, we find that challenge stressors significantly improve employee innovation. Theoretical and practical implications are discussed.", "title": "" }, { "docid": "936c9360d1a6c466d0da4a8ab686c871", "text": "Counterfactual Regret Minimization (CFR) is an efficient no-regret learning algorithm for decision problems modeled as extensive games. 
CFR’s regret bounds depend on the requirement of perfect recall: players always remember information that was revealed to them and the order in which it was revealed. In games without perfect recall, however, CFR’s guarantees do not apply. In this paper, we present the first regret bound for CFR when applied to a general class of games with imperfect recall. In addition, we show that CFR applied to any abstraction belonging to our general class results in a regret bound not just for the abstract game, but for the full game as well. We verify our theory and show how imperfect recall can be used to trade a small increase in regret for a significant reduction in memory in three domains: die-roll poker, phantom tic-tac-toe, and Bluff.", "title": "" }, { "docid": "82255315845c61fd6b8b33457a6dfbd8", "text": "Wireless Sensor Networks (WSNs) have been a subject of extensive research and have undergone explosive growth in the last few years. WSNs utilize collaborative measures such as data gathering, aggregation, processing, and management of sensing activities for enhanced performance. In order to communicate with the sink node, node having low power may have to traverse multi-hops. This requires neighbors' nodes to be used as relays. However, if the relay nodes are compromised or malicious, they may leak confidential information to unauthorized nodes in the WSN. Moreover, in many WSN applications, the deployment of sensor nodes is carried out in an ad-hoc fashion without careful examination. In such networks it is desirable to ensure the source to sink privacy and maximize the lifetime of the network, by finding secure energy-efficient route discovery and forwarding mechanisms. Careful management is also necessary, as processing required for secure routing is distributed over multiple nodes. An important consideration in this regard is energy-aware secure routing, which is significant in ensuring smooth operation of WSNs. As, these networks deal in sensitive data and are vulnerable to attack, it is important to make them secure against various types of threats. However, resource constraints could make the design, deployment and management of large WSNs a challenging proposition. The purpose of this paper is to highlight routing based security threats, provide a detailed assessment of existing solutions and present a Trust-based Energy Efficient Secure Routing Protocol (TEESR). The paper also highlights future research directions in of secure routing in multi-hop WSNs.", "title": "" }, { "docid": "6347b642cec08bf062f6e5594f805bd3", "text": "Using a multimethod approach, the authors conducted 4 studies to test life span hypotheses about goal orientations across adulthood. Confirming expectations, in Studies 1 and 2 younger adults reported a primary growth orientation in their goals, whereas older adults reported a stronger orientation toward maintenance and loss prevention. Orientation toward prevention of loss correlated negatively with well-being in younger adults. In older adults, orientation toward maintenance was positively associated with well-being. Studies 3 and 4 extend findings of a self-reported shift in goal orientation to the level of behavioral choice involving cognitive and physical fitness goals. Studies 3 and 4 also examine the role of expected resource demands. The shift in goal orientation is discussed as an adaptive mechanism to manage changing opportunities and constraints across adulthood.", "title": "" } ]
scidocsrr
d46a68970036d78f48b58af241bd6375
Math Anxiety, Working Memory, and Math Achievement in Early Elementary School
[ { "docid": "13fbd264cf1f515c0ad6ebb30644e32e", "text": "This article presents a new model that accounts for working memory spans in adults, the time-based resource-sharing model. The model assumes that both components (i.e., processing and maintenance) of the main working memory tasks require attention and that memory traces decay as soon as attention is switched away. Because memory retrievals are constrained by a central bottleneck and thus totally capture attention, it was predicted that the maintenance of the items to be recalled depends on both the number of memory retrievals required by the intervening treatment and the time allowed to perform them. This number of retrievals:time ratio determines the cognitive load of the processing component. The authors show in 7 experiments that working memory spans vary as a function of this cognitive load.", "title": "" } ]
[ { "docid": "c6a519ce49dc7b5776afe8035f79fc73", "text": "For 100 years, there has been no change in the basic structure of the electrical power grid. Experiences have shown that the hierarchical, centrally controlled grid of the 20th Century is ill-suited to the needs of the 21st Century. To address the challenges of the existing power grid, the new concept of smart grid has emerged. The smart grid can be considered as a modern electric power grid infrastructure for enhanced efficiency and reliability through automated control, high-power converters, modern communications infrastructure, sensing and metering technologies, and modern energy management techniques based on the optimization of demand, energy and network availability, and so on. While current power systems are based on a solid information and communication infrastructure, the new smart grid needs a different and much more complex one, as its dimension is much larger. This paper addresses critical issues on smart grid technologies primarily in terms of information and communication technology (ICT) issues and opportunities. The main objective of this paper is to provide a contemporary look at the current state of the art in smart grid communications as well as to discuss the still-open research issues in this field. It is expected that this paper will provide a better understanding of the technologies, potential advantages and research challenges of the smart grid and provoke interest among the research community to further explore this promising research area.", "title": "" }, { "docid": "e651af2be422e13548af7d3152d27539", "text": "A sample of 116 children (M=6 years 7 months) in Grade 1 was randomly assigned to experimental (n=60) and control (n=56) groups, with equal numbers of boys and girls in each group. The experimental group received a program aimed at improving representation and transformation of visuospatial information, whereas the control group received a substitute program. All children were administered mental rotation tests before and after an intervention program and a Global-Local Processing Strategies test before the intervention. The results revealed that initial gender differences in spatial ability disappeared following treatment in the experimental but not in the control group. Gender differences were moderated by strategies used to process visuospatial information. Intervention and processing strategies were essential in reducing gender differences in spatial abilities.", "title": "" }, { "docid": "554d0255aef7ffac9e923da5d93b97e3", "text": "In this demo paper, we present a text simplification approach that is directed at improving the performance of state-of-the-art Open Relation Extraction (RE) systems. As syntactically complex sentences often pose a challenge for current Open RE approaches, we have developed a simplification framework that performs a pre-processing step by taking a single sentence as input and using a set of syntactic-based transformation rules to create a textual input that is easier to process for subsequently applied Open RE systems.", "title": "" }, { "docid": "40ebf37907d738dd64b5a87b93b4a432", "text": "Deep learning has led to many breakthroughs in machine perception and data mining. Although there are many substantial advances of deep learning in the applications of image recognition and natural language processing, very few work has been done in video analysis and semantic event detection. 
Very deep inception and residual networks have yielded promising results in the 2014 and 2015 ILSVRC challenges, respectively. Now the question is whether these architectures are applicable to and computationally reasonable in a variety of multimedia datasets. To answer this question, an efficient and lightweight deep convolutional network is proposed in this paper. This network is carefully designed to decrease the depth and width of the state-of-the-art networks while maintaining the high-performance. The proposed deep network includes the traditional convolutional architecture in conjunction with residual connections and very light inception modules. Experimental results demonstrate that the proposed network not only accelerates the training procedure, but also improves the performance in different multimedia classification tasks.", "title": "" }, { "docid": "33ef3a8f8f218ef38dce647bf232a3a7", "text": "Network traffic monitoring and analysis-related research has struggled to scale for massive amounts of data in real time. Some of the vertical scaling solutions provide good implementation of signature based detection. Unfortunately these approaches treat network flows across different subnets and cannot apply anomaly-based classification if attacks originate from multiple machines at a lower speed, like the scenario of Peer-to-Peer Botnets. In this paper the authors build up on the progress of open source tools like Hadoop, Hive and Mahout to provide a scalable implementation of quasi-real-time intrusion detection system. The implementation is used to detect Peer-to-Peer Botnet attacks using machine learning approach. The contributions of this paper are as follows: (1) Building a distributed framework using Hive for sniffing and processing network traces enabling extraction of dynamic network features; (2) Using the parallel processing power of Mahout to build Random Forest based Decision Tree model which is applied to the problem of Peer-to-Peer Botnet detection in quasi-real-time. The implementation setup and performance metrics are presented as initial observations and future extensions are proposed. 2014 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "201db4c8aaa43d766ec707d7fff5fd65", "text": "Sentiment Analysis is a way of considering and grouping of opinions or views expressed in a text. In this age when social media technologies are generating vast amounts of data in the form of tweets, Facebook comments, blog posts, and Instagram comments, sentiment analysis of these usergenerated data provides very useful feedback. Since it is undisputable facts that twitter sentiment analysis has become an effective way in determining public sentiment about a certain topic product or issue. Thus, a lot of research have been ongoing in recent years to build efficient models for sentiment classification accuracy and precision. In this work, we analyse twitter data using support vector machine algorithm to classify tweets into positive, negative and neutral sentiments. This research try to find the relationship between feature hash bit size and the accuracy and precision of the model that is generated. We measure the effect of varying the feature has bit size on the accuracy and precision of the model. The research showed that as the feature hash bit size increases at a certain point the accuracy and precision value started decreasing with increase in the feature hash bit size. 
General Terms Hadoop, Data Processing, Machine learning", "title": "" }, { "docid": "a5cd7d46dc74d15344e2f3e9b79388a3", "text": "A number of differences have emerged between modern and classic approaches to constituency parsing in recent years, with structural components like grammars and featurerich lexicons becoming less central while recurrent neural network representations rise in popularity. The goal of this work is to analyze the extent to which information provided directly by the model structure in classical systems is still being captured by neural methods. To this end, we propose a high-performance neural model (92.08 F1 on PTB) that is representative of recent work and perform a series of investigative experiments. We find that our model implicitly learns to encode much of the same information that was explicitly provided by grammars and lexicons in the past, indicating that this scaffolding can largely be subsumed by powerful general-purpose neural machinery.", "title": "" }, { "docid": "8c9bc40dd1378ce74681426c59fe8d7f", "text": "Network Function Virtualization promises to reduce the overall operational and capital expenses experienced by the network operators. Running multiple network functions on top of a standard x86 server instead of dedicated appliances can increase the utilization of the underlying hardware and reduce the maintenance and management costs. However, total cost of ownership calculations are typically a function of the attainable network throughput, which in a virtualized system is highly dependent on the overall system architecture - in particular the input/ output (I/O) path. In this paper, we investigate the attainable performance of an x86 host running multiple Virtualized Network Functions (VNFs) under different I/O architectures: OVS, SRIOV and FD.io VPP. We show that the system throughput in a multi-VNF environment differs significantly from deployments where only a single VNF is running on a server, while different I/O architectures can achieve different levels of performance.", "title": "" }, { "docid": "c34e2227c97f71fbe3d2514e1e77e6e6", "text": "A major difficulty in a recommendation system for groups is to use a group aggregation strategy to ensure, among other things, the maximization of the average satisfaction of group members. This paper presents an approach based on the theory of noncooperative games to solve this problem. While group members can be seen as game players, the items for potential recommendation for the group comprise the set of possible actions. Achieving group satisfaction as a whole becomes, then, a problem of finding the Nash equilibrium. Experiments with a MovieLens dataset and a function of arithmetic mean to compute the prediction of group satisfaction for the generated recommendation have shown statistically significant results when compared to state-of-the-art aggregation strategies, in particular, when evaluation among group members are more heterogeneous. The feasibility of this unique approach is shown by the development of an application for Facebook, which recommends movies to groups of friends.", "title": "" }, { "docid": "20e13726ebc2430f7305c75d70761a18", "text": "The procedure of pancreaticoduodenectomy consists of three parts: resection, lymph node dissection, and reconstruction. 
A transection of the pancreas is commonly performed after a maneuver of the pancreatic head, exposing of the portal vein or lymph node dissection, and it should be confirmed as a safe method for pancreatic transection for decreasing the incidence of pancreatic fistula. However, there are only a few clinical trials with high levels of evidence for pancreatic surgery. In this report, we discuss the following issues: dissection of peripancreatic tissue, exposing the portal vein, pancreatic transection, dissection of the right hemicircle of the peri-superior mesenteric artery including plexus and lymph nodes, and dissection of the pancreatic parenchyma.", "title": "" }, { "docid": "6e1fe3f940bf7f824722f3ad1dd5fd40", "text": "Research on violent television and films, video games, and music reveals unequivocal evidence that media violence increases the likelihood of aggressive and violent behavior in both immediate and long-term contexts. The effects appear larger for milder than for more severe forms of aggression, but the effects on severe forms of violence are also substantial (r = .13 to .32) when compared with effects of other violence risk factors or medical effects deemed important by the medical community (e.g., effect of aspirin on heart attacks). The research base is large; diverse in methods, samples, and media genres; and consistent in overall findings. The evidence is clearest within the most extensively researched domain, television and film violence. The growing body of video-game research yields essentially the same conclusions. Short-term exposure increases the likelihood of physically and verbally aggressive behavior, aggressive thoughts, and aggressive emotions. Recent large-scale longitudinal studies provide converging evidence linking frequent exposure to violent media in childhood with aggression later in life, including physical assaults and spouse abuse. Because extremely violent criminal behaviors (e.g., forcible rape, aggravated assault, homicide) are rare, new longitudinal studies with larger samples are needed to estimate accurately how much habitual childhood exposure to media violence increases the risk for extreme violence. Well-supported theory delineates why and when exposure to media violence increases aggression and violence. Media violence produces short-term increases by priming existing aggressive scripts and cognitions, increasing physiological arousal, and triggering an automatic tendency to imitate observed behaviors. Media violence produces long-term effects via several types of learning processes leading to the acquisition of lasting (and automatically accessible) aggressive scripts, interpretational schemas, and aggression-supporting beliefs about social behavior, and by reducing individuals' normal negative emotional responses to violence (i.e., desensitization). Certain characteristics of viewers (e.g., identification with aggressive characters), social environments (e.g., parental influences), and media content (e.g., attractiveness of the perpetrator) can influence the degree to which media violence affects aggression, but there are some inconsistencies in research results. This research also suggests some avenues for preventive intervention (e.g., parental supervision, interpretation, and control of children's media use). However, extant research on moderators suggests that no one is wholly immune to the effects of media violence. Recent surveys reveal an extensive presence of violence in modern media. 
Furthermore, many children and youth spend an inordinate amount of time consuming violent media. Although it is clear that reducing exposure to media violence will reduce aggression and violence, it is less clear what sorts of interventions will produce a reduction in exposure. The sparse research literature suggests that counterattitudinal and parental-mediation interventions are likely to yield beneficial effects, but that media literacy interventions by themselves are unsuccessful. Though the scientific debate over whether media violence increases aggression and violence is essentially over, several critical tasks remain. Additional laboratory and field studies are needed for a better understanding of underlying psychological processes, which eventually should lead to more effective interventions. Large-scale longitudinal studies would help specify the magnitude of media-violence effects on the most severe types of violence. Meeting the larger societal challenge of providing children and youth with a much healthier media diet may prove to be more difficult and costly, especially if the scientific, news, public policy, and entertainment communities fail to educate the general public about the real risks of media-violence exposure to children and youth.", "title": "" }, { "docid": "23f3ab8e7bc934ebb786916a5c4c7d27", "text": "This paper presents a Haskell library for graph processing: DeltaGraph. One unique feature of this system is that intentions to perform graph updates can be memoized in-graph in a decentralized fashion, and the propagation of these intentions within the graph can be decoupled from the realization of the updates. As a result, DeltaGraph can respond to updates in constant time and work elegantly with parallelism support. We build a Twitter-like application on top of DeltaGraph to demonstrate its effectiveness and explore parallelism and opportunistic computing optimizations.", "title": "" }, { "docid": "0e8cde83260d6ca4d8b3099628c25fc2", "text": "1Department of Molecular Virology, Immunology and Medical Genetics, The Ohio State University Medical Center, Columbus, Ohio, USA. 2Department of Physics, Pohang University of Science and Technology, Pohang, Korea. 3School of Interdisciplinary Bioscience and Bioengineering, Pohang, Korea. 4Physics Department, The Ohio State University, Columbus, Ohio, USA. 5These authors contributed equally to this work. e-mail: fishel.7@osu.edu", "title": "" }, { "docid": "475c6c50d2b8e0a3f66628412f5bcf34", "text": "Task allocation is an important aspect of many multi-robot systems. The features and complexity of multi-robot task allocation (MRTA) problems are dictated by the requirements of the particular domain under consideration. These problems can range from those involving instantaneous distribution of simple, independent tasks among members of a homogenous team, to those requiring the time-extended scheduling of complex interrelated multi-step tasks for a members of a heterogenous team related by several constraints. The existing widely-used taxonomy for task allocation in multi-robot systems addresses only problems with independent tasks and does not deal with problems with interrelated utilities and constraints. A survey of recent work in multi-robot task allocation reveals that this is a significant deficiency with respect to realistic multi-robot task allocation problems. Thus, in this paper, we present a new, comprehensive taxonomy, iTax, that explicitly takes into consideration the issues of interrelated utilities and constraints. 
Our taxonomy maps categories of MRTA problems to existing mathematical models from combinatorial optimization and operations research, and hence draws important parallels between robotics and these fields.", "title": "" }, { "docid": "63ba0ed4931cadf5a9559a3c1aa0be20", "text": "MT19937 is a kind of Mersenne Twister, which is a pseudo-random number generator. This study presents new designs for an MT19937 circuit suitable for custom computing machinery for high-performance scientific simulations. Our designs can generate multiple random numbers per cycle (multi-port design). The estimated throughput of a 52-port design was 262 Gbps, which is 115 times higher than the software on a Pentium 4 (2.53 GHz) processor. Multi-port designs were proven to be more cost-effective than using multiple single-port designs. The initialization circuit can be included without performance loss in exchange for a slight increase of logic scale. key words: custom circuit, simulation, random number, Mersenne Twister, FPGA", "title": "" }, { "docid": "6494669dc199660c50e22d4eb62646fb", "text": "Recent advances in the instrumentation technology of sensory substitution have presented new opportunities to develop systems for compensation of sensory loss. In sensory substitution (e.g. of sight or vestibular function), information from an artificial receptor is coupled to the brain via a human-machine interface. The brain is able to use this information in place of that usually transmitted from an intact sense organ. Both auditory and tactile systems show promise for practical sensory substitution interface sites. This research provides experimental tools for examining brain plasticity and has implications for perceptual and cognition studies more generally.", "title": "" }, { "docid": "b4b6b51c8f8a0da586fe66b61711222c", "text": "Although game-tree search works well in perfect-information games, it is less suitable for imperfect-information games such as contract bridge. The lack of knowledge about the opponents' possible moves gives the game tree a very large branching factor, making it impossible to search a significant portion of this tree in a reasonable amount of time. This paper describes our approach for overcoming this problem. We represent information about bridge in a task network that is extended to represent multi-agency and uncertainty. Our game-playing procedure uses this task network to generate game trees in which the set of alternative choices is determined not by the set of possible actions, but by the set of available tactical and strategic schemes. We have tested this approach on declarer play in the game of bridge, in an implementation called Tignum 2. On 5000 randomly generated notrump deals, Tignum 2 beat the strongest commercially available program by 1394 to 1302, with 2304 ties. These results are statistically significant at the α = 0.05 level. Tignum 2 searched an average of only 8745.6 moves per deal in an average time of only 27.5 seconds per deal on a Sun SPARCstation 10. Further enhancements to Tignum 2 are currently underway.", "title": "" }, { "docid": "436a250dc621d58d70bee13fd3595f06", "text": "The solid-state transformer allows add-on intelligence to enhance power quality compatibility between source and load. It is desired to demonstrate the benefits gained by the use of such a device. Recent advancement in semiconductor devices and converter topologies facilitated a newly proposed intelligent universal transformer (IUT), which can isolate a disturbance from either source or load. 
This paper describes the basic circuit and the operating principle for the multilevel converter based IUT and its applications for medium voltages. Various power quality enhancement features are demonstrated with computer simulation for a complete IUT circuit.", "title": "" }, { "docid": "cb7d7c083106e808ec3ca5196c310f53", "text": "In a data streaming setting, data points are observed one by one. The concepts to be learned from the data points may change infinitely often as the data is streaming. In this paper, we extend the idea of testing exchangeability online (Vovk et al., 2003) to a martingale framework to detect concept changes in time-varying data streams. Two martingale tests are developed to detect concept changes using: (i) martingale values, a direct consequence of the Doob's Maximal Inequality, and (ii) the martingale difference, justified using the Hoeffding-Azuma Inequality. Under some assumptions, the second test theoretically has a lower probability than the first test of rejecting the null hypothesis, \"no concept change in the data stream\", when it is in fact correct. Experiments show that both martingale tests are effective in detecting concept changes in time-varying data streams simulated using two synthetic data sets and three benchmark data sets.", "title": "" } ]
scidocsrr
4b389899d6cd2c2f2eaa32c8a985d1e9
Attacks on the Keeloq Block Cipher and Authentication Systems
[ { "docid": "f7bed669e86a76f707e0f22e58f15de9", "text": "A new stream cipher, Grain, is proposed. The design targets hardware environments where gate count, power consumption and memory is very limited. It is based on two shift registers and a nonlinear output function. The cipher has the additional feature that the speed can be increased at the expense of extra hardware. The key size is 80 bits and no attack faster than exhaustive key search has been identified. The hardware complexity and throughput compares favourably to other hardware oriented stream ciphers like E0 and A5/1.", "title": "" } ]
[ { "docid": "ea544860e3c8d8b154985af822c4a9ea", "text": "Learning to walk over a graph towards a target node for a given input query and a source node is an important problem in applications such as knowledge base completion (KBC). It can be formulated as a reinforcement learning (RL) problem with a known state transition model. To overcome the challenge of sparse reward, we develop a graph-walking agent called M-Walk, which consists of a deep recurrent neural network (RNN) and Monte Carlo Tree Search (MCTS). The RNN encodes the state (i.e., history of the walked path) and maps it separately to a policy, a state value and state-action Q-values. In order to effectively train the agent from sparse reward, we combine MCTS with the neural policy to generate trajectories yielding more positive rewards. From these trajectories, the network is improved in an off-policy manner using Q-learning, which modifies the RNN policy via parameter sharing. Our proposed RL algorithm repeatedly applies this policy-improvement step to learn the entire model. At test time, MCTS is again combined with the neural policy to predict the target node. Experimental results on several graph-walking benchmarks show that M-Walk is able to learn better policies than other RL-based methods, which are mainly based on policy gradients. M-Walk also outperforms traditional KBC baselines.", "title": "" }, { "docid": "990d811789fd5025d784a147facf9d07", "text": "1389-1286/$ see front matter 2012 Elsevier B.V http://dx.doi.org/10.1016/j.comnet.2012.06.016 ⇑ Corresponding author. Tel.: +216 96 819 500. E-mail addresses: olfa.gaddour@enis.rnu.tn (O isep.ipp.pt (A. Koubâa). IPv6 Routing Protocol for Low Power and Lossy Networks (RPL) is a routing protocol specifically designed for Low power and Lossy Networks (LLN) compliant with the 6LoWPAN protocol. It currently shows up as an RFC proposed by the IETF ROLL working group. However, RPL has gained a lot of maturity and is attracting increasing interest in the research community. The absence of surveys about RPL motivates us to write this paper, with the objective to provide a quick introduction to RPL. In addition, we present the most relevant research efforts made around RPL routing protocol that pertain to its performance evaluation, implementation, experimentation, deployment and improvement. We also present an experimental performance evaluation of RPL for different network settings to understand the impact of the protocol attributes on the network behavior, namely in terms of convergence time, energy, packet loss and packet delay. Finally, we point out open research challenges on the RPL design. We believe that this survey will pave the way for interested researchers to understand its behavior and contributes for further relevant research works. 2012 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "0444b38c0d20c999df4cb1294b5539c3", "text": "Decimal hardware arithmetic units have recently regained popularity, as there is now a high demand for high performance decimal arithmetic. We propose a novel method for carry-free addition of decimal numbers, where each equally weighted decimal digit pair of the two operands is partitioned into two weighted bit-sets. The arithmetic values of these bit-sets are evaluated, in parallel, for fast computation of the transfer digit and interim sum. 
In the proposed fully redundant adder (vs. semi-redundant ones such as decimal carry-save adders) both operands and sum are redundant decimal numbers with overloaded decimal digit set [0, 15]. This adder is shown to improve upon the latest high-performance similar works and outperform all previous similar adders. However, there is a drawback that the adder logic cannot be efficiently adapted for subtraction. Nevertheless, this adder and its restricted-input varieties are shown to efficiently fit in the design of a parallel decimal multiplier. The two-to-one partial product reduction ratio that is attained via the proposed adder has led to a VLSI-friendly recursive partial product reduction tree. Two alternative architectures for decimal multipliers are presented; one is slower, but area-improved, and the other one consumes more area, but is delay-improved. However, both are faster in comparison with previously reported parallel decimal multipliers. The area and latency comparisons are based on logical effort analysis under the same assumptions for all the evaluated adders and multipliers. Moreover, performance correctness of all the adders is checked via running exhaustive tests on the corresponding VHDL codes. For more reliable evaluation, we report the result of synthesizing these adders by Synopsys Design Compiler using a TSMC 0.13 μm standard CMOS process under various time constraints.", "title": "" }, { "docid": "b82facfc85ef2ae55f03beef7d1bb968", "text": "Stock movements are essentially driven by new information. Market data, financial news, and social sentiment are believed to have impacts on stock markets. To study the correlation between information and stock movements, previous works typically concatenate the features of different information sources into one super feature vector. However, such concatenated vector approaches treat each information source separately and ignore their interactions. In this article, we model the multi-faceted investors' information and their intrinsic links with tensors. To identify the nonlinear patterns between stock movements and new information, we propose a supervised tensor regression learning approach to investigate the joint impact of different information sources on stock markets. Experiments on CSI 100 stocks in the year 2011 show that our approach outperforms the state-of-the-art trading strategies.", "title": "" }, { "docid": "c9be0a4079800f173cf9553b9a69581c", "text": "A 500W classical three-way Doherty power amplifier (DPA) with LDMOS devices at 1.8GHz is presented. Optimized device ratio is selected to achieve maximum efficiency as well as linearity. With a simple passive input driving network implementation, the demonstrator exhibits more than 55% efficiency with 9.9PAR WCDMA signal from 1805MHz-1880MHz. It can be linearized at -60dBc level with 20MHz LTE signal at an average output power of 49dBm.", "title": "" }, { "docid": "3ef70894ab9f80eeb7e5172eca3d4066", "text": "BACKGROUND\nWhile physical activity has been shown to improve cognitive performance and well-being, office workers are essentially sedentary. 
We compared the effects of physical activity performed as (i) one bout in the morning or (ii) as microbouts spread out across the day to (iii) a day spent sitting, on mood and energy levels and cognitive function.\n\n\nMETHODS\nIn a randomized crossover trial, 30 sedentary adults completed each of three conditions: 6 h of uninterrupted sitting (SIT), SIT plus 30 min of moderate-intensity treadmill walking in the morning (ONE), and SIT plus six hourly 5-min microbouts of moderate-intensity treadmill walking (MICRO). Self-perceived energy, mood, and appetite were assessed with visual analog scales. Vigor and fatigue were assessed with the Profile of Mood State questionnaire. Cognitive function was measured using a flanker task and the Comprehensive Trail Making Test. Intervention effects were tested using linear mixed models.\n\n\nRESULTS\nBoth ONE and MICRO increased self-perceived energy and vigor compared to SIT (p < 0.05 for all). MICRO, but not ONE, improved mood, decreased levels of fatigue and reduced food cravings at the end of the day compared to SIT (p < 0.05 for all). Cognitive function was not significantly affected by condition.\n\n\nCONCLUSIONS\nIn addition to the beneficial impact of physical activity on levels of energy and vigor, spreading out physical activity throughout the day improved mood, decreased feelings of fatigue and affected appetite. Introducing short bouts of activity during the workday of sedentary office workers is a promising approach to improve overall well-being at work without negatively impacting cognitive performance.\n\n\nTRIAL REGISTRATION\nNCT02717377 , registered 22 March 2016.", "title": "" }, { "docid": "072b842bb999a348ac2b6aa4a44f5ff2", "text": "Eating disorders, such as anorexia nervosa are a major health concern affecting many young individuals. Given the extensive adoption of social media technologies in the anorexia affected demographic, we study behavioral characteristics of this population focusing on the social media Tumblr. Aligned with observations in prior literature, we find the presence of two prominent anorexia related communities on Tumblr -- pro-anorexia and pro-recovery. Empirical analyses on several thousand Tumblr posts show use of the site as a media-rich platform replete with triggering content for enacting anorexia as a lifestyle choice. Through use of common pro-anorexia tags, the pro-recovery community however attempts to \"permeate\" into the pro-anorexia community to educate them of the health risks of anorexia. Further, the communities exhibit distinctive affective, social, cognitive, and linguistic style markers. Compared with recover- ing anorexics, pro-anorexics express greater negative affect, higher cognitive impairment, and greater feelings of social isolation and self-harm. We also observe that these characteristics may be used in a predictive setting to detect anorexia content with 80% accuracy. Based on our findings, clinical implications of detecting anorexia related content on social media are discussed.", "title": "" }, { "docid": "24e54cbc2c419de1d2d56e64eb428004", "text": "Internet of Things has become a predominant phenomenon in every sphere of smart life. Connected Cars and Vehicular Internet of Things, which involves communication and data exchange between vehicles, traffic infrastructure or other entities are pivotal to realize the vision of smart city and intelligent transportation. 
Vehicular Cloud offers a promising architecture wherein storage and processing capabilities of smart objects are utilized to provide on-the-fly fog platform. Researchers have demonstrated vulnerabilities in this emerging vehicular IoT ecosystem, where data has been stolen from critical sensors and smart vehicles controlled remotely. Security and privacy is important in Internet of Vehicles (IoV) where access to electronic control units, applications and data in connected cars should only be authorized to legitimate users, sensors or vehicles. In this paper, we propose an authorization framework to secure this dynamic system where interactions among entities is not pre-defined. We provide an extended access control oriented (E-ACO) architecture relevant to IoV and discuss the need of vehicular clouds in this time and location sensitive environment. We outline approaches to different access control models which can be enforced at various layers of E-ACO architecture and in the authorization framework. Finally, we discuss use cases to illustrate access control requirements in our vision of cloud assisted connected cars and vehicular IoT, and discuss possible research directions.", "title": "" }, { "docid": "c4e94803ae52dbbf4ac58831ff381467", "text": "Dynamic Adaptive Streaming over HTTP (DASH) is broadly deployed on the Internet for live and on-demand video streaming services. Recently, a new version of HTTP was proposed, named HTTP/2. One of the objectives of HTTP/2 is to improve the end-user perceived latency compared to HTTP/1.1. HTTP/2 introduces the possibility for the server to push resources to the client. This paper focuses on using the HTTP/2 protocol and the server push feature to reduce the start-up delay in a DASH streaming session. In addition, the paper proposes a new approach for video adaptation, which consists in estimating the bandwidth, using WebSocket (WS) over HTTP/2, and in making partial adaptation on the server side. Obtained results show that, using the server push feature and WebSocket layered over HTTP/2 allow faster loading time and faster convergence to the nominal state. Proposed solution is studied in the context of a direct client-server HTTP/2 connection. Intermediate caches are not considered in this study.", "title": "" }, { "docid": "7a98fe4a64c17587ed09c2fa924eb018", "text": "This article describes a methodology for collecting text from the Web to match a target sublanguage both in style (register) and topic. Unlike other work that estimates n-gram statistics from page counts, the approach here is to select and filter documents, which provides more control over the type of material contributing to the n-gram counts. The data can be used in a variety of ways; here, the different sources are combined in two types of mixture models. Focusing on conversational speech where data collection can be quite costly, experiments demonstrate the positive impact of Web collections on several tasks with varying amounts of data, including Mandarin and English telephone conversations and English meetings and lectures.", "title": "" }, { "docid": "266f89564a34239cf419ed9e83a2c988", "text": "The potential of high-resolution IKONOS and QuickBird satellite imagery for mapping and analysis of land and water resources at local scales in Minnesota is assessed in a series of three applications. 
The applications and accuracies evaluated include: (1) classification of lake water clarity (r = 0.89), (2) mapping of urban impervious surface area (r = 0.98), and (3) aquatic vegetation surveys of emergent and submergent plant groups (80% accuracy). There were several notable findings from these applications. For example, modeling and estimation approaches developed for Landsat TM data for continuous variables such as lake water clarity and impervious surface area can be applied to high-resolution satellite data. The rapid delivery of spatial data can be coupled with current GPS and field computer technologies to bring the imagery into the field for cover type validation. We also found several limitations in working with this data type. For example, shadows can influence feature classification and their effects need to be evaluated. Nevertheless, high-resolution satellite data has excellent potential to extend satellite remote sensing beyond what has been possible with aerial photography and Landsat data, and should be of interest to resource managers as a way to create timely and reliable assessments of land and water resources at a local scale.", "title": "" }, { "docid": "f81723af1cb8bf52b1348fe1f4d91d90", "text": "The backpropagation (BP) algorithm is often thought to be biologically implausible in the brain. One of the main reasons is that BP requires symmetric weight matrices in the feedforward and feedback pathways. To address this “weight transport problem” (Grossberg, 1987), two more biologically plausible algorithms, proposed by Liao et al. (2016) and Lillicrap et al. (2016), relax BP’s weight symmetry requirements and demonstrate comparable learning capabilities to that of BP on small datasets. However, a recent study by Bartunov et al. (2018) evaluates variants of target-propagation (TP) and feedback alignment (FA) on MNIST, CIFAR, and ImageNet datasets, and finds that although many of the proposed algorithms perform well on MNIST and CIFAR, they perform significantly worse than BP on ImageNet. Here, we additionally evaluate the sign-symmetry algorithm (Liao et al., 2016), which differs from both BP and FA in that the feedback and feedforward weights share signs but not magnitudes. We examine the performance of sign-symmetry and feedback alignment on ImageNet and MS COCO datasets using different network architectures (ResNet-18 and AlexNet for ImageNet, RetinaNet for MS COCO). Surprisingly, networks trained with sign-symmetry can attain classification performance approaching that of BP-trained networks. These results complement the study by Bartunov et al. (2018), and establish a new benchmark for future biologically plausible learning algorithms on more difficult datasets and more complex architectures. This material is based upon work supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216.", "title": "" }, { "docid": "d1eb1b18105d79c44dc1b6b3b2c06ee2", "text": "An implementation of a high-speed AES algorithm based on FPGA is presented in this paper in order to improve the safety of data in transmission. The mathematical principle, encryption process and logic structure of the AES algorithm are introduced. So as to reach the purpose of improving the system computing speed, the pipelining and parallel processing methods were used. 
The simulation results show that the high-speed AES encryption algorithm is implemented correctly. Using the AES encryption method, the data can be protected effectively.", "title": "" }, { "docid": "67265d70b2d704c0ab2898c933776dc2", "text": "The intima-media thickness (IMT) of the common carotid artery (CCA) is widely used as an early indicator of cardiovascular disease (CVD). Typically, the IMT grows with age and this is used as a sign of increased risk of CVD. Beyond thickness, there is also clinical interest in identifying how the composition and texture of the intima-media complex (IMC) change and how these textural changes grow into atherosclerotic plaques that can cause stroke. Although texture analysis of ultrasound images can clearly be affected by speckle noise, our goal here is to develop effective despeckling methods that can recover image texture associated with increased rates of atherosclerotic disease. In this study, we perform a comparative evaluation of several despeckle filtering methods, on 100 ultrasound images of the CCA, based on the extracted multiscale Amplitude-Modulation Frequency-Modulation (AM-FM) texture features and visual image quality assessment by two clinical experts. Texture features were extracted from the automatically segmented IMC for three different age groups. The hybrid median and homogeneous mask area despeckle filters showed the best performance by improving the class separation between the three age groups and also yielded significantly improved image quality.", "title": "" }, { "docid": "8ae257994c6f412ceb843fcb98a67043", "text": "Discovering the author's interest over time from documents has important applications in recommendation systems, authorship identification and opinion extraction. In this paper, we propose an interest drift model (IDM), which monitors the evolution of author interests in time-stamped documents. The model further uses the discovered author interest information to help find better topics. Unlike traditional topic models, our model is sensitive to the ordering of words, thus it extracts more information from the semantic meaning of the context. The experimental results show that the IDM model learns better topics than state-of-the-art topic models.", "title": "" }, { "docid": "036a8188c736e210f06ccdaf339c612b", "text": "MOTIVATION\nIn flux balance analysis of genome scale stoichiometric models of metabolism, the principal constraints are uptake or secretion rates, the steady state mass conservation assumption and reaction directionality. Here, we introduce an algorithmic pipeline for quantitative assignment of reaction directionality in multi-compartmental genome scale models based on an application of the second law of thermodynamics to each reaction. 
Given experimental or computationally estimated standard metabolite species Gibbs energy and metabolite concentrations, the algorithms bounds reaction Gibbs energy, which is transformed to in vivo pH, temperature, ionic strength and electrical potential.\n\n\nRESULTS\nThis cross-platform MATLAB extension to the COnstraint-Based Reconstruction and Analysis (COBRA) toolbox is computationally efficient, extensively documented and open source.\n\n\nAVAILABILITY\nhttp://opencobra.sourceforge.net.", "title": "" }, { "docid": "a5052a27ebbfb07b02fa18b3d6bff6fc", "text": "Popular techniques for domain adaptation such as the feature augmentation method of Daumé III (2009) have mostly been considered for sparse binary-valued features, but not for dense realvalued features such as those used in neural networks. In this paper, we describe simple neural extensions of these techniques. First, we propose a natural generalization of the feature augmentation method that uses K + 1 LSTMs where one model captures global patterns across all K domains and the remaining K models capture domain-specific information. Second, we propose a novel application of the framework for learning shared structures by Ando and Zhang (2005) to domain adaptation, and also provide a neural extension of their approach. In experiments on slot tagging over 17 domains, our methods give clear performance improvement over Daumé III (2009) applied on feature-rich CRFs.", "title": "" }, { "docid": "dcdaeb7c1da911d0b1a2932be92e0fb4", "text": "As computational agents are increasingly used beyond research labs, their success will depend on their ability to learn new skills and adapt to their dynamic, complex environments. If human users—without programming skills— can transfer their task knowledge to agents, learning can accelerate dramatically, reducing costly trials. The tamer framework guides the design of agents whose behavior can be shaped through signals of approval and disapproval, a natural form of human feedback. More recently, tamer+rl was introduced to enable human feedback to augment a traditional reinforcement learning (RL) agent that learns from a Markov decision process’s (MDP) reward signal. We address limitations of prior work on tamer and tamer+rl, contributing in two critical directions. First, the four successful techniques for combining human reward with RL from prior tamer+rl work are tested on a second task, and these techniques’ sensitivities to parameter changes are analyzed. Together, these examinations yield more general and prescriptive conclusions to guide others who wish to incorporate human knowledge into an RL algorithm. Second, tamer+rl has thus far been limited to a sequential setting, in which training occurs before learning from MDP reward. In this paper, we introduce a novel algorithm that shares the same spirit as tamer+rl but learns simultaneously from both reward sources, enabling the human feedback to come at any time during the reinforcement learning process. We call this algorithm simultaneous tamer+rl. To enable simultaneous learning, we introduce a new technique that appropriately determines the magnitude of the human model’s influence on the RL algorithm throughout time and state-action space.", "title": "" }, { "docid": "129795afe433742efcff2757508965fe", "text": "Introduction Healthcare in the United States (U.S.) is important in the lives of many citizens, but unfortunately the high costs of health-related services leave many patients with limited medical care. In response, the U.S. 
government has established and funded programs, such as Medicare [1], that provide financial assistance for qualifying people to receive needed medical services [2]. There are a number of issues facing healthcare and Abstract", "title": "" }, { "docid": "0c975acb5ab3f413078171840b17b232", "text": "We have analysed associated factors in 164 patients with acute compartment syndrome whom we treated over an eight-year period. In 69% there was an associated fracture, about half of which were of the tibial shaft. Most patients were men, usually under 35 years of age. Acute compartment syndrome of the forearm, with associated fracture of the distal end of the radius, was again seen most commonly in young men. Injury to soft tissues, without fracture, was the second most common cause of the syndrome and one-tenth of the patients had a bleeding disorder or were taking anticoagulant drugs. We found that young patients, especially men, were at risk of acute compartment syndrome after injury. When treating such injured patients, the diagnosis should be made early, utilising measurements of tissue pressure.", "title": "" } ]
scidocsrr
1c192091a05cb35d73d836c681da8f93
A Deep Cascade of Convolutional Neural Networks for MR Image Reconstruction
[ { "docid": "6987e20daf52bcf25afe6a7f0a95a730", "text": "Compressed sensing (CS) utilizes the sparsity of magnetic resonance (MR) images to enable accurate reconstruction from undersampled k-space data. Recent CS methods have employed analytical sparsifying transforms such as wavelets, curvelets, and finite differences. In this paper, we propose a novel framework for adaptively learning the sparsifying transform (dictionary), and reconstructing the image simultaneously from highly undersampled k-space data. The sparsity in this framework is enforced on overlapping image patches emphasizing local structure. Moreover, the dictionary is adapted to the particular image instance thereby favoring better sparsities and consequently much higher undersampling rates. The proposed alternating reconstruction algorithm learns the sparsifying dictionary, and uses it to remove aliasing and noise in one step, and subsequently restores and fills-in the k-space data in the other step. Numerical experiments are conducted on MR images and on real MR data of several anatomies with a variety of sampling schemes. The results demonstrate dramatic improvements on the order of 4-18 dB in reconstruction error and doubling of the acceptable undersampling factor using the proposed adaptive dictionary as compared to previous CS methods. These improvements persist over a wide range of practical data signal-to-noise ratios, without any parameter tuning.", "title": "" }, { "docid": "a5b0bf255205527c699c0cf3f7ee5270", "text": "This paper proposes a deep learning approach for accelerating magnetic resonance imaging (MRI) using a large number of existing high quality MR images as the training datasets. An off-line convolutional neural network is designed and trained to identify the mapping relationship between the MR images obtained from zero-filled and fully-sampled k-space data. The network is not only capable of restoring fine structures and details but is also compatible with online constrained reconstruction methods. Experimental results on real MR data have shown encouraging performance of the proposed method for efficient and accurate imaging.", "title": "" }, { "docid": "8b581e9ae50ed1f1aa1077f741fa4504", "text": "Recently, several models based on deep neural networks have achieved great success in terms of both reconstruction accuracy and computational performance for single image super-resolution. In these methods, the low resolution (LR) input image is upscaled to the high resolution (HR) space using a single filter, commonly bicubic interpolation, before reconstruction. This means that the super-resolution (SR) operation is performed in HR space. We demonstrate that this is sub-optimal and adds computational complexity. In this paper, we present the first convolutional neural network (CNN) capable of real-time SR of 1080p videos on a single K2 GPU. To achieve this, we propose a novel CNN architecture where the feature maps are extracted in the LR space. In addition, we introduce an efficient sub-pixel convolution layer which learns an array of upscaling filters to upscale the final LR feature maps into the HR output. By doing so, we effectively replace the handcrafted bicubic filter in the SR pipeline with more complex upscaling filters specifically trained for each feature map, whilst also reducing the computational complexity of the overall SR operation. 
We evaluate the proposed approach using images and videos from publicly available datasets and show that it performs significantly better (+0.15dB on Images and +0.39dB on Videos) and is an order of magnitude faster than previous CNN-based methods.", "title": "" } ]
[ { "docid": "b29e373d536b254af56d10348d3afd78", "text": "The keyterm-based approach is arguably intuitive for users to direct text-clustering processes and adapt results to various applications in text analysis. Its way of markedly influencing the results, for instance, by expressing important terms in relevance order, requires little knowledge of the algorithm and has predictable effect, speeding up the task. This article first presents a text-clustering algorithm that can easily be extended into an interactive algorithm. We evaluate its performance against state-of-the-art clustering algorithms in unsupervised mode. Next, we propose three interactive versions of the algorithm based on keyterm labeling, document labeling, and hybrid labeling. We then demonstrate that keyterm labeling is more effective than document labeling in text clustering. Finally, we propose a visual approach to support the keyterm-based version of the algorithm. Visualizations are provided for the whole collection as well as for detailed views of document and cluster relationships. We show the effectiveness and flexibility of our framework, Vis-Kt, by presenting typical clustering cases on real text document collections. A user study is also reported that reveals overwhelmingly positive acceptance toward keyterm-based clustering.", "title": "" }, { "docid": "f95ac9c90ad4f5a3c08924f9aa24ca20", "text": "The Semantic Web is an extension of the current web in which information is given well-defined meaning. The perspective of Semantic Web is to promote the quality and intelligence of the current web by changing its contents into machine understandable form. Therefore, semantic level information is one of the cornerstones of the Semantic Web. The process of adding semantic metadata to web resources is called Semantic Annotation. There are many obstacles against the Semantic Annotation, such as multilinguality, scalability, and issues which are related to diversity and inconsistency in content of different web pages. Due to the wide range of domains and the dynamic environments that the Semantic Annotation systems must be performed on, the problem of automating annotation process is one of the significant challenges in this domain. To overcome this problem, different machine learning approaches such as supervised learning, unsupervised learning and more recent ones like, semi-supervised learning and active learning have been utilized. In this paper we present an inclusive layered classification of Semantic Annotation challenges and discuss the most important issues in this field. Also, we review and analyze machine learning applications for solving semantic annotation problems. For this goal, the article tries to closely study and categorize related researches for better understanding and to reach a framework that can map machine learning techniques into the Semantic Annotation challenges and requirements.", "title": "" }, { "docid": "a3a9a3676874d5182f7d66b91a2d7387", "text": "There is considerable uncertainty about what properties, capabilities and motivations future AGIs will have. In some plausible scenarios, AGIs may pose security risks arising from accidents and defects. In order to mitigate these risks, prudent early AGI research teams will perform significant testing on their creations before use. 
Unfortunately, if an AGI has human-level or greater intelligence, testing itself may not be safe; some natural AGI goal systems create emergent incentives for AGIs to tamper with their test environments, make copies of themselves on the internet, or convince developers and operators to do dangerous things. In this paper, we survey the AGI containment problem – the question of how to build a container in which tests can be conducted safely and reliably, even on AGIs with unknown motivations and capabilities that could be dangerous. We identify requirements for AGI containers, available mechanisms, and weaknesses that need to be addressed.", "title": "" }, { "docid": "0653a241699070c8576c286178784ff1", "text": "The Smart Grid is among the most important and ambitious endeavors of our time. Deep integration of renewable energy sources is one component of the Smart Grid vision. A fundamental difficulty here is that renewable energy sources are highly variable – they are not dispatchable, are intermittent, and uncertain. The electricity grid must absorb this variability through a portfolio of solutions. These include aggregation of variable generation, curtailment, operating reserves, storage technologies, local generation, and distributed demand response. The various elements in this portfolio must be dynamically coordinated based on available information within the framework of electricity grid operations. This, in turn, will require critical technologies and methods drawn from optimization, modeling, and control, which are the core competencies of Systems and Control. This paper catalogues some of these systems and control research opportunities that arise in the deep integration of renewable energy sources.", "title": "" }, { "docid": "4423a606fa4dd3093e801160cd72b6b2", "text": "High voltage insulating bushing is an important component of GIS of the high potential and ground potential insulation. The even electric field distribution and reasonable structure of bushing will guarantee the safe operation of GIS. In order to solve the problem of structural design and electric field distribution of 126 (kV) GIS bushing, a mathematical model to calculate the electric field distribution of high voltage insulating bushing was established in this study, and numerical simulation and visualization processing for electric field distribution of bushing was made by ANSYS. Furthermore, the insulation size was determined and verified. Consequently, the numerical foundation of insulation structural design and the development of 126 (kV) GIS bushing are provided.", "title": "" }, { "docid": "ae934792ef7756244f7be8633dd0fed2", "text": "Combining modern high-power semiconductor devices with constantly improving magnetic materials opens up the possibility to replace bulky low frequency transformers with a new medium voltage medium frequency conversion structures. While there are still challenges to be addressed related to these so called power electronic transformers, a steadily increasing development effort is evident and considered in various contexts. Traction applications seem to be the first ones where proliferation of these new galvanic isolated power electronic converters is expected. In this particular application field, substantial weight and volume reduction could be achieved while providing additional functionality at the same time. 
In this paper a survey of recent R&D efforts in this field is presented.", "title": "" }, { "docid": "c5c5d56d2db769996d8164a0d0a5e00a", "text": "This paper presents the development of a polymer-based tendon-driven wearable robotic hand, Exo-Glove Poly. Unlike the previously developed Exo-Glove, a fabric-based tendon-driven wearable robotic hand, Exo-Glove Poly was developed using silicone to allow for sanitization between users in multiple-user environments such as hospitals. Exo-Glove Poly was developed to use two motors, one for the thumb and the other for the index/middle finger, and an under-actuation mechanism to grasp various objects. In order to realize Exo-Glove Poly, design features and fabrication processes were developed to permit adjustment to different hand sizes, to protect users from injury, to enable ventilation, and to embed Teflon tubes for the wire paths. The mechanical properties of Exo-Glove Poly were verified with a healthy subject through a wrap grasp experiment using a mat-type pressure sensor and an under-actuation performance experiment with a specialized test set-up. Finally, performance of the Exo-Glove Poly for grasping various shapes of object was verified, including objects needing under-actuation.", "title": "" }, { "docid": "c438965615449efd728acec42be0b6d1", "text": "Human adults generally find fast tempos more arousing than slow tempos, with tempo frequently manipulated in music to alter tension and emotion. We used a previously published method [McDermott, J., & Hauser, M. (2004). Are consonant intervals music to their ears? Spontaneous acoustic preferences in a nonhuman primate. Cognition, 94(2), B11-B21] to test cotton-top tamarins and common marmosets, two new-World primates, for their spontaneous responses to stimuli that varied systematically with respect to tempo. Across several experiments, we found that both tamarins and marmosets preferred slow tempos to fast. It is possible that the observed preferences were due to arousal, and that this effect is homologous to the human response to tempo. In other respects, however, these two monkey species showed striking differences compared to humans. Specifically, when presented with a choice between slow tempo musical stimuli, including lullabies, and silence, tamarins and marmosets preferred silence whereas humans, when similarly tested, preferred music. Thus despite the possibility of homologous mechanisms for tempo perception in human and nonhuman primates, there appear to be motivational ties to music that are uniquely human.", "title": "" }, { "docid": "5c9124859874e20cd8f6f7b79aeecf4d", "text": "Earl Stevick has always been interested in improving language teaching methodology, and he has never been afraid of innovation. His seminal work, Teaching Languages: A Way and Ways (Stevick 1980), introduced many of us to Counselling-Learning and Suggestopedia for the first time, and in Memory, Meaning and Method: A View of Language Teaching. (Stevick 1996) he discussed a wide range of theoretical and practical considerations to help us better understand the intricate cognitive and interpersonal processes whereby a language is acquired and then used for meaningful communication. The proposal in this chapter to revitalize Communicative Language Teaching (CLT) in the light of contemporary scholarly advances is fully within the spirit of Earl's approach.' 
By the turn of the new millennium, CLT had become a real buzzword in language teaching methodology, but the extent to which the term covers a well-defined and uniform teaching method is highly questionable. In fact, since the genesis of CLT in the early 1970s, its proponents have developed a very wide range of variants that were only loosely related to each other (for overviews, see Savignon 2005; Spada 2007). In this chapter I first look at the core characteristics of CLT to explore the roots of the diverse interpretations and then argue that in order for CLT to fulfil all the expectations attached to it in the twenty-first century, the method needs to be revised according to the latest findings of psycholinguistic research. I will conclude the chapter by outlining the main principles of a proposed revised approach that I have termed the `Principled Communicative Approach' (PCA).", "title": "" }, { "docid": "1625a3acd2780b74818cde6b1fafecd4", "text": "MACsec provides authenticity and integrity for data frame on data link layer by implementing data encryption. For these advantages, MACsec is highlighted as a solution to protect and transmit data safely from the various security threats in the Wired and Wireless LAN. Until now, MACsec can not guarantee stable ARP management for itself [1]. This paper will propose a design of enhanced ARP to protect IP and MAC address from external threats and the enhanced ARP will encrypt ARP Packet by SAK (Secure Association Key) for the authentication. It will help to keep the address system safe from various security threats on data link layer.", "title": "" }, { "docid": "e887653429edaefd4ef08c9b15feb872", "text": "The level of presence, or immersion, a person feels with media influences the effect media has on them. This project examines both the causes and consequences of presence in the context of violent video game play. In a between subjects design, 227 participants were randomly assigned to play either a violent or a non violent video game. Causal modeling techniques revealed two separate paths to presence. First, individual differences predicted levels of presence: men felt more presence while playing the video game, as did those who play video games more frequently. Secondly, those who perceived the game to be more violent felt more presence. Those who felt more presence, felt more resentment, were more verbally aggressive, and that led to increased physically aggressive intentions. Keywords--Presence as immersion, video games, aggressive affect, violence, aggression, and social learning theory.", "title": "" }, { "docid": "1571fbb923755323e32ac7d023bd1025", "text": "Natural language generation (NLG) is an important component in spoken dialogue systems. This paper presents a model called Encoder-Aggregator-Decoder which is an extension of an Recurrent Neural Network based Encoder-Decoder architecture. The proposed Semantic Aggregator consists of two components: an Aligner and a Refiner. The Aligner is a conventional attention calculated over the encoded input information, while the Refiner is another attention or gating mechanism stacked over the attentive Aligner in order to further select and aggregate the semantic elements. The proposed model can be jointly trained both sentence planning and surface realization to produce natural language utterances. 
The model was extensively assessed on four different NLG domains, in which the experimental results showed that the proposed generator consistently outperforms the previous methods on all the NLG domains.", "title": "" }, { "docid": "c839542db0e80ce253a170a386d91bab", "text": "Description\nThe American College of Physicians (ACP) developed this guideline to present the evidence and provide clinical recommendations on the management of gout.\n\n\nMethods\nUsing the ACP grading system, the committee based these recommendations on a systematic review of randomized, controlled trials; systematic reviews; and large observational studies published between January 2010 and March 2016. Clinical outcomes evaluated included pain, joint swelling and tenderness, activities of daily living, patient global assessment, recurrence, intermediate outcomes of serum urate levels, and harms.\n\n\nTarget Audience and Patient Population\nThe target audience for this guideline includes all clinicians, and the target patient population includes adults with acute or recurrent gout.\n\n\nRecommendation 1\nACP recommends that clinicians choose corticosteroids, nonsteroidal anti-inflammatory drugs (NSAIDs), or colchicine to treat patients with acute gout. (Grade: strong recommendation, high-quality evidence).\n\n\nRecommendation 2\nACP recommends that clinicians use low-dose colchicine when using colchicine to treat acute gout. (Grade: strong recommendation, moderate-quality evidence).\n\n\nRecommendation 3\nACP recommends against initiating long-term urate-lowering therapy in most patients after a first gout attack or in patients with infrequent attacks. (Grade: strong recommendation, moderate-quality evidence).\n\n\nRecommendation 4\nACP recommends that clinicians discuss benefits, harms, costs, and individual preferences with patients before initiating urate-lowering therapy, including concomitant prophylaxis, in patients with recurrent gout attacks. (Grade: strong recommendation, moderate-quality evidence).", "title": "" }, { "docid": "56b706edc6d1b6a2ff64770cb3f79c2e", "text": "The ancient oriental game of Go has long been considered a grand challenge for artificial intelligence. For decades, computer Go has defied the classical methods in game tree search that worked so successfully for chess and checkers. However, recent play in computer Go has been transformed by a new paradigm for tree search based on Monte-Carlo methods. Programs based on Monte-Carlo tree search now play at human-master levels and are beginning to challenge top professional players. In this paper, we describe the leading algorithms for Monte-Carlo tree search and explain how they have advanced the state of the art in computer Go.", "title": "" }, { "docid": "dca37efd0882f29521356d27d3abf88f", "text": "During rest, envelopes of band-limited on-going MEG signals co-vary across the brain in consistent patterns, which have been related to resting-state networks measured with fMRI. To investigate the genesis of such envelope correlations, we consider a whole-brain network model assuming two distinct fundamental scenarios: one where each brain area generates oscillations in a single frequency, and a novel one where each brain area can generate oscillations in multiple frequency bands. The models share, as a common generator of damped oscillations, the normal form of a supercritical Hopf bifurcation operating at the critical border between the steady state and the oscillatory regime. 
The envelopes of the simulated signals are compared with empirical MEG data using new methods to analyse the envelope dynamics in terms of their phase coherence and stability across the spectrum of carrier frequencies. Considering the whole-brain model with a single frequency generator in each brain area, we obtain the best fit with the empirical MEG data when the fundamental frequency is tuned at 12Hz. However, when multiple frequency generators are placed at each local brain area, we obtain an improved fit of the spatio-temporal structure of on-going MEG data across all frequency bands. Our results indicate that the brain is likely to operate on multiple frequency channels during rest, introducing a novel dimension for future models of large-scale brain activity.", "title": "" }, { "docid": "a6e18aa7f66355fb8407798a37f53f45", "text": "We review some of the recent advances in level-set methods and their applications. In particular, we discuss how to impose boundary conditions at irregular domains and free boundaries, as well as the extension of level-set methods to adaptive Cartesian grids and parallel architectures. Illustrative applications are taken from the physical and life sciences. Fast sweeping methods are briefly discussed.", "title": "" }, { "docid": "5447d3fe8ed886a8792a3d8d504eaf44", "text": "Glucose-responsive delivery of insulin mimicking the function of pancreatic β-cells to achieve meticulous control of blood glucose (BG) would revolutionize diabetes care. Here the authors report the development of a new glucose-responsive insulin delivery system based on the potential interaction between the glucose derivative-modified insulin (Glc-Insulin) and glucose transporters on erythrocytes (or red blood cells, RBCs) membrane. After being conjugated with the glucosamine, insulin can efficiently bind to RBC membranes. The binding is reversible in the setting of hyperglycemia, resulting in fast release of insulin and subsequent drop of BG level in vivo. The delivery vehicle can be further simplified utilizing injectable polymeric nanocarriers coated with RBC membrane and loaded with Glc-Insulin. The described work is the first demonstration of utilizing RBC membrane to achieve smart insulin delivery with fast responsiveness.", "title": "" }, { "docid": "a457545baa59e39e6ef6d7e0d2f9c02e", "text": "The domain adaptation problem in machine learning occurs when the test data generating distribution differs from the one that generates the training data. It is clear that the success of learning under such circumstances depends on similarities between the two data distributions. We study assumptions about the relationship between the two distributions that one needed for domain adaptation learning to succeed. We analyze the assumptions in an agnostic PAC-style learning model for a the setting in which the learner can access a labeled training data sample and an unlabeled sample generated by the test data distribution. We focus on three assumptions: (i) similarity between the unlabeled distributions, (ii) existence of a classifier in the hypothesis class with low error on both training and testing distributions, and (iii) the covariate shift assumption. I.e., the assumption that the conditioned label distribution (for each data point) is the same for both the training and test distributions. We show that without either assumption (i) or (ii), the combination of the remaining assumptions is not sufficient to guarantee successful learning. 
Our negative results hold with respect to any domain adaptation learning algorithm, as long as it does not have access to target labeled examples. In particular, we provide formal proofs that the popular covariate shift assumption is rather weak and does not relieve the necessity of the other assumptions. We also discuss the intuitively appealing Appearing in Proceedings of the 13 International Conference on Artificial Intelligence and Statistics (AISTATS) 2010, Chia Laguna Resort, Sardinia, Italy. Volume 9 of JMLR: W&CP 9. Copyright 2010 by the authors. paradigm of re-weighting the labeled training sample according to the target unlabeled distribution and show that, somewhat counter intuitively, we show that paradigm cannot be trusted in the following sense. There are DA tasks that are indistinguishable as far as the training data goes but in which re-weighting leads to significant improvement in one task while causing dramatic deterioration of the learning success in the other.", "title": "" }, { "docid": "6b7a1ec7fe105dc7e83291e39e8664ec", "text": "The clustering problem is well known in the database literature for its numerous applications in problems such as customer segmentation, classification and trend analysis. Unfortunately, all known algorithms tend to break down in high dimensional spaces because of the inherent sparsity of the points. In such high dimensional spaces not all dimensions may be relevant to a given cluster. One way of handling this is to pick the closely correlated dimensions and find clusters in the corresponding subspace. Traditional feature selection algorithms attempt to achieve this. The weakness of this approach is that in typical high dimensional data mining applications different sets of points may cluster better for different subsets of dimensions. The number of dimensions in each such cluster-specific subspace may also vary. Hence, it may be impossible to find a single small subset of dimensions for all the clusters. We therefore discuss a generalization of the clustering problem, referred to as the projected clustering problem, in which the subsets of dimensions selected are specific to the clusters themselves. We develop an algorithmic framework for solving the projected clustering problem, and test its performance on synthetic data.", "title": "" }, { "docid": "46ad3ffba69ccf8f41fa598891e571d8", "text": "Spectral band selection is a fundamental problem in hyperspectral data processing. In this letter, a new band-selection method based on mutual information (MI) is proposed. MI measures the statistical dependence between two random variables and can therefore be used to evaluate the relative utility of each band to classification. A new strategy is described to estimate the MI using a priori knowledge of the scene, reducing reliance on a \"ground truth\" reference map, by retaining bands with high associated MI values (subject to the so-called \"complementary\" conditions). Simulations of classification performance on 16 classes of vegetation from the AVIRIS 92AV3C data set show the effectiveness of the method, which outperforms an MI-based method using the associated reference map, an entropy-based method, and a correlation-based method. It is also competitive with the steepest ascent algorithm at much lower computational cost", "title": "" } ]
scidocsrr
3c873fe84f598471dde2ed6ce8fb0e78
Identifying the characteristics of vulnerable code changes: an empirical study
[ { "docid": "6cf18bea11ea8e95f24b7db69d3924e2", "text": "Experimentation in software engineering is necessar y but difficult. One reason is that there are a lar ge number of context variables, and so creating a cohesive under standing of experimental results requires a mechani sm for motivating studies and integrating results. It requ ires a community of researchers that can replicate studies, vary context variables, and build models that represent the common observations about the discipline. This paper discusses the experience of the authors, based upon a c llection of experiments, in terms of a framewo rk f r organizing sets of related studies. With such a fra mework, experiments can be viewed as part of common families of studies, rather than being isolated events. Common families of studies can contribute to important and relevant hypotheses that may not be suggested by individual experiments. A framework also facilitates building knowledge in an incremental manner through the replication of experiments within families of studies. To support the framework, this paper discusses the exp riences of the authors in carrying out empirica l studies, with specific emphasis on persistent problems encountere d in xperimental design, threats to validity, crit eria for evaluation, and execution of experiments in the dom ain of software engineering.", "title": "" } ]
[ { "docid": "f29d0ea5ff5c96dadc440f4d4aa229c6", "text": "Wikipedia infoboxes are a valuable source of structured knowledge for global knowledge sharing. However, infobox information is very incomplete and imbalanced among the Wikipedias in different languages. It is a promising but challenging problem to utilize the rich structured knowledge from a source language Wikipedia to help complete the missing infoboxes for a target language. In this paper, we formulate the problem of cross-lingual knowledge extraction from multilingual Wikipedia sources, and present a novel framework, called WikiCiKE, to solve this problem. An instancebased transfer learning method is utilized to overcome the problems of topic drift and translation errors. Our experimental results demonstrate that WikiCiKE outperforms the monolingual knowledge extraction method and the translation-based method.", "title": "" }, { "docid": "d2a89459ca4a0e003956d6fe4871bb34", "text": "In this paper, a high-efficiency high power density LLC resonant converter with a matrix transformer is proposed. A matrix transformer can help reduce leakage inductance and the ac resistance of windings so that the flux cancellation method can then be utilized to reduce core size and loss. Synchronous rectifier (SR) devices and output capacitors are integrated into the secondary windings to eliminate termination-related winding losses, via loss and reduce leakage inductance. A 1 MHz 390 V/12 V 1 kW LLC resonant converter prototype is built to verify the proposed structure. The efficiency can reach as high as 95.4%, and the power density of the power stage is around 830 W/in3.", "title": "" }, { "docid": "8c36e881f03a1019158cdae2e5de876c", "text": "The projects with embedded systems are used for many different purposes, being a major challenge for the community of developers of such systems. As we benefit from technological advances the complexity of designing an embedded system increases significantly. This paper presents GERSE, a guideline to requirements elicitation for embedded systems. Despite of advances in the area of embedded systems, there is a shortage of requirements elicitation techniques that meet the particularities of this area. The contribution of GERSE is to improve the capture process and organization of the embedded systems requirements.", "title": "" }, { "docid": "5fd97a266042ba119976c43e47dbe2ab", "text": "The increasing availability of heterogeneous XML sources has raised a number of issues concerning how to represent and manage these semi-structured data. In recent years due to the importance of managing these resources and extracting knowledge from them, lots of methods have been proposed in order to represent and cluster them in different ways. Different similarity measures have been extended and also in some context semantic issues have been taken into account. In this context, we review different XML clustering methods with considering different representation methods such as tree based and vector based with use of different similarity measures. We also propose taxonomy for these proposed methods.", "title": "" }, { "docid": "dcd9a430a69fc3a938ea1068273627ff", "text": "Background Nursing theory should provide the principles that underpin practice and help to generate further nursing knowledge. However, a lack of agreement in the professional literature on nursing theory confuses nurses and has caused many to dismiss nursing theory as irrelevant to practice. 
This article aims to identify why nursing theory is important in practice. Conclusion By giving nurses a sense of identity, nursing theory can help patients, managers and other healthcare professionals to recognise the unique contribution that nurses make to the healthcare service ( Draper 1990 ). Providing a definition of nursing theory also helps nurses to understand their purpose and role in the healthcare setting.", "title": "" }, { "docid": "b56f65fd08c8b6a9fe9ff05441ff8734", "text": "While symbolic parsers can be viewed as deduction systems, t his view is less natural for probabilistic parsers. We present a view of parsing as directed hypergraph analysis which naturally covers both symbolic and probabilistic parsing. We illustrate the approach by showing how a dynamic extension of Dijkstra’s algorithm can be used to construct a probabilistic chart parser with an O(n3) time bound for arbitrary PCFGs, while preserving as much of t he flexibility of symbolic chart parsers as allowed by the inher ent ordering of probabilistic dependencies.", "title": "" }, { "docid": "41e03f4540a090a9dc4e9551aad99fb6", "text": "• Unlabeled: Context constructed without dependency labels • Simplified: Functionally similar dependency labels are collapsed • Basic: Standard dependency parse • Enhanced and Enhanced++: Dependency trees augmented (e.g., new edges between modifiers and conjuncts with parents’ labels) • Universal Dependencies (UD): Cross-lingual • Stanford Dependencies (SD): English-tailored • Prior work [1] has shown that embeddings trained using dependency contexts distinguish related words better than similar words. • What effects do decisions made with embeddings have on the characteristics of the word embeddings? • Do Universal Dependency (UD) embeddings capture different characteristics than English-tailored Stanford Dependency (SD) embeddings?", "title": "" }, { "docid": "020fe2e94d306482399b4d1aaa083e5f", "text": "A key analytical task across many domains is model building and exploration for predictive analysis. Data is collected, parsed and analyzed for relationships, and features are selected and mapped to estimate the response of a system under exploration. As social media data has grown more abundant, data can be captured that may potentially represent behavioral patterns in society. In turn, this unstructured social media data can be parsed and integrated as a key factor for predictive intelligence. In this paper, we present a framework for the development of predictive models utilizing social media data. We combine feature selection mechanisms, similarity comparisons and model cross-validation through a variety of interactive visualizations to support analysts in model building and prediction. In order to explore how predictions might be performed in such a framework, we present results from a user study focusing on social media data as a predictor for movie box-office success.", "title": "" }, { "docid": "0f9cc52899c7e25a17bb372977d46834", "text": "In modeling and rendering of complex procedural terrains the extraction of isosurfaces is an important part. In this paper we introduce an approach to generate high-quality isosurfaces from regular grids at interactive frame rates. The surface extraction is a variation of Dual Marching Cubes and designed as a set of well-balanced parallel computation kernels. In contrast to a straightforward parallelization we generate a quadrilateral mesh with full connectivity information and 1-ring vertex neighborhood. 
We use this information to smooth the extracted mesh and to approximate the smooth subdivision surface for detail tessellation. Both improve the visual fidelity when modeling procedural terrains interactively. Moreover, our extraction approach is generally applicable, for example in the field of volume visualization.", "title": "" }, { "docid": "5804eb5389b02f2f6c5692fe8f427501", "text": "reflection-type phase shifter with constant insertion loss over a wide relative phase-shift range is presented. This important feature is attributed to the salient integration of an impedance-transforming quadrature coupler with equalized series-resonated varactors. The impedance-transforming quadrature coupler is used to increase the maximal relative phase shift for a given varactor with a limited capacitance range. When the phase is tuned, the typical large insertion-loss variation of the phase shifter due to the varactor parasitic effect is minimized by shunting the series-resonated varactor with a resistor Rp. A set of closed-form equations for predicting the relative phase shift, insertion loss, and insertion-loss variation with respect to the quadrature coupler and varactor parameters is derived. Three phase shifters were implemented with a silicon varactor of a restricted capacitance range of Cv,min = 1.4 pF and Cv,max = 8 pF, wherein the parasitic resistance is close to 2 Omega. The measured insertion-loss variation is 0.1 dB over the relative phase-shift tuning range of 237deg at 2 GHz and the return losses are better than 20 dB, excellently agreeing with the theoretical and simulated results.", "title": "" }, { "docid": "ad8a727d0e3bd11cd972373451b90fe7", "text": "The loss functions of deep neural networks are complex and their geometric properties are not well understood. We show that the optima of these complex loss functions are in fact connected by simple curves over which training and test accuracy are nearly constant. We introduce a training procedure to discover these high-accuracy pathways between modes. Inspired by this new geometric insight, we also propose a new ensembling method entitled Fast Geometric Ensembling (FGE). Using FGE we can train high-performing ensembles in the time required to train a single model. We achieve improved performance compared to the recent state-of-the-art Snapshot Ensembles, on CIFAR-10, CIFAR-100, and ImageNet.", "title": "" }, { "docid": "51f9661061bf69f8d9303101c00558ec", "text": "In this paper we introduce an architecture maturity model for the domain of enterprise architecture. The model differs from other existing models in that it departs from the standard 5-level approach. It distinguishes 18 factors, called key areas, which are relevant to developing an architectural practice. Each key area has its own maturity development path that is balanced against the maturity development paths of the other key areas. Two real-life case studies are presented to illustrate the use of the model. Usage of the model in these cases shows that the model delivers recognizable results, that the results can be traced back to the basic approach to architecture taken by the organizations investigated and that the key areas chosen bear relevance to the architectural practice of the organizations. 1 MATURITY IN ENTERPRISE", "title": "" }, { "docid": "869ad7b6bf74f283c8402958a6814a21", "text": "In this paper, we make a move to build a dialogue system for automatic diagnosis. 
We first build a dataset collected from an online medical forum by extracting symptoms from both patients’ self-reports and conversational data between patients and doctors. Then we propose a taskoriented dialogue system framework to make the diagnosis for patients automatically, which can converse with patients to collect additional symptoms beyond their self-reports. Experimental results on our dataset show that additional symptoms extracted from conversation can greatly improve the accuracy for disease identification and our dialogue system is able to collect these symptoms automatically and make a better diagnosis.", "title": "" }, { "docid": "d8c45560377ac2774b1bbe8b8a61b1fb", "text": "Markov logic uses weighted formulas to compactly encode a probability distribution over possible worlds. Despite the use of logical formulas, Markov logic networks (MLNs) can be difficult to interpret, due to the often counter-intuitive meaning of their weights. To address this issue, we propose a method to construct a possibilistic logic theory that exactly captures what can be derived from a given MLN using maximum a posteriori (MAP) inference. Unfortunately, the size of this theory is exponential in general. We therefore also propose two methods which can derive compact theories that still capture MAP inference, but only for specific types of evidence. These theories can be used, among others, to make explicit the hidden assumptions underlying an MLN or to explain the predictions it makes.", "title": "" }, { "docid": "0f659ff5414e75aefe23bb85127d93dd", "text": "Important information is captured in medical documents. To make use of this information and intepret the semantics, technologies are required for extracting, analysing and interpreting it. As a result, rich semantics including relations among events, subjectivity or polarity of events, become available. The First Workshop on Extraction and Processing of Rich Semantics from Medical Texts, is devoted to the technologies for dealing with clinical documents for medical information gathering and application in knowledge based systems. New approaches for identifying and analysing rich semantics are presented. In this paper, we introduce the topic and summarize the workshop contributions.", "title": "" }, { "docid": "56a072fc480c64e6a288543cee9cd5ac", "text": "The performance of object detection has recently been significantly improved due to the powerful features learnt through convolutional neural networks (CNNs). Despite the remarkable success, there are still several major challenges in object detection, including object rotation, within-class diversity, and between-class similarity, which generally degenerate object detection performance. To address these issues, we build up the existing state-of-the-art object detection systems and propose a simple but effective method to train rotation-invariant and Fisher discriminative CNN models to further boost object detection performance. This is achieved by optimizing a new objective function that explicitly imposes a rotation-invariant regularizer and a Fisher discrimination regularizer on the CNN features. Specifically, the first regularizer enforces the CNN feature representations of the training samples before and after rotation to be mapped closely to each other in order to achieve rotation-invariance. The second regularizer constrains the CNN features to have small within-class scatter but large between-class separation. 
We implement our proposed method under four popular object detection frameworks, including region-CNN (R-CNN), Fast R- CNN, Faster R- CNN, and R- FCN. In the experiments, we comprehensively evaluate the proposed method on the PASCAL VOC 2007 and 2012 data sets and a publicly available aerial image data set. Our proposed methods outperform the existing baseline methods and achieve the state-of-the-art results.", "title": "" }, { "docid": "ac5f518cbd783060af1cf6700b994469", "text": "Scalable evolutionary computation has. become an intensively studied research topic in recent years. The issue of scalability is predominant in any field of algorithmic design, but it became particularly relevant for the design of competent genetic algorithms once the scalability problems of simple genetic algorithms were understood. Here we present some of the work that has aided in getting a clear insight in the scalability problems of simple genetic algorithms. Particularly, we discuss the important issue of building block mixing. We show how the need for mixing places a boundary in the GA parameter space that, together with the boundary from the schema theorem, delimits the region where the GA converges reliably to the optimum in problems of bounded difficulty. This region shrinks rapidly with increasing problem size unless the building blocks are tightly linked in the problem coding structure. In addition, we look at how straightforward extensions of the simple genetic algorithmnamely elitism, niching, and restricted mating are not significantly improving the scalability problems.", "title": "" }, { "docid": "e6d9dac0995f9cf711ee50b736a9832d", "text": "Reversing with a dolly steered trailer configuration is a hard task for any driver without extensive training. In this work we present a motion planning and control framework that can be used to automatically plan and execute complicated manoeuvres. The unstable dynamics of the reversing general 2-trailer configuration with off-axle hitching is first stabilised by an LQ-controller and then a pure pursuit path tracker is used on a higher level giving a cascaded controller that can track piecewise linear reference paths. This controller together with a kinematic model of the trailer configuration is then used for forward simulations within a Closed-Loop Rapidly Exploring Random Tree framework to generate motion plans that are not only kinematically feasible but also include the limitations of the controller's tracking performance when reversing. The approach is evaluated over a series of Monte Carlo simulations on three different scenarios and impressive success rates are achieved. Finally the approach is successfully tested on a small scale test platform where the motion plan is calculated and then sent to the platform for execution.", "title": "" }, { "docid": "fe62e3a9acfe5009966434aa1f39099d", "text": "Previous studies have found a subgroup of people with autism or Asperger Syndrome who pass second-order tests of theory of mind. However, such tests have a ceiling in developmental terms corresponding to a mental age of about 6 years. It is therefore impossible to say if such individuals are intact or impaired in their theory of mind skills. We report the performance of very high functioning adults with autism or Asperger Syndrome on an adult test of theory of mind ability. The task involved inferring the mental state of a person just from the information in photographs of a person's eyes. 
Relative to age-matched normal controls and a clinical control group (adults with Tourette Syndrome), the group with autism and Asperger Syndrome were significantly impaired on this task. The autism and Asperger Syndrome sample was also impaired on Happé's strange stories tasks. In contrast, they were unimpaired on two control tasks: recognising gender from the eye region of the face, and recognising basic emotions from the whole face. This provides evidence for subtle mindreading deficits in very high functioning individuals on the autistic continuum.", "title": "" }, { "docid": "b6012b1b5e74825269f9cf16e2f3e6f0", "text": "GPS enables a management to maintain Staff attendance and employee registration through mobile application, this application facilitates the staffs to login through mobile phone and track other staff members' whereabouts through mobile phone. In the present scenario manual registration through biometric systems is commonly in practice. The staff will be kept on informed about their attendance constantly by the admin when they login and log out so that the staff can keep a track on their attendance by using this application. The admin can track the location of any staff member using latitude, longitude and IMSI number.", "title": "" } ]
scidocsrr
c30cf761e7e620c057aa7ef49cdcb6bd
Performance Analysis of Multi-Hop Underwater Wireless Optical Communication Systems (Extended Version)
[ { "docid": "54b88e4c9e0bc31667e720f5f04c7f83", "text": "In clean ocean water, the performance of a underwater optical communication system is limited mainly by oceanic turbulence, which is defined as the fluctuations in the index of refraction resulting from temperature and salinity fluctuations. In this paper, using the refractive index spectrum of oceanic turbulence under weak turbulence conditions, we carry out, for a horizontally propagating plane wave and spherical wave, analysis of the aperture-averaged scintillation index, the associated probability of fade, mean signal-to-noise ratio, and mean bit error rate. Our theoretical results show that for various values of the rate of dissipation of mean squared temperature and the temperature-salinity balance parameter, the large-aperture receiver leads to a remarkable decrease of scintillation and consequently a significant improvement on the system performance. Such an effect is more noticeable in the plane wave case than in the spherical wave case.", "title": "" } ]
[ { "docid": "32be4be9baf522ff542107a4fd3340f8", "text": "One of the major challenges that cloud providers face is minimizing power consumption of their data centers. To this point, majority of current research focuses on energy efficient management of resources in the Infrastructure as a Service model and through virtual machine consolidation. However, containers are increasingly gaining popularity and going to be major deployment model in cloud environment and specifically in Platform as a Service. This paper focuses on improving the energy efficiency of servers for this new deployment model by proposing a framework that consolidates containers on virtual machines. We first formally present the container consolidation problem and then we compare a number of algorithms and evaluate their performance against metrics such as energy consumption, Service Level Agreement violations, average container migrations rate, and average number of created virtual machines. Our proposed framework and algorithms can be utilized in a private cloud to minimize energy consumption, or alternatively in a public cloud to minimize the total number of hours the virtual machines leased.", "title": "" }, { "docid": "9b6ef205d9697f8ee4958858c0fde651", "text": "Considerable literature has accumulated over the years regarding the combination of forecasts. The primary conclusion of this line of research is that forecast accuracy can be substantially improved through the combination of multiple individual forecasts. Furthermore, simple combination methods often work reasonably well relative to more complex combinations. This paper provides a review and annotated bibliography of that literature, including contributions from the forecasting, psychology, statistics, and management science literatures. The objectives are to provide a guide to the literature for students and researchers and to help researchers locate contributions in specific areas, both theoretical and applied. Suggestions for future research directions include (1) examination of simple combining approaches to determine reasons for their robustness, (2) development of alternative uses of multiple forecasts in order to make better use of the information they contain, (3) use of combined forecasts as benchmarks for forecast evaluation, and (4) study of subjective combination procedures. Finally, combining forecasts should become part of the mainstream of forecasting practice. In order to achieve this, practitioners should be encouraged to combine forecasts, and software to produce combined forecasts easily should be made available.", "title": "" }, { "docid": "648d6d316e9f9328f528ddc0c365db50", "text": "This paper presents a collaborative partitioning algorithm—a novel ensemblebased approach to coreference resolution. Starting from the all-singleton partition, we search for a solution close to the ensemble’s outputs in terms of a task-specific similarity measure. Our approach assumes a loose integration of individual components of the ensemble and can therefore combine arbitrary coreference resolvers, regardless of their models. Our experiments on the CoNLL dataset show that collaborative partitioning yields results superior to those attained by the individual components, for ensembles of both strong and weak systems. 
Moreover, by applying the collaborative partitioning algorithm on top of three state-of-the-art resolvers, we obtain the second-best coreference performance reported so far in the literature (MELA v08 score of 64.47).", "title": "" }, { "docid": "f91a9214409df84c4a53c92b2a14bbe3", "text": "OBJECTIVE\nwe performed the first systematic review with meta-analyses of the existing studies that examined mindfulness-based Baduanjin exercise for its therapeutic effects for individuals with musculoskeletal pain or insomnia.\n\n\nMETHODS\nBoth English- (PubMed, Web of Science, Elsevier, and Google Scholar) and Chinese-language (CNKI and Wangfang) electronic databases were used to search relevant articles. We used a modified PEDro scale to evaluate risk of bias across studies selected. All eligible RCTS were considered for meta-analysis. The standardized mean difference was calculated for the pooled effects to determine the magnitude of the Baduanjin intervention effect. For the moderator analysis, we performed subgroup meta-analysis for categorical variables and meta-regression for continuous variables.\n\n\nRESULTS\nThe aggregated result has shown a significant benefit in favour of Baduanjin at alleviating musculoskeletal pain (SMD = -0.88, 95% CI -1.02 to -0.74, p < 0.001, I² = 10.29%) and improving overall sleep quality (SMD = -0.48, 95% CI -0.95 to -0.01, p = 004, I² = 84.42%).\n\n\nCONCLUSIONS\nMindfulness-based Baduanjin exercise may be effective for alleviating musculoskeletal pain and improving overall sleep quality in people with chronic illness. Large, well-designed RCTs are needed to confirm these findings.", "title": "" }, { "docid": "fed9defe1a4705390d72661f96b38519", "text": "Multivariate resultants generalize the Sylvester resultant of two polynomials and characterize the solvability of a polynomial system. They also reduce the computation of all common roots to a problem in linear algebra. We propose a determinantal formula for the sparse resultant of an arbitrary system of n + 1 polynomials in n variables. This resultant generalizes the classical one and has significantly lower degree for polynomials that are sparse in the sense that their mixed volume is lower than their Bézout number. Our algorithm uses a mixed polyhedral subdivision of the Minkowski sum of the Newton polytopes in order to construct a Newton matrix. Its determinant is a nonzero multiple of the sparse resultant and the latter equals the GCD of at most n + 1 such determinants. This construction implies a restricted version of an effective sparse Nullstellensatz. For an arbitrary specialization of the coefficients, there are two methods that use one extra variable and yield the sparse resultant. This is the first algorithm to handle the general case with complexity polynomial in the resultant degree and simply exponential in n. We conjecture its extension to producing an exact rational expression for the sparse resultant.", "title": "" }, { "docid": "e84b6bbb2eaee0edb6ac65d585056448", "text": "As memory accesses become slower with respect to the processor and consume more power with increasing memory size, the focus of memory performance and power consumption has become increasingly important. With the trend to develop multi-threaded, multi-core processors, the demands on the memory system will continue to scale. However, determining the optimal memory system configuration is non-trivial. The memory system performance is sensitive to a large number of parameters. 
Each of these parameters take on a number of values and interact in fashions that make overall trends difficult to discern. A comparison of the memory system architectures becomes even harder when we add the dimensions of power consumption and manufacturing cost. Unfortunately, there is a lack of tools in the public-domain that support such studies. Therefore, we introduce DRAMsim, a detailed and highly-configurable C-based memory system simulator to fill this gap. DRAMsim implements detailed timing models for a variety of existing memories, including SDRAM, DDR, DDR2, DRDRAM and FB-DIMM, with the capability to easily vary their parameters. It also models the power consumption of SDRAM and its derivatives. It can be used as a standalone simulator or as part of a more comprehensive system-level model. We have successfully integrated DRAMsim into a variety of simulators including MASE [15], Sim-alpha [14], BOCHS[2] and GEMS[13]. The simulator can be downloaded from www.ece.umd.edu/dramsim.", "title": "" }, { "docid": "40b4a9b3a594e2a9cb7d489a3f44c328", "text": "The present article integrates findings from diverse studies on the generalized role of perceived coping self-efficacy in recovery from different types of traumatic experiences. They include natural disasters, technological catastrophes, terrorist attacks, military combat, and sexual and criminal assaults. The various studies apply multiple controls for diverse sets of potential contributors to posttraumatic recovery. In these different multivariate analyses, perceived coping self-efficacy emerges as a focal mediator of posttraumatic recovery. Verification of its independent contribution to posttraumatic recovery across a wide range of traumas lends support to the centrality of the enabling and protective function of belief in one's capability to exercise some measure of control over traumatic adversity.", "title": "" }, { "docid": "c4ea83bc1fbddbf13dbe96175a6aec4c", "text": "Recent work in machine learning and NLP has developed spectral algorithms for many learning tasks involving latent variables. Spectral algorithms rely on singular value decomposition as a basic operation, usually followed by some simple estimation method based on the method of moments. From a theoretical point of view, these methods are appealing in that they offer consistent estimators (and PAC-style guarantees of sample complexity) for several important latent-variable models. This is in contrast to the EM algorithm, which is an extremely successful approach, but which only has guarantees of reaching a local maximum of the likelihood function. From a practical point of view, the methods (unlike EM) have no need for careful initialization, and have recently been shown to be highly efficient (as one example, in work under submission by the authors on learning of latent-variable PCFGs, a spectral algorithm performs at identical accuracy to EM, but is around 20 times faster).", "title": "" }, { "docid": "0792abb24552f04c8b8c7cb71a4357ea", "text": "Deformable part-based models [1, 2] achieve state-of-the-art performance for object detection, but rely on heuristic initialization during training due to the optimization of non-convex cost function. This paper investigates limitations of such an initialization and extends earlier methods using additional supervision. We explore strong supervision in terms of annotated object parts and use it to (i) improve model initialization, (ii) optimize model structure, and (iii) handle partial occlusions. 
Our method is able to deal with sub-optimal and incomplete annotations of object parts and is shown to benefit from semi-supervised learning setups where part-level annotation is provided for a fraction of positive examples only. Experimental results are reported for the detection of six animal classes in PASCAL VOC 2007 and 2010 datasets. We demonstrate significant improvements in detection performance compared to the LSVM [1] and the Poselet [3] object detectors.", "title": "" }, { "docid": "38a5b1d2e064228ec498cf64d29d80e5", "text": "Model-free deep reinforcement learning (RL) algorithms have been successfully applied to a range of challenging sequential decision making and control tasks. However, these methods typically suffer from two major challenges: high sample complexity and brittleness to hyperparameters. Both of these challenges limit the applicability of such methods to real-world domains. In this paper, we describe Soft Actor-Critic (SAC), our recently introduced off-policy actor-critic algorithm based on the maximum entropy RL framework. In this framework, the actor aims to simultaneously maximize expected return and entropy. That is, to succeed at the task while acting as randomly as possible. We extend SAC to incorporate a number of modifications that accelerate training and improve stability with respect to the hyperparameters, including a constrained formulation that automatically tunes the temperature hyperparameter. We systematically evaluate SAC on a range of benchmark tasks, as well as real-world challenging tasks such as locomotion for a quadrupedal robot and robotic manipulation with a dexterous hand. With these improvements, SAC achieves state-of-the-art performance, outperforming prior on-policy and off-policy methods in sample-efficiency and asymptotic performance. Furthermore, we demonstrate that, in contrast to other off-policy algorithms, our approach is very stable, achieving similar performance across different random seeds. These results suggest that SAC is a promising candidate for learning in real-world robotics tasks.", "title": "" }, { "docid": "79c5513abeb58c8735f823258f0bd3e7", "text": "Putting feelings into words (affect labeling) has long been thought to help manage negative emotional experiences; however, the mechanisms by which affect labeling produces this benefit remain largely unknown. Recent neuroimaging studies suggest a possible neurocognitive pathway for this process, but methodological limitations of previous studies have prevented strong inferences from being drawn. A functional magnetic resonance imaging study of affect labeling was conducted to remedy these limitations. The results indicated that affect labeling, relative to other forms of encoding, diminished the response of the amygdala and other limbic regions to negative emotional images. Additionally, affect labeling produced increased activity in a single brain region, right ventrolateral prefrontal cortex (RVLPFC). Finally, RVLPFC and amygdala activity during affect labeling were inversely correlated, a relationship that was mediated by activity in medial prefrontal cortex (MPFC). These results suggest that affect labeling may diminish emotional reactivity along a pathway from RVLPFC to MPFC to the amygdala.", "title": "" }, { "docid": "3580abbef7daf44d743b0175b2eda509", "text": "Cloud-based software applications are designed to change often and rapidly during operations to provide constant quality of service. 
As a result the boundary between development and operations is becoming increasingly blurred. DevOps provides a set of practices for the integrated consideration of developing and operating software. Software architecture is a central artifact in DevOps practices. Existing architectural models used in the development phase differ from those used in the operation phase in terms of purpose, abstraction, and content. In this chapter, we present the iObserve approach to address these differences and allow for phase-spanning usage of architectural models.", "title": "" }, { "docid": "3faeedfe2473dc837ab0db9eb4aefc4b", "text": "The spacing effect—that is, the benefit of spacing learning events apart rather than massing them together—has been demonstrated in hundreds of experiments, but is not well known to educators or learners. I investigated the spacing effect in the realistic context of flashcard use. Learners often divide flashcards into relatively small stacks, but compared to a large stack, small stacks decrease the spacing between study trials. In three experiments, participants used a web-based study programme to learn GRE-type word pairs. Studying one large stack of flashcards (i.e. spacing) was more effective than studying four smaller stacks of flashcards separately (i.e. massing). Spacing was also more effective than cramming—that is, massing study on the last day before the test. Across experiments, spacing was more effective than massing for 90% of the participants, yet after the first study session, 72% of the participants believed that massing had been more effective than spacing. Copyright # 2009 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "597c3e1762b0eb8558b72963f25d4b27", "text": "Animals are widespread in nature and the analysis of their shape and motion is important in many fields and industries. Modeling 3D animal shape, however, is difficult because the 3D scanning methods used to capture human shape are not applicable to wild animals or natural settings. Consequently, we propose a method to capture the detailed 3D shape of animals from images alone. The articulated and deformable nature of animals makes this problem extremely challenging, particularly in unconstrained environments with moving and uncalibrated cameras. To make this possible, we use a strong prior model of articulated animal shape that we fit to the image data. We then deform the animal shape in a canonical reference pose such that it matches image evidence when articulated and projected into multiple images. Our method extracts significantly more 3D shape detail than previous methods and is able to model new species, including the shape of an extinct animal, using only a few video frames. Additionally, the projected 3D shapes are accurate enough to facilitate the extraction of a realistic texture map from multiple frames.", "title": "" }, { "docid": "de48850e635e5a15f8574a0022cbb1e5", "text": "People use various social media for different purposes. The information on an individual site is often incomplete. When sources of complementary information are integrated, a better profile of a user can be built to improve online services such as verifying online information. To integrate these sources of information, it is necessary to identify individuals across social media sites. This paper aims to address the cross-media user identification problem. We introduce a methodology (MOBIUS) for finding a mapping among identities of individuals across social media sites. 
It consists of three key components: the first component identifies users' unique behavioral patterns that lead to information redundancies across sites; the second component constructs features that exploit information redundancies due to these behavioral patterns; and the third component employs machine learning for effective user identification. We formally define the cross-media user identification problem and show that MOBIUS is effective in identifying users across social media sites. This study paves the way for analysis and mining across social media sites, and facilitates the creation of novel online services across sites.", "title": "" }, { "docid": "15b26ceb3a81f4af6233ab8a36f66d3f", "text": "The number of web images has been explosively growing due to the development of network and storage technology. These images make up a large amount of current multimedia data and are closely related to our daily life. To efficiently browse, retrieve and organize the web images, numerous approaches have been proposed. Since the semantic concepts of the images can be indicated by label information, automatic image annotation becomes one effective technique for image management tasks. Most existing annotation methods use image features that are often noisy and redundant. Hence, feature selection can be exploited for a more precise and compact representation of the images, thus improving the annotation performance. In this paper, we propose a novel feature selection method and apply it to automatic image annotation. There are two appealing properties of our method. First, it can jointly select the most relevant features from all the data points by using a sparsity-based model. Second, it can uncover the shared subspace of original features, which is beneficial for multi-label learning. To solve the objective function of our method, we propose an efficient iterative algorithm. Extensive experiments are performed on large image databases that are collected from the web. The experimental results together with the theoretical analysis have validated the effectiveness of our method for feature selection, thus demonstrating its feasibility of being applied to web image annotation.", "title": "" }, { "docid": "ebe91d4e3559439af5dd729e7321883d", "text": "Performance of data analytics in Internet of Things (IoTs) depends on effective transport services offered by the underlying network. Fog computing enables independent data-plane computational features at the edge-switches, which serves as a platform for performing certain critical analytics required at the IoT source. To this end, in this paper, we implement a working prototype of Fog computing node based on Software-Defined Networking (SDN). Message Queuing Telemetry Transport (MQTT) is chosen as the candidate IoT protocol that transports data generated from IoT devices (a:k:a: MQTT publishers) to a remote host (called MQTT broker). We implement the MQTT broker functionalities integrated at the edge-switches, that serves as a platform to perform simple message-based analytics at the switches, and also deliver messages in a reliable manner to the end-host for post-delivery analytics. We mathematically validate the improved delivery performance as offered by the proposed switch-embedded brokers.", "title": "" }, { "docid": "820f67fa3521ee4af7da0e022a8d0be3", "text": "The visual appearance of rain is highly complex. Unlike the particles that cause other weather conditions such as haze and fog, rain drops are large and visible to the naked eye. 
Each drop refracts and reflects both scene radiance and environmental illumination towards an observer. As a result, a spatially distributed ensemble of drops moving at high velocities (rain) produces complex spatial and temporal intensity fluctuations in images and videos. To analyze the effects of rain, it is essential to understand the visual appearance of a single rain drop. In this paper, we develop geometric and photometric models for the refraction through, and reflection (both specular and internal) from, a rain drop. Our geometric and photometric models show that each rain drop behaves like a wide-angle lens that redirects light from a large field of view towards the observer. From this, we observe that in spite of being a transparent object, the brightness of the drop does not depend strongly on the brightness of the background. Our models provide the fundamental tools to analyze the complex effects of rain. Thus, we believe our work has implications for vision in bad weather as well as for efficient rendering of rain in computer graphics.", "title": "" }, { "docid": "6681faaf76fe5637f1af7eeb383181c2", "text": "There are many methods for detecting and mitigating software errors but few generic methods for automatically repairing errors once they are discovered. This paper highlights recent work combining program analysis methods with evolutionary computation to automatically repair bugs in off-the-shelf legacy C programs. The method takes as input the buggy C source code, a failed test case that demonstrates the bug, and a small number of other test cases that encode the required functionality of the program. The repair procedure does not rely on formal specifications, making it applicable to a wide range of extant software for which formal specifications rarely exist.", "title": "" } ]
scidocsrr
f530fbbd1d9b73d451b2b1b5dc9282d8
Applying Quantitative Marketing Techniques to the Internet
[ { "docid": "1e18be7d7e121aa899c96cbcf5ea906b", "text": "Internet-based technologies such as micropayments increasingly enable the sale and delivery of small units of information. This paper draws attention to the opposite strategy of bundling a large number of information goods, such as those increasingly available on the Internet, for a fixed price that does not depend on how many goods are actually used by the buyer. We analyze the optimal bundling strategies for a multiproduct monopolist, and we find that bundling very large numbers of unrelated information goods can be surprisingly profitable. The reason is that the law of large numbers makes it much easier to predict consumers' valuations for a bundle of goods than their valuations for the individual goods when sold separately. As a result, this \"predictive value of bundling\" makes it possible to achieve greater sales, greater economic efficiency and greater profits per good from a bundle of information goods than can be attained when the same goods are sold separately. Our results do not extend to most physical goods, as the marginal costs of production typically negate any benefits from the predictive value of bundling. While determining optimal bundling strategies for more than two goods is a notoriously difficult problem, we use statistical techniques to provide strong asymptotic results and bounds on profits for bundles of any arbitrary size. We show how our model can be used to analyze the bundling of complements and substitutes, bundling in the presence of budget constraints and bundling of goods with various types of correlations. We find that when different market segments of consumers differ systematically in their valuations for goods, simple bundling will no longer be optimal. However, by offering a menu of different bundles aimed at each market segment, a monopolist can generally earn substantially higher profits than would be possible without bundling. The predictions of our analysis appear to be consistent with empirical observations of the markets for Internet and on-line content, cable television programming, and copyrighted music. ________________________________________ We thank Timothy Bresnahan, Hung-Ken Chien, Frank Fisher, Michael Harrison, Paul Kleindorfer, Thomas Malone, Robert Pindyck, Nancy Rose, Richard Schmalensee, John Tsitsiklis, Hal Varian, Albert Wenger, Birger Wernerfelt, four anonymous reviewers and seminar participants at the University of California at Berkeley, MIT, New York University, Stanford University, University of Rochester, the Wharton School, the 1995 Workshop on Information Systems and Economics and the 1998 Workshop on Marketing Science and the Internet for many helpful suggestions. Any errors that remain are only our responsibility. BUNDLING INFORMATION GOODS Page 1", "title": "" }, { "docid": "f7562e0540e65fdfdd5738d559b4aad1", "text": "An important aspect of marketing practice is the targeting of consumer segments for differential promotional activity. The premise of this activity is that there exist distinct segments of homogeneous consumers who can be identified by readily available demographic information. The increased availability of individual consumer panel data open the possibility of direct targeting of individual households. The goal of this paper is to assess the information content of various information sets available for direct marketing purposes. 
Information on the consumer is obtained from the current and past purchase history as well as demographic characteristics. We consider the situation in which the marketer may have access to a reasonably long purchase history which includes both the products purchased and information on the causal environment. Short of this complete purchase history, we also consider more limited information sets which consist of only the current purchase occasion or only information on past product choice without causal variables. Proper evaluation of this information requires a flexible model of heterogeneity which can accommodate observable and unobservable heterogeneity as well as produce household level inferences for targeting purposes. We develop new econometric methods to implement a random coefficient choice model in which the heterogeneity distribution is related to observable demographics. We couple this approach to modeling heterogeneity with a target couponing problem in which coupons are customized to specific households on the basis of various information sets. The couponing problem allows us to place a monetary value on the information sets. Our results indicate there exists a tremendous potential for improving the profitability of direct marketing efforts by more fully utilizing household purchase histories. Even rather short purchase histories can produce a net gain in revenue from target couponing which is 2.5 times the gain from blanket couponing. The most popular current electronic couponing trigger strategy uses only one observation to customize the delivery of coupons. Surprisingly, even the information contained in observing one purchase occasion boosts net couponing revenue by 50% more than that which would be gained by the blanket strategy. This result, coupled with increased competitive pressures, will force targeted marketing strategies to become much more prevalent in the future than they are today. (Target Marketing; Coupons; Heterogeneity; Bayesian Hierarchical Models)", "title": "" } ]
[ { "docid": "3e62ac4e3476cc2999808f0a43a24507", "text": "We present a detailed description of a new Bioconductor package, phyloseq, for integrated data and analysis of taxonomically-clustered phylogenetic sequencing data in conjunction with related data types. The phyloseq package integrates abundance data, phylogenetic information and covariates so that exploratory transformations, plots, and confirmatory testing and diagnostic plots can be carried out seamlessly. The package is built following the S4 object-oriented framework of the R language so that once the data have been input the user can easily transform, plot and analyze the data. We present some examples that highlight the methods and the ease with which we can leverage existing packages.", "title": "" }, { "docid": "56444dce712e313c0c014a260f97a6b3", "text": "Ecology and historical (phylogeny-based) biogeography have much to offer one another, but exchanges between these fields have been limited. Historical biogeography has become narrowly focused on using phylogenies to discover the history of geological connections among regions. Conversely, ecologists often ignore historical biogeography, even when its input can be crucial. Both historical biogeographers and ecologists have more-or-less abandoned attempts to understand the processes that determine the large-scale distribution of clades. Here, we describe the chasm that has developed between ecology and historical biogeography, some of the important questions that have fallen into it and how it might be bridged. To illustrate the benefits of an integrated approach, we expand on a model that can help explain the latitudinal gradient of species richness.", "title": "" }, { "docid": "d13ddbafa8f0774aec3bf0f491b89c0c", "text": "Dust explosions always claim lives and cause huge financial losses. Dust explosion risk can be prevented by inherently safer design or mitigated by engineering protective system. Design of explosion prevention and protection needs comprehensive knowledge and data on the process, workshop, equipment, and combustible materials. The knowledge includes standards, expertise of experts, and practical experience. The database includes accidents, dust explosion characteristics, inherently safer design methods, and protective design methods. Integration of such a comprehensive knowledge system is very helpful. The developed system has the following functions: risk assessment, accident analysis, recommendation of prevention and protection solution, and computer aided design of explosion protection. The software was based on Browser/Server architecture and was developed using mixed programming of ASP.Net and Prolog. The developed expert system can be an assistant to explosion design engineers and safety engineers of combustible dust handling plants.", "title": "" }, { "docid": "7b7c418cefcd571b03e5c0a002a5e923", "text": "A loop antenna having a gap has been investigated in the presence of a ground plane. The antenna configuration is optimized for the CP radiation, using the method of moments. It is found that, as the loop height above the ground plane is reduced, the optimized gap width approaches zero. Further antenna height reduction is found to be possible for an antenna whose wire radius is increased. On the basis of these results, we design an open-loop array antenna using a microstrip comb line as the feed network. It is demonstrated that an array antenna composed of eight open loop elements can radiate a CP wave with an axial ratio of 0.1 dB. 
The bandwidth for a 3-dB axial-ratio criterion is 4%, where the gain is almost constant at 15 dBi.", "title": "" }, { "docid": "d2c021f8d8eecfab43af79585823f407", "text": "Swallowing and feeding disorders (dysphagia) have high incidence and prevalence in children and adults with developmental disability. Standardized screening and clinical assessments are needed to identify and describe the disorder. The aim of this study was to describe the psychometric properties of the Dysphagia Disorder Survey (DDS), a screening and clinical assessment of swallowing and feeding function for eating and drinking developed specifically for this population. The statistical analysis was performed on a sample of 654 individuals (age range 8-82) with intellectual and developmental disability living in two residential settings in the United States that served somewhat different populations. The two samples had similar factor structures. Internal consistency of the DDS and subscales was confirmed using Cronbach's coefficient alpha. The DDS demonstrated convergent validity when compared to judgments of swallowing and feeding disorder severity made by clinical swallowing specialists. Discriminative validity for severity of disorder was tested by comparing the two samples. The results of the study suggest that the DDS is a reliable and valid test for identifying and describing swallowing and feeding disorder in children and adults with developmental disability.", "title": "" }, { "docid": "78d7c61f7ca169a05e9ae1393712cd69", "text": "Designing an automatic solver for math word problems has been considered as a crucial step towards general AI, with the ability of natural language understanding and logical inference. The state-of-the-art performance was achieved by enumerating all the possible expressions from the quantities in the text and customizing a scoring function to identify the one with the maximum probability. However, it incurs exponential search space with the number of quantities and beam search has to be applied to trade accuracy for efficiency. In this paper, we make the first attempt of applying deep reinforcement learning to solve arithmetic word problems. The motivation is that deep Q-network has witnessed success in solving various problems with big search space and achieves promising performance in terms of both accuracy and running time. To fit the math problem scenario, we propose our MathDQN that is customized from the general deep reinforcement learning framework. Technically, we design the states, actions, reward function, together with a feed-forward neural network as the deep Q-network. Extensive experimental results validate our superiority over state-of-the-art methods. Our MathDQN yields remarkable improvement on most of the datasets and boosts the average precision among all the benchmark datasets by 15%.", "title": "" }, { "docid": "c53f8e3d8ca800284ce22748d7afde59", "text": "With the expansion of software scale, effective approaches for automatic vulnerability mining are badly needed. This paper presents a novel approach which can generate test cases of high pertinence and reachability. Unlike standard fuzzing techniques which explore the test space blindly, our approach utilizes abstract interpretation based on intervals to locate the Frail-Points of a program which may cause buffer over-flow in some special conditions and the technique of static taint trace to build mappings between the Frail-Points and program inputs.
Moreover, acquire path constraints of each Frail-Point through symbolic execution. Finally, combine information of mappings and path constraints to propose a policy for guiding test case generation.", "title": "" }, { "docid": "48f06ed96714c2970550fef88d21d517", "text": "Support vector machines (SVMs) are becoming popular in a wide variety of biological applications. But, what exactly are SVMs and how do they work? And what are their most promising applications in the life sciences?", "title": "" }, { "docid": "1f1158ad55dc8a494d9350c5a5aab2f2", "text": "Individuals display a mathematics disability when their performance on standardized calculation tests or on numerical reasoning tasks is comparatively low, given their age, education and intellectual reasoning ability. Low performance due to cerebral trauma is called acquired dyscalculia. Mathematical learning difficulties with similar features but without evidence of cerebral trauma are referred to as developmental dyscalculia. This review identifies types of developmental dyscalculia, the neuropsychological processes that are linked with them and procedures for identifying dyscalculia. The concept of dyslexia is one with which professionals working in the areas of special education, learning disabilities are reasonably familiar. The concept of dyscalculia, on the other hand, is less well known. This article describes this condition and examines its implications for understanding mathematics learning disabilities. Individuals display a mathematics disability when their performance on standardized calculation tests or on numerical reasoning tasks is significantly depressed, given their age, education and intellectual reasoning ability ( Mental Disorders IV (DSM IV)). When this loss of ability to calculate is due to cerebral trauma, the condition is called acalculia or acquired dyscalculia. Mathematical learning difficulties that share features with acquired dyscalculia but without evidence of cerebral trauma are referred to as developmental dyscalculia (Hughes, Kolstad & Briggs, 1994). The focus of this review is on developmental dyscalculia (DD). Students who show DD have difficulty recalling number facts and completing numerical calculations. They also show chronic difficulties with numerical processing skills such recognizing number symbols, writing numbers or naming written numerals and applying procedures correctly (Gordon, 1992). They may have low self efficacy and selective attentional difficulties (Gross Tsur, Auerbach, Manor & Shalev, 1996). Not all students who display low mathematics achievement have DD. Mathematics underachievement can be due to a range of causes, for example, lack of motivation or interest in learning mathematics, low self efficacy, high anxiety, inappropriate earlier teaching or poor school attendance. It can also be due to generalised poor learning capacity, immature general ability, severe language disorders or sensory processing. Underachievement due to DD has a neuropsychological foundation. The students lack particular cognitive or information processing strategies necessary for acquiring and using arithmetic knowledge. They can learn successfully in most contexts and have relevant general language and sensory processing. They also have access to a curriculum from which their peers learn successfully. It is also necessary to clarify the relationship between DD and reading disabilities. Some aspects of both literacy and arithmetic learning draw on the same cognitive processes. 
", "title": "" }, { "docid": "231f3a7d6ee769432c37b87df6f45c15", "text": "Common variable immunodeficiency (CVID) is the most common severe adult primary immunodeficiency and is characterized by a failure to produce antibodies leading to recurrent predominantly sinopulmonary infections. Improvements in the prevention and treatment of infection with immunoglobulin replacement and antibiotics have resulted in malignancy, autoimmune, inflammatory and lymphoproliferative disorders emerging as major clinical challenges in the management of patients who have CVID. In a proportion of CVID patients, inflammation manifests as granulomas that frequently involve the lungs, lymph nodes, spleen and liver and may affect almost any organ. Granulomatous lymphocytic interstitial lung disease (GLILD) is associated with a worse outcome. Its underlying pathogenic mechanisms are poorly understood and there is limited evidence to inform how best to monitor, treat or select patients to treat. We describe the use of combined 2-[(18)F]-fluoro-2-deoxy-d-glucose positron emission tomography and computed tomography (FDG PET-CT) scanning for the assessment and monitoring of response to treatment in a patient with GLILD. This enabled a synergistic combination of functional and anatomical imaging in GLILD and demonstrated a widespread and high level of metabolic activity in the lungs and lymph nodes. Following treatment with rituximab and mycophenolate there was almost complete resolution of the previously identified high metabolic activity alongside significant normalization in lymph node size and lung architecture. The results support the view that GLILD represents one facet of a multi-systemic metabolically highly active lymphoproliferative disorder and suggests potential utility of this imaging modality in this subset of patients with CVID.", "title": "" }, { "docid": "8014c32fa820e1e2c54e1004b62dc33e", "text": "Signature-based malicious code detection is the standard technique in all commercial anti-virus software. This method can detect a virus only after the virus has appeared and caused damage. Signature-based detection performs poorly when attempting to identify new viruses. Motivated by the standard signature-based technique for detecting viruses, and a recent successful text classification method, n-grams analysis, we explore the idea of automatically detecting new malicious code. We employ n-grams analysis to automatically generate signatures from malicious and benign software collections. The n-grams-based signatures are capable of classifying unseen benign and malicious code. The datasets used are large compared to earlier applications of n-grams analysis.", "title": "" }, { "docid": "8fe823702191b4a56defaceee7d19db6", "text": "We propose a method of stacking multiple long short-term memory (LSTM) layers for modeling sentences. In contrast to the conventional stacked LSTMs where only hidden states are fed as input to the next layer, our architecture accepts both hidden and memory cell states of the preceding layer and fuses information from the left and the lower context using the soft gating mechanism of LSTMs. Thus the proposed stacked LSTM architecture modulates the amount of information to be delivered not only in horizontal recurrence but also in vertical connections, from which useful features extracted from lower layers are effectively conveyed to upper layers.
We dub this architecture Cell-aware Stacked LSTM (CAS-LSTM) and show from experiments that our models achieve state-of-the-art results on benchmark datasets for natural language inference, paraphrase detection, and sentiment classification.", "title": "" }, { "docid": "1a3b49298f6217cc8600e00886751f7f", "text": "A person's language use reveals much about the person's social identity, which is based on the social categories a person belongs to including age and gender. We discuss the development of TweetGenie, a computer program that predicts the age of Twitter users based on their language use. We explore age prediction in three different ways: classifying users into age categories, by life stages, and predicting their exact age. An automatic system achieves better performance than humans on these tasks. Both humans and the automatic systems tend to underpredict the age of older people. We find that most linguistic changes occur when people are young, and that after around 30 years the studied variables show little change, making it difficult to predict the ages of older Twitter users.", "title": "" }, { "docid": "b5feea2a9ef2ed18182964acd83cdaee", "text": "We consider the problem of learning general-purpose, paraphrastic sentence embeddings, revisiting the setting of Wieting et al. (2016b). While they found LSTM recurrent networks to underperform word averaging, we present several developments that together produce the opposite conclusion. These include training on sentence pairs rather than phrase pairs, averaging states to represent sequences, and regularizing aggressively. These improve LSTMs in both transfer learning and supervised settings. We also introduce a new recurrent architecture, the GATED RECURRENT AVERAGING NETWORK, that is inspired by averaging and LSTMs while outperforming them both. We analyze our learned models, finding evidence of preferences for particular parts of speech and dependency relations. 1", "title": "" }, { "docid": "8ead349d8495e3927df3f46a43b67ea4", "text": "146 women and 44 men (out- and inpatients; treatment sample) with Seasonal Affective Disorder (SAD; winter type) were tested for gender differences in demographic, clinical and seasonal characteristics. Sex ratio in prevalence was (women : men) 3.6 : 1 in unipolar depressives and 2.4 : 1 in bipolars (I and II). Sex ratios varied also between different birth cohorts and men seemed to underreport symptoms. There was no significant difference in symptom-profiles in both genders, however a preponderance of increased eating and different food selection on a trend level occured in women. The female group suffered significantly more often from thyroid disorders and from greater mood variations because of dark and cloudy weather. Women referred themselves to our clinic significantly more frequently as compared to men. In summary gender differences in SAD were similar to those of non-seasonal depression: the extent of gender differences in the prevalence of affective disorders appears to depend on case criteria such as diagnosis (unipolar vs. bipolar), birth cohort and number of symptoms as minimum threshold for diagnosis. We support the idea of applying sex-specific diagnostic criteria for diagnosing depression on the basis of our data and of the literature.", "title": "" }, { "docid": "40479536efec6311cd735f2bd34605d7", "text": "The vast quantity of information brought by big data as well as the evolving computer hardware encourages success stories in the machine learning community. 
Meanwhile, it poses challenges for the Gaussian process (GP), a well-known non-parametric and interpretable Bayesian model, which suffers from cubic complexity with respect to the training size. To improve the scalability while retaining the desirable prediction quality, a variety of scalable GPs have been presented. But they have not yet been comprehensively reviewed and discussed in a unifying way in order to be well understood by both academia and industry. To this end, this paper is devoted to reviewing state-of-the-art scalable GPs involving two main categories: global approximations which distill the entire data and local approximations which divide the data for subspace learning. Particularly, for global approximations, we mainly focus on sparse approximations comprising prior approximations which modify the prior but perform exact inference, and posterior approximations which retain exact prior but perform approximate inference; for local approximations, we highlight the mixture/product of experts that conducts model averaging from multiple local experts to boost predictions. To present a complete review, recent advances for improving the scalability and model capability of scalable GPs are reviewed. Finally, the extensions and open issues regarding the implementation of scalable GPs in various scenarios are reviewed and discussed to inspire novel ideas for future research avenues.", "title": "" }, { "docid": "ae534b0d19b95dcee87f06ed279fc716", "text": "In this paper, a comparative study of p-type and n-type solar cells is described using two popular solar cell analysis software packages, AFORS-HET and PC1D. We use a SiNx layer as an antireflection coating and a passivated layer of Al2O3. The variation of reflection, absorption, I-V characteristics, and internal and external quantum efficiency has been studied by changing the thickness of the passivated layer and ARC layer, and the front and back surface recombination velocities. The same analysis is carried out by imposing a surface charge at the front of the n-type solar cell, and we obtain 20.13%-20.15% conversion efficiency.", "title": "" }, { "docid": "f282a0e666a2b2f3f323870fc07217bd", "text": "The cultivation of pepper has great importance in all regions of Brazil, due to its characteristics of profitability, especially when the producer and processing industry add value to the product, or its social importance because it employs large numbers of skilled labor. Peppers require monthly temperatures ranging between 21 and 30 °C, with an average of 18 °C. At low temperatures, there is a decrease in germination, wilting of young parts, and slow growth. Plants require an adequate level of nitrogen, favoring plant and fruit growth. Most of the cultivars require large spacing for adequate growth due to the canopy of the plants. Proper insect, disease, and weed control prolongs the harvest of fruits for longer periods, reducing losses. The crop cycle and harvest period are directly affected by weather conditions, incidence of pests and diseases, and cultural practices including adequate fertilization, irrigation, and adoption of phytosanitary control measures. In general, for most cultivars, the first harvest starts 90 days after sowing, which can be prolonged for a couple of months depending on the plant physiological condition.", "title": "" }, { "docid": "3d8df2c8fcbdc994007104b8d21d7a06", "text": "The purpose of this research was to analyze the efficiency of global strategies. This paper identified six key strategies necessary for firms to be successful when expanding globally.
These strategies include differentiation, marketing, distribution, collaborative strategies, labor and management strategies, and diversification. Within this analysis, we chose to focus on the Coca-Cola Company because they have proven successful in their international operations and are one of the most recognized brands in the world. We performed an in-depth review of how effectively or ineffectively Coca-Cola has used each of the six strategies. The paper focused on Coca-Cola's operations in the United States, China, Belarus, Peru, and Morocco. The author used electronic journals from the various countries to determine how effective Coca-Cola was in these countries. The paper revealed that Coca-Cola was very successful in implementing strategies regardless of the country. However, the author learned that Coca-Cola did not effectively utilize all of the strategies in each country.", "title": "" }, { "docid": "a7336b4e1ba0846f45f6757b121a7d33", "text": "Recently, concerns have been raised that residues of glyphosate-based herbicides may interfere with the homeostasis of the intestinal bacterial community and thereby affect the health of humans or animals. The biochemical pathway for aromatic amino acid synthesis (Shikimate pathway), which is specifically inhibited by glyphosate, is shared by plants and numerous bacterial species. Several in vitro studies have shown that various groups of intestinal bacteria may be differently affected by glyphosate. Here, we present results from an animal exposure trial combining deep 16S rRNA gene sequencing of the bacterial community with liquid chromatography mass spectrometry (LC-MS) based metabolic profiling of aromatic amino acids and their downstream metabolites. We found that glyphosate as well as the commercial formulation Glyfonova®450 PLUS administered at up to fifty times the established European Acceptable Daily Intake (ADI = 0.5 mg/kg body weight) had very limited effects on bacterial community composition in Sprague Dawley rats during a two-week exposure trial. The effect of glyphosate on prototrophic bacterial growth was highly dependent on the availability of aromatic amino acids, suggesting that the observed limited effect on bacterial composition was due to the presence of sufficient amounts of aromatic amino acids in the intestinal environment. A strong correlation was observed between intestinal concentrations of glyphosate and intestinal pH, which may partly be explained by an observed reduction in acetic acid produced by the gut bacteria. We conclude that sufficient intestinal levels of aromatic amino acids provided by the diet alleviates the need for bacterial synthesis of aromatic amino acids and thus prevents an antimicrobial effect of glyphosate in vivo. It is however possible that the situation is different in cases of human malnutrition or in production animals.", "title": "" } ]
scidocsrr
6e21c57cdb1999e9ac735cb05d7484b2
HARMONIC MAASS FORMS, MOCK MODULAR FORMS, AND QUANTUM MODULAR FORMS
[ { "docid": "4dcc069e33f2831c7ccdd719c51607e1", "text": "We survey the progress that has been made on the arithmetic of elliptic curves in the past twenty-five years, with particular attention to the questions highlighted in Tate’s 1974 Inventiones paper.", "title": "" } ]
[ { "docid": "8733daeee2dd85345ce115cb1366f4b2", "text": "We propose an interactive model, RuleViz, for visualizing the entire process of knowledge discovery and data mining. The model consists of five components according to the main ingredients of the knowledge discovery process: original data visualization, visual data reduction, visual data preprocess, visual rule discovery, and rule visualization. The RuleViz model for visualizing the process of knowledge discovery is introduced and each component is discussed. Two aspects are emphasized, human-machine interaction and process visualization. The interaction helps the KDD system navigate through the enormous search spaces and recognize the intentions of the user, and the visualization of the KDD process helps users gain better insight into the multidimensional data, understand the intermediate results, and interpret the discovered patterns. According to the RuleViz model, we implement an interactive system, CViz, which exploits the \"parallel coordinates\" technique to visualize the process of rule induction. The original data is visualized on the parallel coordinates, and can be interactively reduced both horizontally and vertically. Three approaches for discretizing numerical attributes are provided in the visual data preprocessing. CViz learns classification rules on the basis of a rule induction algorithm and presents the result as the algorithm proceeds. The discovered rules are finally visualized on the parallel coordinates with each rule being displayed as a directed \"polygon\", and the rule accuracy and quality are used to render the \"polygons\" and control the choice of rules to be displayed to avoid clutter. The CViz system has been experimented with the UCI data sets and synthesis data sets, and the results demonstrate that the RuleViz model and the implemented visualization system are useful and helpful for understanding the process of knowledge discovery and interpreting the final results.", "title": "" }, { "docid": "0188eb4ef8a87b6cee8657018360fa69", "text": "This paper presents a pattern division multiple access (PDMA) concept for cellular future radio access (FRA) towards the 2020s information society. Different from the current LTE radio access scheme (until Release 11), PDMA is a novel non-orthogonal multiple access technology based on the total optimization of multiple user communication system. It considers joint design from both transmitter and receiver. At the receiver, multiple users are detected by successive interference cancellation (SIC) detection method. Numerical results show that the PDMA system based on SIC improve the average sum rate of users over the orthogonal system with affordable complexity.", "title": "" }, { "docid": "40b5464ed1fae3e624d876aa819ec412", "text": "Standards of penile and clitoral sizes are useful for diagnosis of genital abnormalities. In order to verify whether ethnicity has an effect on the size of external genitalia in newborns, 570 full term infants, Jews (221) and Bedouins (349), at the neonatal department of the Soroka Medical Center were examined. Clitoral length, the distance between the center of the anus to the fourchette (AF) and the distance between the center of the anus and the base of the clitoris (AC) were measured, and the AF/AC ratio was calculated for the females. Penile length was measured in the males.
Significant differences in clitoral length (12.6%) between the Jewish group (5.87 +/- 1.48 mm) and the Bedouin group (6.61 +/- 1.72 mm) (p < 0.01) and in the ratio of AF to AC between the two ethnic groups (p < 0.01) were found. To the best of our knowledge, our study is the first to report ethnic differences in genital sizes of newborns.", "title": "" }, { "docid": "164a1246119f8e7c230864ac5300da60", "text": "In today's world, social networking platforms such as Instagram, Facebook, Google+, etc., have created a boon in our humanitarian society[1]. Along with these social networking platforms there comes a great responsibility of handling user privacy as well as user data. In most of these websites, data is stored on a centralized system called the server. [1] The whole system crashes if the server goes down. One of the solutions for this problem is to use a decentralized system. Decentralized applications work on Blockchain. A Blockchain is a group of blocks connected sequentially to each other. The blockchains are designed so that transactions remain immutable, i.e. unchanged, which provides security. The data can be distributed and no one can tamper with that data. This paper presents a decentralized social media photo sharing web application which is based on blockchain technology, where the user would be able to view, like, comment on, and share photos shared by different users.", "title": "" }, { "docid": "a009519d1ed930d40db593542e7c3e0d", "text": "With the increasing adoption of NoSQL database systems like MongoDB or CouchDB, more and more applications store structured data according to a non-relational, document oriented model. Exposing this structured data as Linked Data is currently inhibited by a lack of standards as well as tools and requires the implementation of custom solutions. While recent efforts aim at expressing transformations of such data models into RDF in a standardized manner, there is a lack of approaches which facilitate SPARQL execution over mapped non-relational data sources. With SparqlMap-M we show how dynamic SPARQL access to non-relational data can be achieved. SparqlMap-M is an extension to our SPARQL-to-SQL rewriter SparqlMap that performs a (partial) transformation of SPARQL queries by using a relational abstraction over a document store. Further, duplicate data in the document store is used to reduce the number of joins and custom optimizations are introduced. Our showcase scenario employs the Berlin SPARQL Benchmark (BSBM) with different adaptations to a document data model. We use this scenario to demonstrate the viability of our approach and compare it to different MongoDB setups and native SQL.", "title": "" }, { "docid": "1285bd50bb6462b9864d61a59e77435e", "text": "Precision Agriculture is advancing but not as fast as predicted 5 years ago. The development of proper decision-support systems for implementing precision decisions remains a major stumbling block to adoption. Other critical research issues are discussed, namely, insufficient recognition of temporal variation, lack of whole-farm focus, crop quality assessment methods, product tracking and environmental auditing. A generic research programme for precision agriculture is presented.
A typology of agriculture countries is introduced and the potential of each type for precision agriculture discussed.", "title": "" }, { "docid": "a25145ff3cee8f8b3e590e803e651294", "text": "Search personalization aims to tailor search results to each specific user based on the user’s personal interests and preferences (i.e., the user profile). Recent research approaches search personalization by modelling the potential 3-way relationship between the submitted query, the user and the search results (i.e., documents). That relationship is then used to personalize the search results to that user. In this paper, we introduce a novel embedding model based on a capsule network, which is a recent breakthrough in deep learning, to model the 3-way relationships for search personalization. In the model, each user (submitted query or returned document) is embedded by a vector in the same vector space. The 3-way relationship is described as a triple of (query, user, document) which is then modeled as a 3-column matrix containing the three embedding vectors. After that, the 3-column matrix is fed into a deep learning architecture to re-rank the search results returned by a basis ranker. Experimental results on query logs from a commercial web search engine show that our model achieves better performances than the basis ranker as well as strong search personalization baselines.", "title": "" }, { "docid": "5f8ddaa9130446373a9a5d44c17ca604", "text": "Object detection is a crucial task for autonomous driving. In addition to requiring high accuracy to ensure safety, object detection for autonomous driving also requires realtime inference speed to guarantee prompt vehicle control, as well as small model size and energy efficiency to enable embedded system deployment. In this work, we propose SqueezeDet, a fully convolutional neural network for object detection that aims to simultaneously satisfy all of the above constraints. In our network we use convolutional layers not only to extract feature maps, but also as the output layer to compute bounding boxes and class probabilities. The detection pipeline of our model only contains a single forward pass of a neural network, thus it is extremely fast. Our model is fully convolutional, which leads to small model size and better energy efficiency. Finally, our experiments show that our model is very accurate, achieving state-of-the-art accuracy on the KITTI [10] benchmark. The source code of SqueezeDet is open-source released.", "title": "" }, { "docid": "8b5ad6c53d58feefe975e481e2352c52", "text": "Virtual machine (VM) live migration is a critical feature for managing virtualized environments, enabling dynamic load balancing, consolidation for power management, preparation for planned maintenance, and other management features. However, not all virtual machine live migration is created equal. Variants include memory migration, which relies on shared backend storage between the source and destination of the migration, and storage migration, which migrates storage state as well as memory state. We have developed an automated testing framework that measures important performance characteristics of live migration, including total migration time, the time a VM is unresponsive during migration, and the amount of data transferred over the network during migration.
We apply this testing framework and present the results of studying live migration, both memory migration and storage migration, in various virtualization systems including KVM, XenServer, VMware, and Hyper-V. The results provide important data to guide the migration decisions of both system administrators and autonomic cloud management systems.", "title": "" }, { "docid": "699c280c87888a7c2ba9818d80528c9e", "text": "Wireless communication systems that include unmanned aerial vehicles promise to provide cost-effective wireless connectivity for devices without infrastructure coverage. Compared to terrestrial communications or those based on high-altitude platforms, on-demand wireless systems with low-altitude UAVs are in general faster to deploy, more flexibly reconfigured, and likely to have better communication channels due to the presence of short-range line-of-sight links. However, the utilization of highly mobile and energy-constrained UAVs for wireless communications also introduces many new challenges. In this article, we provide an overview of UAV-aided wireless communications, by introducing the basic networking architecture and main channel characteristics, highlighting the key design considerations as well as the new opportunities to be exploited.", "title": "" }, { "docid": "c464a5f086f09d39b15beb3b3fbfec54", "text": "Sweet cherry, a non-climacteric fruit, is usually cold-stored during post-harvest to prevent over-ripening. The aim of the study was to evaluate the role of abscisic acid (ABA) on fruit growth and ripening of this fruit, considering as well its putative implication in over-ripening and effects on quality. We measured the endogenous concentrations of ABA during the ripening of sweet cherries (Prunus avium L. var. Prime Giant) collected from orchard trees and in cherries exposed to 4°C and 23°C during 10 days of post-harvest. Furthermore, we examined to what extent endogenous ABA concentrations were related to quality parameters, such as fruit biomass, anthocyanin accumulation and levels of vitamins C and E. Endogenous concentrations of ABA in fruits increased progressively during fruit growth and ripening on the tree, to decrease later during post-harvest at 23°C. Cold treatment, however, increased ABA levels and led to an inhibition of over-ripening. Furthermore, ABA levels positively correlated with anthocyanin and vitamin E levels during pre-harvest, but not during post-harvest. We conclude that ABA plays a major role in sweet cherry development, stimulating its ripening process and positively influencing quality parameters during pre-harvest. The possible influence of ABA preventing over-ripening in cold-stored sweet cherries is also discussed.", "title": "" }, { "docid": "13ab66e00490becb8904653c87a142e4", "text": "In this paper, we propose an automatic system that recognizes both isolated and continuous gestures for Arabic numbers (0-9) in real-time based on hidden Markov model (HMM). To handle isolated gestures, HMM using ergodic, left-right (LR) and left-right banded (LRB) topologies with different number of states ranging from 3 to 10 is applied. Orientation dynamic features are obtained from spatio-temporal trajectories and then quantized to generate its codewords. The continuous gestures are recognized by our novel idea of zero-codeword detection with static velocity motion. 
Therefore, the LRB topology in conjunction with forward algorithm presents the best performance and achieves average rate recognition 98.94% and 95.7% for isolated and continuous gestures, respectively.", "title": "" }, { "docid": "61282d5ef37e5821a5a856f0bbe26cc2", "text": "Second language teachers are great consumers of grammar. They are mainly interested in pedagogical grammar, but they are generally unaware of the work of theoretical linguists, such as Chomsky and Halliday. Whereas Chomsky himself has never suggested in any way that his work might be of benefit to L2 teaching, Halliday and his many disciples, have. It seems odd that language teachers should choose to ignore the great gurus of grammar. Even if their work is deemed too technical and theoretical for classroom application, it may still shed light on pedagogical grammar and provide a rationale for the way one goes about teaching grammar. In order to make informed decisions about what grammar to teach and how best to teach it, one should take stock of the various schools of grammar that seem to speak in very different voices. In the article, the writer outlines the kinds of grammar that come out of five of these schools, and assesses their usefulness to the L2 teacher.", "title": "" }, { "docid": "59983d5fad254d7ae7de82189e6e9618", "text": "Collections of objects such as images are often presented visually in a grid because it is a compact representation that lends itself well for search and exploration. Most grid layouts are sorted using very basic criteria, such as date or filename. In this work we present a method to arrange collections of objects respecting an arbitrary distance measure. Pairwise distances are preserved as much as possible, while still producing the specific target arrangement which may be a 2D grid, the surface of a sphere, a hierarchy, or any other shape. We show that our method can be used for infographics, collection exploration, summarization, data visualization, and even for solving problems such as where to seat family members at a wedding. We present a fast algorithm that can work on large collections and quantitatively evaluate how well distances are preserved.", "title": "" }, { "docid": "ba57f271fbf1c6c93aa10ac51b760168", "text": "Abstract. In this paper, connected domination in fuzzy graphs using strong arcs is introduced. The strong connected domination number of different classes of fuzzy graphs is obtained. An upper bound for the strong connected domination number of fuzzy graphs is obtained. Strong connected domination in fuzzy trees is studied. It is established that the set of fuzzy cut nodes of a fuzzy tree is a strong connected dominating set. It is proved that in a fuzzy tree each node of a strong connected dominating set is incident on a fuzzy bridge. Also the characteristic properties of the existence of strong connected dominating set for a fuzzy graph and its complement are established.", "title": "" }, { "docid": "3f2aa41468e1d5679a6c12b51a92e810", "text": "G protein-coupled dopamine receptors (D1, D2, D3, D4, and D5) mediate all of the physiological functions of the catecholaminergic neurotransmitter dopamine, ranging from voluntary movement and reward to hormonal regulation and hypertension. 
Pharmacological agents targeting dopaminergic neurotransmission have been clinically used in the management of several neurological and psychiatric disorders, including Parkinson's disease, schizophrenia, bipolar disorder, Huntington's disease, attention deficit hyperactivity disorder (ADHD(1)), and Tourette's syndrome. Numerous advances have occurred in understanding the general structural, biochemical, and functional properties of dopamine receptors that have led to the development of multiple pharmacologically active compounds that directly target dopamine receptors, such as antiparkinson drugs and antipsychotics. Recent progress in understanding the complex biology of dopamine receptor-related signal transduction mechanisms has revealed that, in addition to their primary action on cAMP-mediated signaling, dopamine receptors can act through diverse signaling mechanisms that involve alternative G protein coupling or through G protein-independent mechanisms via interactions with ion channels or proteins that are characteristically implicated in receptor desensitization, such as β-arrestins. One of the future directions in managing dopamine-related pathologic conditions may involve a transition from the approaches that directly affect receptor function to a precise targeting of postreceptor intracellular signaling modalities either directly or through ligand-biased signaling pharmacology. In this comprehensive review, we discuss dopamine receptor classification, their basic structural and genetic organization, their distribution and functions in the brain and the periphery, and their regulation and signal transduction mechanisms. In addition, we discuss the abnormalities of dopamine receptor expression, function, and signaling that are documented in human disorders and the current pharmacology and emerging trends in the development of novel therapeutic agents that act at dopamine receptors and/or on related signaling events.", "title": "" }, { "docid": "420a6ef979920fd6a41706d0dc7386d6", "text": "The research is essentially to modularize the structure of utilities and develop a system for following up the activities electronically on the city scale. The GIS operational platform will be the base for managing the infrastructure development components with the systems interoperability for the available city infrastructure related systems. The concentration will be on the available utility networks in order to develop a comprehensive, common, standardized geospatial data models. The construction operations for the utility networks such as electricity, water, Gas, district cooling, irrigation, sewerage and communication networks; are need to be fully monitored on daily basis, in order to utilize the involved huge resources and man power. These resources are allocated only to convey the operational status for the construction and execution sections that used to do the required maintenance. The need for a system that serving the decision makers for following up these activities with a proper geographical representation will definitely reduce the operational cost for the long term.", "title": "" }, { "docid": "4d2e8924181d129e23f8b51eccd7e1ef", "text": "This paper presents the design, fabrication, and characterization of millimeter-scale rotary electromagnetic generators. The axial-flux synchronous machines consist of a three-phase microfabricated surface-wound copper coil and a multipole permanent-magnet (PM) rotor measuring 2 mm in diameter. 
Several machines with various geometries and numbers of magnetic poles and turns per pole are designed and compared. Moreover, the use of different PM materials is investigated. Multipole magnetic rotors are modeled using finite element analysis to analyze magnetic field distributions. In operation, the rotor is spun above the microfabricated stator coils using an off-the-shelf air-driven turbine. As a result of design choices, the generators present different levels of operating frequency and electrical output power. The four-pole six-turn/pole NdFeB generator exhibits up to 6.6 mWrms of ac electrical power across a resistive load at a rotational speed of 392 000 r/min. This milliwatt-scale power generation indicates the feasibility of such ultrasmall machines for low-power applications. [2008-0078].", "title": "" }, { "docid": "5048a090adfdd3ebe9d9253ca4f72644", "text": "Movement disorders or extrapyramidal symptoms (EPS) associated with selective serotonin reuptake inhibitors (SSRIs) have been reported. Although akathisia was found to be the most common EPS, and fluoxetine was implicated in the majority of the adverse reactions, there were also cases with EPS due to sertraline treatment. We present a child and an adolescent who developed torticollis (cervical dystonia) after using sertraline. To our knowledge, the child case is the first such report of sertraline-induced torticollis, and the adolescent case is the third in the literature.", "title": "" }, { "docid": "661b7615e660ae8e0a3b2a7294b9b921", "text": "In this paper, a very simple solution-based method is employed to coat amorphous MnO2 onto crystalline SnO2 nanowires grown on stainless steel substrate, which utilizes the better electronic conductivity of SnO2 nanowires as the supporting backbone to deposit MnO2 for supercapacitor electrodes. Cyclic voltammetry (CV) and galvanostatic charge/discharge methods have been carried out to study the capacitive properties of the SnO2/MnO2 composites. A specific capacitance (based on MnO2) as high as 637 F g(-1) is obtained at a scan rate of 2 mV s(-1) (800 F g(-1) at a current density of 1 A g(-1)) in 1 M Na2SO4 aqueous solution. The energy density and power density measured at 50 A g(-1) are 35.4 W h kg(-1) and 25 kW kg(-1), respectively, demonstrating the good rate capability. In addition, the SnO2/MnO2 composite electrode shows excellent long-term cyclic stability (less than 1.2% decrease of the specific capacitance is observed after 2000 CV cycles). The temperature-dependent capacitive behavior is also discussed. Such high-performance capacitive behavior indicates that the SnO2/MnO2 composite is a very promising electrode material for fabricating supercapacitors.", "title": "" } ]
scidocsrr
63047c1352896c5356b4c9d89f494d56
Motion-based counter-measures to photo attacks in face recognition
[ { "docid": "ac41f562b640acc26afaf5bd1bc459b9", "text": "In this paper, we use a general hill-climbing attack algorithm based on Bayesian adaptation to test the vulnerability of two face recognition systems to indirect attacks. The attacking technique uses the scores provided by the matcher to adapt a global distribution computed from an independent set of users, to the local specificities of the client being attacked. The proposed attack is evaluated on an Eigenface-based and a Parts-based face verification system using the XM2VTS database. Experimental results demonstrate that the hill-climbing algorithm is very efficient and is able to bypass over 85% of the attacked accounts (for both face recognition systems). The security flaws of the analyzed system are pointed out and possible countermeasures to avoid them are also proposed.", "title": "" }, { "docid": "ae86bbc8d1c489b0ed1a75a5d76ed6e2", "text": "Face recognition is an increasingly popular method for user authentication. However, face recognition is susceptible to playback attacks. Therefore, a reliable way to detect malicious attacks is crucial to the robustness of the system. We propose and validate a novel physics-based method to detect images recaptured from printed material using only a single image. Micro-textures present in printed paper manifest themselves in the specular component of the image. Features extracted from this component allow a linear SVM classifier to achieve 2.2% False Acceptance Rate and 13% False Rejection Rate (6.7% Equal Error Rate). We also show that the classifier can be generalizable to contrast enhanced recaptured images and LCD screen recaptured images without re-training, demonstrating the robustness of our approach.", "title": "" }, { "docid": "a9b20ad74b3a448fbc1555b27c4dcac9", "text": "A new learning algorithm for multilayer feedforward networks, RPROP, is proposed. To overcome the inherent disadvantages of pure gradient-descent, RPROP performs a local adaptation of the weight-updates according to the behaviour of the error function. In substantial difference to other adaptive techniques, the effect of the RPROP adaptation process is not blurred by the unforeseeable influence of the size of the derivative but only dependent on the temporal behaviour of its sign. This leads to an efficient and transparent adaptation process. The promising capabilities of RPROP are shown in comparison to other well-known adaptive techniques.", "title": "" } ]
[ { "docid": "adb64a513ab5ddd1455d93fc4b9337e6", "text": "Domain-invariant representations are key to addressing the domain shift problem where the training and test examples follow different distributions. Existing techniques that have attempted to match the distributions of the source and target domains typically compare these distributions in the original feature space. This space, however, may not be directly suitable for such a comparison, since some of the features may have been distorted by the domain shift, or may be domain specific. In this paper, we introduce a Domain Invariant Projection approach: An unsupervised domain adaptation method that overcomes this issue by extracting the information that is invariant across the source and target domains. More specifically, we learn a projection of the data to a low-dimensional latent space where the distance between the empirical distributions of the source and target examples is minimized. We demonstrate the effectiveness of our approach on the task of visual object recognition and show that it outperforms state-of-the-art methods on a standard domain adaptation benchmark dataset.", "title": "" }, { "docid": "17fd082aeebf148294a51bdefdce4403", "text": "The appearance of Agile methods has been the most noticeable change to software process thinking in the last fifteen years [16], but in fact many of the “Agile ideas” have been around since 70’s or even before. Many studies and reviews have been conducted about Agile methods which ascribe their emergence as a reaction against traditional methods. In this paper, we argue that although Agile methods are new as a whole, they have strong roots in the history of software engineering. In addition to the iterative and incremental approaches that have been in use since 1957 [21], people who criticised the traditional methods suggested alternative approaches which were actually Agile ideas such as the response to change, customer involvement, and working software over documentation. The authors of this paper believe that education about the history of Agile thinking will help to develop better understanding as well as promoting the use of Agile methods. We therefore present and discuss the reasons behind the development and introduction of Agile methods, as a reaction to traditional methods, as a result of people's experience, and in particular focusing on reusing ideas from history.", "title": "" }, { "docid": "d7ce50c1545f0b7233db7413486d6b76", "text": "In this paper, we present an analysis of low complexity signal processing algorithms capable of identifying special noises, such as the sounds of forest machinery (used for forestry, logging). Our objective is to find methods that are able to detect internal combustion engines in rural environment, and are also easy to implement on low power devices of WSNs (wireless sensor networks). In this context, we review different methods for detecting illegal logging, with an emphasis on autocorrelation and TESPAR audio techniques. The processing of extracted audio features is to be solved with limited memory and processor resources typical for low cost sensors modes. The representation of noise models is also considered with different archetypes. Implementations of the proposed methods were tested not by simulations but on sensor nodes equipped with an omnidirectional microphone and a low power microcontroller. 
Our results show that high recognition rate can be achieved using time domain algorithms and highly energy efficient and inexpensive architectures.", "title": "" }, { "docid": "1c4a17cc1d90ae9386678be6c43a5723", "text": "This article provides a comprehensive introduction to the design of the minimally expressive robot KASPAR which is particularly suitable for human-robot interaction studies. A low-cost design with off-the-shelf components has been used in a novel design inspired from a multi-disciplinary viewpoint, including comics design and Japanese Noh theatre. The design rationale of the robot and its technical features are described in detail. Three research studies will be presented that have been using KASPAR extensively. Firstly, we present its application in robot-assisted play and therapy for children with autism. Secondly, we illustrate its use in human-robot interaction studies investigating the role of interaction kinesics and gestures. Lastly, we describe a study in the field of developmental robotics into computational architectures based on interaction histories for robot ontogeny. The three areas differ in the way how the robot is being operated and its role in social interaction scenarios. Each will be introduced briefly and examples of the results are presented. Reflections on the specific design features of KASPAR that were important in these studies and lessons learnt from these studies concerning the design of humanoid robots for social interaction will be discussed. An assessment of the robot in terms of utility of the design for human-robot interaction experiments concludes the paper.", "title": "" }, { "docid": "1315349a48c402398c7c4164c92e95bf", "text": "Over the past years, the computing industry has started various initiatives announced to increase computer security by means of new hardware architectures. The most notable effort is the Trusted Computing Group (TCG) and the Next-Generation Secure Computing Base (NGSCB). This technology offers useful new functionalities as the possibility to verify the integrity of a platform (attestation) or binding quantities on a specific platform (sealing).In this paper, we point out the deficiencies of the attestation and sealing functionalities proposed by the existing specification of the TCG: we show that these mechanisms can be misused to discriminate certain platforms, i.e., their operating systems and consequently the corresponding vendors. A particular problem in this context is that of managing the multitude of possible configurations. Moreover, we highlight other shortcomings related to the attestation, namely system updates and backup. Clearly, the consequences caused by these problems lead to an unsatisfactory situation both for the private and business branch, and to an unbalanced market when such platforms are in wide use.To overcome these problems generally, we propose a completely new approach: the attestation of a platform should not depend on the specific software or/and hardware (configuration) as it is today's practice but only on the \"properties\" that the platform offers. Thus, a property-based attestation should only verify whether these properties are sufficient to fulfill certain (security) requirements of the party who asks for attestation. We propose and discuss a variety of solutions based on the existing Trusted Computing (TC) functionality. 
We also demonstrate how a property-based attestation protocol can be realized based on the existing TC hardware such as a Trusted Platform Module (TPM).", "title": "" }, { "docid": "ad40625ae8500d8724523ae2e663eeae", "text": "The human hand is a masterpiece of mechanical complexity, able to perform fine motor manipulations and powerful work alike. Designing an animatable human hand model that features the abilities of the archetype created by Nature requires a great deal of anatomical detail to be modeled. In this paper, we present a human hand model with underlying anatomical structure. Animation of the hand model is controlled by muscle contraction values. We employ a physically based hybrid muscle model to convert these contraction values into movement of skin and bones. Pseudo muscles directly control the rotation of bones based on anatomical data and mechanical laws, while geometric muscles deform the skin tissue using a mass-spring system. Thus, resulting animations automatically exhibit anatomically and physically correct finger movements and skin deformations. In addition, we present a deformation technique to create individual hand models from photographs. A radial basis warping function is set up from the correspondence of feature points and applied to the complete structure of the reference hand model, making the deformed hand model instantly animatable.", "title": "" }, { "docid": "d71016d17677eeefb7bdfb66e6077885", "text": "Meaningless computer generated scientific texts can be used in several ways. For example, they have allowed Ike Antkare to become one of the most highly cited scientists of the modern world. Such fake publications are also appearing in real scientific conferences and, as a result, in the bibliographic services (Scopus, ISI-Web of Knowledge, Google Scholar,...). Recently, more than 120 papers have been withdrawn from subscription databases of two high-profile publishers, IEEE and Springer, because they were computer generated thanks to the SCIgen software. This software, based on a Probabilistic Context Free Grammar (PCFG), was designed to randomly generate computer science research papers. Together with PCFG, Markov Chains (MC) are the main ways to generate meaningless texts. This paper presents the main characteristics of texts generated by PCFG and MC. For the time being, PCFG generators are quite easy to spot in an automatic way, using intertextual distance combined with automatic clustering, because these generators are behaving like authors with specific features such as a very low vocabulary richness and unusual sentence structures. This shows that quantitative tools are effective to characterize originality (or banality) of authors' language.", "title": "" }, { "docid": "1497c5ce53dec0c2d02981d01a419f4b", "text": "While image registration has been studied in different areas of computer vision, aligning images depicting different scenes remains a challenging problem, closer to recognition than to image matching.
Analogous to optical flow, where an image is aligned to its temporally adjacent frame, we propose SIFT flow, a method to align an image to its neighbors in a large image collection consisting of a variety of scenes. For a query image, histogram intersection on a bag-of-visual-words representation is used to find the set of nearest neighbors in the database. The SIFT flow algorithm then consists of matching densely sampled SIFT features between the two images, while preserving spatial discontinuities. The use of SIFT features allows robust matching across different scene/object appearances and the discontinuity-preserving spatial model allows matching of objects located at different parts of the scene. Experiments show that the proposed approach is able to robustly align complicated scenes with large spatial distortions. We collect a large database of videos and apply the SIFT flow algorithm to two applications: (i) motion field prediction from a single static image and (ii) motion synthesis via transfer of moving objects.", "title": "" }, { "docid": "ab793edc212dc2a537dbcb4ac9736f9f", "text": "Much of the abusive supervision research has focused on the supervisor– subordinate dyad when examining the effects of abusive supervision on employee outcomes. Using data from a large multisource field study, we extend this research by testing a trickle-down model of abusive supervision across 3 hierarchical levels (i.e., managers, supervisors, and employees). Drawing on social learning theory and social information processing theory, we find general support for the study hypotheses. Specifically, we find that abusive manager behavior is positively related to abusive supervisor behavior, which in turn is positively related to work group interpersonal deviance. In addition, hostile climate moderates the relationship between abusive supervisor behavior and work group interpersonal deviance such that the relationship is stronger when hostile climate is high. The results provide support for our trickle-down model in that abusive manager behavior was not only related to abusive supervisor behavior but was also associated with employees’ behavior 2 hierarchical levels below the manager.", "title": "" }, { "docid": "0aa85d4ac0f2034351d5ba690929db19", "text": "The quantity of small scale solar photovoltaic (PV) arrays in the United States has grown rapidly in recent years. As a result, there is substantial interest in high quality information about the quantity, power capacity, and energy generated by such arrays, including at a high spatial resolution (e.g., cities, counties, or other small regions). Unfortunately, existing methods for obtaining this information, such as surveys and utility interconnection filings, are limited in their completeness and spatial resolution. This work presents a computer algorithm that automatically detects PV panels using very high resolution color satellite imagery. The approach potentially offers a fast, scalable method for obtaining accurate information on PV array location and size, and at much higher spatial resolutions than are currently available. The method is validated using a very large (135 km) collection of publicly available (Bradbury et al., 2016) aerial imagery, with over 2700 human annotated PV array locations. The results demonstrate the algorithm is highly effective on a per-pixel basis. It is likewise effective at object-level PV array detection, but with significant potential for improvement in estimating the precise shape/size of the PV arrays. 
These results are the first of their kind for the detection of solar PV in aerial imagery, demonstrating the feasibility of the approach and establishing a baseline performance for future investigations. 2016 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "7a9419f17bcdfd2f6e361bd97d487d9f", "text": "2. Relations 4. Dataset and Evaluation Cause-Effect Smoking causes cancer. Instrument-Agency The murderer used an axe. Product-Producer Bees make honey. Content-Container The cat is in the hat. Entity-Origin Vinegar is made from wine. Entity-Destination The car arrived at the station. Component-Whole The laptop has a fast processor. Member-Collection There are ten cows in the herd. Communication-Topic You interrupted a lecture on maths.  Each example consists of two (base) NPs marked with tags <e1> and <e2>:", "title": "" }, { "docid": "3a723bb57dedaaf473384243fe6e1ab1", "text": "Objective\nWe explored whether use of deep learning to model temporal relations among events in electronic health records (EHRs) would improve model performance in predicting initial diagnosis of heart failure (HF) compared to conventional methods that ignore temporality.\n\n\nMaterials and Methods\nData were from a health system's EHR on 3884 incident HF cases and 28 903 controls, identified as primary care patients, between May 16, 2000, and May 23, 2013. Recurrent neural network (RNN) models using gated recurrent units (GRUs) were adapted to detect relations among time-stamped events (eg, disease diagnosis, medication orders, procedure orders, etc.) with a 12- to 18-month observation window of cases and controls. Model performance metrics were compared to regularized logistic regression, neural network, support vector machine, and K-nearest neighbor classifier approaches.\n\n\nResults\nUsing a 12-month observation window, the area under the curve (AUC) for the RNN model was 0.777, compared to AUCs for logistic regression (0.747), multilayer perceptron (MLP) with 1 hidden layer (0.765), support vector machine (SVM) (0.743), and K-nearest neighbor (KNN) (0.730). When using an 18-month observation window, the AUC for the RNN model increased to 0.883 and was significantly higher than the 0.834 AUC for the best of the baseline methods (MLP).\n\n\nConclusion\nDeep learning models adapted to leverage temporal relations appear to improve performance of models for detection of incident heart failure with a short observation window of 12-18 months.", "title": "" }, { "docid": "c0a75bf3a2d594fb87deb7b9f58a8080", "text": "For WikiText-103 we swept over LSTM hidden sizes {1024, 2048, 4096}, no. LSTM layers {1, 2}, embedding dropout {0, 0.1, 0.2, 0.3}, use of layer norm (Ba et al., 2016b) {True,False}, and whether to share the input/output embedding parameters {True,False} totalling 96 parameters. A single-layer LSTM with 2048 hidden units with tied embedding parameters and an input dropout rate of 0.3 was selected, and we used this same model configuration for the other language corpora. We trained the models on 8 P100 Nvidia GPUs by splitting the batch size into 8 sub-batches, sending them to each GPU and summing the resulting gradients. The total batch size used was 512 and a sequence length of 100 was chosen. Gradients were clipped to a maximum norm value of 0.1. 
We did not pass the state of the LSTM between sequences during training, however the state is passed during evaluation.", "title": "" }, { "docid": "1a7eed6c41824906f947aecbfb4a4a19", "text": "QoS routing is an important research issue in wireless sensor networks (WSNs), especially for mission-critical monitoring and surveillance systems which requires timely and reliable data delivery. Existing work exploits multipath routing to guarantee both reliability and delay QoS constraints in WSNs. However, the multipath routing approach suffers from a significant energy cost. In this work, we exploit the geographic opportunistic routing (GOR) for QoS provisioning with both end-to-end reliability and delay constraints in WSNs. Existing GOR protocols are not efficient for QoS provisioning in WSNs, in terms of the energy efficiency and computation delay at each hop. To improve the efficiency of QoS routing in WSNs, we define the problem of efficient GOR for multiconstrained QoS provisioning in WSNs, which can be formulated as a multiobjective multiconstraint optimization problem. Based on the analysis and observations of different routing metrics in GOR, we then propose an Efficient QoS-aware GOR (EQGOR) protocol for QoS provisioning in WSNs. EQGOR selects and prioritizes the forwarding candidate set in an efficient manner, which is suitable for WSNs in respect of energy efficiency, latency, and time complexity. We comprehensively evaluate EQGOR by comparing it with the multipath routing approach and other baseline protocols through ns-2 simulation and evaluate its time complexity through measurement on the MicaZ node. Evaluation results demonstrate the effectiveness of the GOR approach for QoS provisioning in WSNs. EQGOR significantly improves both the end-to-end energy efficiency and latency, and it is characterized by the low time complexity.", "title": "" }, { "docid": "ff60030158440bf358170234c39db77c", "text": "BACKGROUND: Light trappedat thenanoscale, deep below the optical wavelength, exhibits an increase in the associated electric field strength, which results in enhanced light-matter interaction. This leads to strong nonlinearities, large photonic forces, and enhanced emission and absorption probabilities. A practical approach toward nanoscale light trapping andmanipulation is offered by interfaces separating media with permittivities of opposite signs. Such interfaces sustainhybrid light-mattermodes involving collective oscillations of polarization charges in matter, hence the termpolaritons. Surface plasmon polaritons, supported by electrons in metals, constitute amost-studiedprominent example. Yet there are many other varieties of polaritons, including those formed by atomic vibrations in polar insulators, excitons in semiconductors, Cooper pairs in superconductors, and spin resonances in (anti) ferromagnets. Together, they span a broad region of the electromagnetic spectrum, ranging frommicrowave to ultraviolet wavelengths. We discuss polaritons in van der Waals (vdW) materials: layeredsystems inwhich individualatomic planes are bonded by weak vdW attraction (see the figure). This class of quantum materials includes graphene and other two-dimensional crystals. 
In artificial structures assembled from dissimilar vdW atomic layers, polaritons associated with different constituents can interact to produce unique optical effects by design.", "title": "" }, { "docid": "8b842fa60cd2b8ec22266f121b8599c6", "text": "The reference human genome sequence set the stage for studies of genetic variation and its association with human disease, but epigenomic studies lack a similar reference. To address this need, the NIH Roadmap Epigenomics Consortium generated the largest collection so far of human epigenomes for primary cells and tissues. Here we describe the integrative analysis of 111 reference human epigenomes generated as part of the programme, profiled for histone modification patterns, DNA accessibility, DNA methylation and RNA expression. We establish global maps of regulatory elements, define regulatory modules of coordinated activity, and their likely activators and repressors. We show that disease- and trait-associated genetic variants are enriched in tissue-specific epigenomic marks, revealing biologically relevant cell types for diverse human traits, and providing a resource for interpreting the molecular basis of human disease. Our results demonstrate the central role of epigenomic information for understanding gene regulation, cellular differentiation and human disease.", "title": "" }, { "docid": "1d563740864e5158132cd6c83efd5f4c", "text": "The Schola Medica Salernitana was an early medieval medical school in the south Italian city of Salerno and the most important native source of medical knowledge in Europe at the time. The school achieved its splendour between the 10th and 13th centuries, during the final decades of Longobard kingdom. In the school, women were involved as both teachers and students for medical learning. Among these women, there was Trotula de Ruggiero (11th century), a teacher whose main interest was to alleviate suffering of women. She was the author of many medical works, the most notable being De Passionibus Mulierum Curandarum (about women's diseases), also known as Trotula Major. Another important work she wrote was De Ornatu Mulierum (about women's cosmetics), also known as Trotula Minor, in which she teaches women to conserve and improve their beauty and treat skin diseases through a series of precepts, advices and natural remedies. She gives lessons about make-up, suggests the way to be unwrinkled, remove puffiness from face and eyes, remove unwanted hair from the body, lighten the skin, hide blemishes and freckles, wash teeth and take away bad breath, dying hair, wax, treat lips and gums chaps.", "title": "" }, { "docid": "da416ce58897f6f86d9cd7b0de422508", "text": "In linear representation based face recognition (FR), it is expected that a discriminative dictionary can be learned from the training samples so that the query sample can be better represented for classification. On the other hand, dimensionality reduction is also an important issue for FR. It can not only reduce significantly the storage space of face images, but also enhance the discrimination of face feature. Existing methods mostly perform dimensionality reduction and dictionary learning separately, which may not fully exploit the discriminative information in the training samples. In this paper, we propose to learn jointly the projection matrix for dimensionality reduction and the discriminative dictionary for face representation. 
The joint learning makes the learned projection and dictionary fit better with each other so that more effective face classification can be obtained. The proposed algorithm is evaluated on benchmark face databases in comparison with existing linear representation based methods, and the results show that the joint learning improves the FR rate, particularly when the number of training samples per class is small.", "title": "" }, { "docid": "2b8ca8be8d5e468d4cd285ecc726eceb", "text": "These days, large-scale graph processing is becoming more and more important. Pregel, inspired by Bulk Synchronous Parallel, is one of the most widely used systems for processing large-scale graph problems. In Pregel, each vertex executes a function and waits for a superstep to communicate its data to other vertices. The superstep is a very time-consuming operation used by Pregel to synchronize distributed computations in a cluster of computers. However, it may become a bottleneck when the number of communications increases in a graph with millions of vertices. The superstep works like a barrier in Pregel, which increases the side effects of the skew problem in a distributed computing environment. ExPregel is a Pregel-like model that is designed to reduce the number of communication messages between two vertices residing on two different computational nodes. We have proven that ExPregel reduces the number of exchanged messages as well as the number of supersteps for all graph topologies. Enhancing parallelism in our new computational model is another important feature, one that increases the speed of graph analysis programs manyfold. More interestingly, ExPregel uses the same model of programming as Pregel. Our experiments on large-scale real-world graphs show that ExPregel can reduce network traffic as well as the number of supersteps by 45% to 96%. Runtime speed-up in the proposed model varies from 1.2× to 30×. Copyright © 2015 John Wiley & Sons, Ltd.", "title": "" } ]
scidocsrr
b4829e7840180b1dd7ab4c63ee8f3a70
Secure Deduplication of Encrypted Data without Additional Independent Servers
[ { "docid": "b01bd9cf54485a7ee646ba5c338c6543", "text": "Nowadays, more and more corporate and private users outsource their data to cloud storage providers. At the same time, recent data breach incidents make end-to-end encryption an increasingly prominent requirement. Unfortunately, semantically secure encryption schemes render various cost-effective storage optimization techniques, such as data deduplication, completely ineffective. In this paper, we present a novel encryption scheme that guarantees semantic security for unpopular data and provides weaker security and better storage and bandwidth benefits for popular data. This way, data deduplication can be effective for popular data, whilst semantically secure encryption protects unpopular content, preventing its deduplication. Transitions from one mode to the other take place seamlessly at the storage server side if and only if a file becomes popular. We show that our scheme is secure under the Symmetric External Decisional Diffie-Hellman Assumption in the random oracle model, and evaluate its performance with benchmarks and simulations.", "title": "" } ]
[ { "docid": "689d09822d1ac86a173cde6a6018a8fe", "text": "Novelty detection in time series is an important problem with application in a number of different domains such as machine failure detection and fraud detection in financial systems. One of the methods for detecting novelties in time series consists of building a forecasting model that is later used to predict future values. Novelties are assumed to take place if the difference between predicted and observed values is above a certain threshold. The problem with this method concerns the definition of a suitable value for the threshold. This paper proposes a method based on forecasting with robust confidence intervals for defining the thresholds for detecting novelties. Experiments with six real-world time series are reported and the results show that the method is able to correctly define the thresholds for novelty detection. r 2006 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "4b988535edefeb3ff7df89bcb900dd1c", "text": "Context: As a result of automated software testing, large amounts of software test code (script) are usually developed by software teams. Automated test scripts provide many benefits, such as repeatable, predictable, and efficient test executions. However, just like any software development activity, development of test scripts is tedious and error prone. We refer, in this study, to all activities that should be conducted during the entire lifecycle of test-code as Software Test-Code Engineering (STCE). Objective: As the STCE research area has matured and the number of related studies has increased, it is important to systematically categorize the current state-of-the-art and to provide an overview of the trends in this field. Such summarized and categorized results provide many benefits to the broader community. For example, they are valuable resources for new researchers (e.g., PhD students) aiming to conduct additional secondary studies. Method: In this work, we systematically classify the body of knowledge related to STCE through a systematic mapping (SM) study. As part of this study, we pose a set of research questions, define selection and exclusion criteria, and systematically develop and refine a systematic map. Results: Our study pool includes a set of 60 studies published in the area of STCE between 1999 and 2012. Our mapping data is available through an online publicly-accessible repository. We derive the trends for various aspects of STCE. Among our results are the following: (1) There is an acceptable mix of papers with respect to different contribution facets in the field of STCE and the top two leading facets are tool (68%) and method (65%). The studies that presented new processes, however, had a low rate (3%), which denotes the need for more process-related studies in this area. (2) Results of investigation about research facet of studies and comparing our result to other SM studies shows that, similar to other fields in software engineering, STCE is moving towards more rigorous validation approaches. (3) A good mixture of STCE activities has been presented in the primary studies. Among them, the two leading activities are quality assessment and co-maintenance of test-code with production code. The highest growth rate for co-maintenance activities in recent years shows the importance and challenges involved in this activity. (4) There are two main categories of quality assessment activity: detection of test smells and oracle assertion adequacy. 
(5) JUnit is the leading test framework which has been used in about 50% of the studies. (6) There is a good mixture of SUT types used in the studies: academic experimental systems (or simple code examples), real open-source and commercial systems. (7) Among 41 tools that are proposed for STCE, less than half of the tools (45%) were available for download. It is good to have this percentile of tools to be available, although not perfect, since the availability of tools can lead to higher impact on research community and industry. Conclusion: We discuss the emerging trends in STCE, and discuss the implications for researchers and practitioners in this area. The results of our systematic mapping can help researchers to obtain an overview of existing STCE approaches and spot areas in the field that require more attention from the", "title": "" }, { "docid": "073e3296fc2976f0db2f18a06b0cb816", "text": "Nowadays spoofing detection is one of the priority research areas in the field of automatic speaker verification. The success of Automatic Speaker Verification Spoofing and Countermeasures (ASVspoof) Challenge 2015 confirmed the impressive perspective in detection of unforeseen spoofing trials based on speech synthesis and voice conversion techniques. However, there is a small number of researches addressed to replay spoofing attacks which are more likely to be used by non-professional impersonators. This paper describes the Speech Technology Center (STC) anti-spoofing system submitted for ASVspoof 2017 which is focused on replay attacks detection. Here we investigate the efficiency of a deep learning approach for solution of the mentioned-above task. Experimental results obtained on the Challenge corpora demonstrate that the selected approach outperforms current state-of-the-art baseline systems in terms of spoofing detection quality. Our primary system produced an EER of 6.73% on the evaluation part of the corpora which is 72% relative improvement over the ASVspoof 2017 baseline system.", "title": "" }, { "docid": "9c62202750cf7e85ba5719fd219d7b0f", "text": "Psychosocial interventions often aim to alleviate negative emotional states. However, there is growing interest in cultivating positive emotional states and qualities. One particular target is compassion, but it is not yet clear whether compassion can be trained. A community sample of 100 adults were randomly assigned to a 9-week compassion cultivation training (CCT) program (n = 60) or a waitlist control condition (n = 40). Before and after this 9-week period, participants completed self-report inventories that measured compassion for others, receiving compassion from others, and selfcompassion. Compared to the waitlist control condition, CCT resulted in significant improvements in all three domains of compassion—compassion for others, receiving compassion from others, and self-compassion. The amount of formal meditation practiced during CCT was associated with increased compassion for others. Specific domains of compassion can be intentionally cultivated in a training program. These findings may have important implications for mental health and well-being.", "title": "" }, { "docid": "63830f82c3acd0e3ff3a12eeed8801e0", "text": "We have developed a novel approach using source analysis for classifying motor imagery tasks. Two-equivalent-dipoles analysis was proposed to aid classification of motor imagery tasks for brain-computer interface (BCI) applications. 
By solving the electroencephalography (EEG) inverse problem of single trial data, it is found that the source analysis approach can aid classification of motor imagination of left- or right-hand movement without training. In four human subjects, an averaged accuracy of classification of 80% was achieved. The present study suggests the merits and feasibility of applying EEG inverse solutions to BCI applications from noninvasive EEG recordings.", "title": "" }, { "docid": "44448b4443174155f6eed2b2e15b2592", "text": "OBJECTIVE\nTo assess agreement between ultrasonography (transcutaneous and transrectal) and standing radiography in horses with fractures in the pelvic region and disorders of the coxofemoral joint.\n\n\nSTUDY DESIGN\nCase series.\n\n\nANIMALS\nWarmblood horses (n=23) and 2 ponies.\n\n\nMETHODS\nMedical records (1999-2008) of equids with pelvic or coxofemoral disorders that had pelvic radiography and ultrasonography were retrieved and results of both techniques compared.\n\n\nRESULTS\nRadiography and ultrasonography each identified equal numbers of fractures of the tuber coxa (n=4), ilial shaft (2), ischium (3), femoral neck (2), and osteoarthritis/osis of the coxofemoral joint (6). Fractures of the ilial wing (4) were only identified by ultrasonography not by standing radiography. Of 9 acetabular fractures, 3 were identified on radiographs only, 5 were identified with both modalities. One pubic fracture was identified using ultrasonography and radiography. One acetabular and 1 pubic fracture were only diagnosed on necropsy.\n\n\nCONCLUSIONS\nWe found reasonable agreement (73%; 24/33) between ultrasonography and standing radiography for diagnosis of pelvic-femoral disorders. Ultrasonography was more useful for ilial wing fractures and radiography for acetabular fractures.\n\n\nCLINICAL RELEVANCE\nUltrasonography is a rapid, safe imaging technique for detecting disorders of the pelvic region with a high diagnostic yield and is a preferred initial approach in horses with severe hindlimb lameness.", "title": "" }, { "docid": "f2d6b0e6e0b6b8046b54779f0d922843", "text": "As mass production has migrated to developing countries, European and US companies are forced to rapidly switch towards low volume production of more innovative, customised and sustainable products with high added value. To compete in this turbulent environment, manufacturers have sought new fabrication techniques to provide the necessary tools to support the need for increased flexibility and enable economic low volume production. One such emerging technique is Additive Manufacturing (AM). AM is a method of manufacture which involves the joining of materials, usually layer-upon-layer, to create objects from 3D model data. The benefits of this methodology include new design freedom, removal of tooling requirements, and economic low volumes. AM consists various technologies to process versatile materials, and for many years its dominant application has been the manufacture of prototypes, or Rapid Prototyping. However, the recent growth in applications for direct part manufacture, or Rapid Manufacturing, has resulted in much research effort focusing on development of new processes and materials. This study focuses on the implementation process of AM and is motivated by the lack of socio-technical studies in this area. 
It addresses the need for existing and potential future AM project managers to have an implementation framework to guide their efforts in adopting this new and potentially disruptive technology class to produce high value products and generate new business opportunities. Based on a review of prior works and through qualitative case study analysis, we construct and test a normative structural model of implementation factors related to AM technology, supply chain, organisation, operations and strategy.", "title": "" }, { "docid": "2c0a4b5c819a8fcfd5a9ab92f59c311e", "text": "Line starting capability of Synchronous Reluctance Motors (SynRM) is a crucial challenge in their design that if solved, could lead to a valuable category of motors. In this paper, the so-called crawling effect as a potential problem in Line-Start Synchronous Reluctance Motors (LS-SynRM) is analyzed. Two interfering scenarios on LS-SynRM start-up are introduced and one of them is treated in detail by constructing the asynchronous model of the motor. In the third section, a definition of this phenomenon is given utilizing a sample cage configuration. The LS-SynRM model and characteristics are compared with that of a reference induction motor (IM) in all sections of this work to convey a better perception of successful and unsuccessful synchronization consequences to the reader. Several important post effects of crawling on motor performance are discussed in the rest of the paper to evaluate how it would influence the motor operation. All simulations have been performed using Finite Element Analysis (FEA).", "title": "" }, { "docid": "2dc69fff31223cd46a0fed60264b2de1", "text": "The authors offer a framework for conceptualizing collective identity that aims to clarify and make distinctions among dimensions of identification that have not always been clearly articulated. Elements of collective identification included in this framework are self-categorization, evaluation, importance, attachment and sense of interdependence, social embeddedness, behavioral involvement, and content and meaning. For each element, the authors take note of different labels that have been used to identify what appear to be conceptually equivalent constructs, provide examples of studies that illustrate the concept, and suggest measurement approaches. Further, they discuss the potential links between elements and outcomes and how context moderates these relationships. The authors illustrate the utility of the multidimensional organizing framework by analyzing the different configuration of elements in 4 major theories of identification.", "title": "" }, { "docid": "dbd8c2e36deb9c17818b2031502857ba", "text": "This paper presents the mechanical design for a new five fingered, twenty degree-of-freedom dexterous hand patterned after human anatomy and actuated by Shape Memory Alloy artificial muscles. Two experimental prototypes of a finger, one fabricated by traditional means and another fabricated by rapid prototyping techniques, are described and used to evaluate the design. An important aspect of the Rapid Prototype technique used here is that this multi-articulated hand will be fabricated in one step, without requiring assembly, while maintaining its desired mobility. The use of Shape Memory Alloy actuators combined with the rapid fabrication of the non-assembly type hand, reduce considerably its weight and fabrication time. 
Therefore, the focus of this paper is the mechanical design of a dexterous hand that combines Rapid Prototype techniques and smart actuators. The type of robotic hand described in this paper can be utilized for applications requiring low weight, compactness, and dexterity such as prosthetic devices, space and planetary exploration.", "title": "" }, { "docid": "0d83d1dc97d65d9aa4969e016a360451", "text": "This paper proposes and evaluates a novel analytical performance model to study the efficiency and scalability of software-defined infrastructure (SDI) to host adaptive applications. The SDI allows applications to communicate their adaptation requirements at run-time. Adaptation scenarios require computing and networking resources to be provided to applications in a timely manner to facilitate seamless service delivery. Our analytical model yields the response time of realizing adaptations on the SDI and reveals the scalability limitations. We conduct extensive testbed experiments on a cloud environment to verify the accuracy and fidelity of the model. Cloud service providers can leverage the proposed model to perform capacity planning and bottleneck analysis when they accommodate adaptive applications.", "title": "" }, { "docid": "45475cd9bd2e71699590bbdbebd83829", "text": "Very little is known about computer gamers' playing experience. Most social scientific research has treated gaming as an undifferentiated activity associated with various factors outside the gaming context. This article considers computer games as behavior settings worthy of social scientific investigation in their own right and contributes to a better understanding of computer gaming as a complex, context-dependent, goal-directed activity. The results of an exploratory interview-based study of computer gaming within the \"first-person shooter\" (FPS) game genre are reported. FPS gaming is a fast-paced form of goal-directed activity that takes place in complex, dynamic behavioral environments where players must quickly make sense of changes in their immediate situation and respond with appropriate actions. Gamers' perceptions and evaluations of various aspects of the FPS gaming situation are documented, including positive and negative aspects of game interfaces, map environments, weapons, computer-generated game characters (bots), multiplayer gaming on local area networks (LANs) or the internet, and single player gaming. The results provide insights into the structure of gamers' mental models of the FPS genre by identifying salient categories of their FPS gaming experience. It is proposed that aspects of FPS games most salient to gamers were those perceived to be most behaviorally relevant to goal attainment, and that the evaluation of various situational stimuli depended on the extent to which they were perceived either to support or to hinder goal attainment. Implications for the design of FPS games that players experience as challenging, interesting, and fun are discussed.", "title": "" }, { "docid": "736ee2bed70510d77b1f9bb13b3bee68", "text": "Yes, they do. This work investigates a perspective for deep learning: whether different normalization layers in a ConvNet require different normalizers. This is the first step towards understanding this phenomenon. We allow each convolutional layer to be stacked before a switchable normalization (SN) that learns to choose a normalizer from a pool of normalization methods. 
Through systematic experiments in ImageNet, COCO, Cityscapes, and ADE20K, we answer three questions: (a) Is it useful to allow each normalization layer to select its own normalizer? (b) What impacts the choices of normalizers? (c) Do different tasks and datasets prefer different normalizers? Our results suggest that (1) using distinct normalizers improves both learning and generalization of a ConvNet; (2) the choices of normalizers are more related to depth and batch size, but less relevant to parameter initialization, learning rate decay, and solver; (3) different tasks and datasets have different behaviors when learning to select normalizers.", "title": "" }, { "docid": "5bb15e64e7e32f3a0b1b99be8b8ab2bf", "text": "Breast cancer is one of the major causes of death in women when compared to all other cancers. Breast cancer has become the most hazardous types of cancer among women in the world. Early detection of breast cancer is essential in reducing life losses. This paper presents a comparison among the different Data mining classifiers on the database of breast cancer Wisconsin Breast Cancer (WBC), by using classification accuracy. This paper aims to establish an accurate classification model for Breast cancer prediction, in order to make full use of the invaluable information in clinical data, especially which is usually ignored by most of the existing methods when they aim for high prediction accuracies. We have done experiments on WBC data. The dataset is divided into training set with 499 and test set with 200 patients. In this experiment, we compare six classification techniques in Weka software and comparison results show that Support Vector Machine (SVM) has higher prediction accuracy than those methods. Different methods for breast cancer detection are explored and their accuracies are compared. With these results, we infer that the SVM are more suitable in handling the classification problem of breast cancer prediction, and we recommend the use of these approaches in similar classification problems. Keywords—breast cancer; classification; Decision tree, Naïve Bayes, MLP, Logistic Regression SVM, KNN and weka;", "title": "" }, { "docid": "4fa7f7f723c2f2eee4c0e2c294273c74", "text": "Tracking human vital signs of breathing and heart rates during sleep is important as it can help to assess the general physical health of a person and provide useful clues for diagnosing possible diseases. Traditional approaches (e.g., Polysomnography (PSG)) are limited to clinic usage. Recent radio frequency (RF) based approaches require specialized devices or dedicated wireless sensors and are only able to track breathing rate. In this work, we propose to track the vital signs of both breathing rate and heart rate during sleep by using off-the-shelf WiFi without any wearable or dedicated devices. Our system re-uses existing WiFi network and exploits the fine-grained channel information to capture the minute movements caused by breathing and heart beats. Our system thus has the potential to be widely deployed and perform continuous long-term monitoring. The developed algorithm makes use of the channel information in both time and frequency domain to estimate breathing and heart rates, and it works well when either individual or two persons are in bed. 
Our extensive experiments demonstrate that our system can accurately capture vital signs during sleep under realistic settings, and achieve comparable or even better performance comparing to traditional and existing approaches, which is a strong indication of providing non-invasive, continuous fine-grained vital signs monitoring without any additional cost.", "title": "" }, { "docid": "61f77a2ce189b9be9d4b8c7cc392f361", "text": "In this paper, a Wilkinson power divider operating at two arbitrary different frequencies is presented. The structure of this power divider and the formulas used to determine the design parameters have been given. Experimental results show that all the features of a conventional Wilkinson power divider, such as an equal power split, impedance matching at all ports, and a good isolation between the two output ports can be fulfilled at two arbitrary given frequencies simultaneously", "title": "" }, { "docid": "46ff9e58e6a67d46934161aaf5f5c6b8", "text": "Using shared disk architecture for relational cloud DBMSs enhances their performance and throughput and increases the scalability. In such architectures, transactions are not distributed between database instances and data are not migrated, whereas any database instance can read and access any database object. Lock technology control for concurrent transactions ensures their consistency especially in shared disk architecture, using traditional granularity database locks for cloud database, can cause numerous problems. This paper proposes an optimistic concurrency control algorithm that uses soft locks and minimizes the number of accessed database instances for validating a transaction. It creates a lock manager for all database objects and distributes it over database instances until it does not have to validate the transaction, neither with single database instance if it is owned by only a database instance nor with all database instances if it is replicated on all database instances. The proposed algorithm is evaluated against other cloud concurrency control algorithms and the results confirms its effectiveness.", "title": "" }, { "docid": "61e75fb597438712098c2b6d4b948558", "text": "Impact of occupational stress on employee performance has been recognized as an important area of concern for organizations. Negative stress affects the physical and mental health of the employees that in turn affects their performance on job. Research into the relationship between stress and job performance has been neglected in the occupational stress literature (Jex, 1998). It is therefore significant to understand different Occupational Stress Inducers (OSI) on one hand and their impact on different aspects of job performance on the other. This article reviews the available literature to understand the phenomenon so as to develop appropriate stress management strategies to not only save the employees from variety of health problems but to improve their performance and the performance of the organization. 35 Occupational Stress Inducers (OSI) were identified through a comprehensive review of articles and reports published in the literature of management and allied disciplines between 1990 and 2014. A conceptual model is proposed towards the end to study the impact of stress on employee job performance. The possible data analysis techniques are also suggested providing direction for future research.", "title": "" } ]
scidocsrr
fcc8d5418ae5dfe7417e55c308a1fa6f
Measuring Meaningful Work: The Work and Meaning Inventory (WAMI)
[ { "docid": "63e123277918d08ed6ee497dd6e7e588", "text": "This study provided a comprehensive examination of the full range of transformational, transactional, and laissez-faire leadership. Results (based on 626 correlations from 87 sources) revealed an overall validity of .44 for transformational leadership, and this validity generalized over longitudinal and multisource designs. Contingent reward (.39) and laissez-faire (-.37) leadership had the next highest overall relations; management by exception (active and passive) was inconsistently related to the criteria. Surprisingly, there were several criteria for which contingent reward leadership had stronger relations than did transformational leadership. Furthermore, transformational leadership was strongly correlated with contingent reward (.80) and laissez-faire (-.65) leadership. Transformational and contingent reward leadership generally predicted criteria controlling for the other leadership dimensions, although transformational leadership failed to predict leader job performance.", "title": "" } ]
[ { "docid": "f63374051d4826ad55549d22260d0835", "text": "Interest has been growing in opportunities to build and deploy statistical models that can infer a computer user's current interruptability from computer activity and relevant contextual information. We describe a system that intermittently asks users to assess their perceived interruptability during a training phase and that builds decision-theoretic models with the ability to predict the cost of interrupting the user. The models are used at run-time to compute the expected cost of interruptions, providing a mediator for incoming notifications, based on a consideration of a user's current and recent history of computer activity, meeting status, location, time of day, and whether a conversation is detected.", "title": "" }, { "docid": "3a9d3a285c6828510e3c57d13b8648db", "text": "Predicting system failures can be of great benefit to managers that get a better command over system performance. Data that systems generate in the form of logs is a valuable source of information to predict system reliability. As such, there is an increasing demand of tools to mine logs and provide accurate predictions. However, interpreting information in logs poses some challenges. This study discusses how to effectively mining sequences of logs and provide correct predictions. The approach integrates different machine learning techniques to control for data brittleness, provide accuracy of model selection and validation, and increase robustness of classification results. We apply the proposed approach to log sequences of 25 different applications of a software system for telemetry and performance of cars. On this system, we discuss the ability of three well-known support vector machines - multilayer perceptron, radial basis function and linear kernels - to fit and predict defective log sequences. Our results show that a good analysis strategy provides stable, accurate predictions. Such strategy must at least require high fitting ability of models used for prediction. We demonstrate that such models give excellent predictions both on individual applications - e.g., 1 % false positive rate, 94 % true positive rate, and 95 % precision - and across system applications - on average, 9 % false positive rate, 78 % true positive rate, and 95 % precision. We also show that these results are similarly achieved for different degree of sequence defectiveness. To show how good are our results, we compare them with recent studies in system log analysis. We finally provide some recommendations that we draw reflecting on our study.", "title": "" }, { "docid": "243502b2b8ed80764a2f37cabd968300", "text": "We describe the design, development, and API of ODIN (Open Domain INformer), a domainindependent, rule-based event extraction (EE) framework. The proposed EE approach is: simple (most events are captured with simple lexico-syntactic patterns), powerful (the language can capture complex constructs, such as events taking other events as arguments, and regular expressions over syntactic graphs), robust (to recover from syntactic parsing errors, syntactic patterns can be freely mixed with surface, token-based patterns), and fast (the runtime environment processes 110 sentences/second in a real-world domain with a grammar of over 200 rules). We used this framework to develop a grammar for the biochemical domain, which approached human performance. 
Our EE framework is accompanied by a web-based user interface for the rapid development of event grammars and visualization of matches. The ODIN framework and the domain-specific grammars are available as open-source code.", "title": "" }, { "docid": "e95dbb00015ee42b650d0c5088675293", "text": "To study the Lombard reflex, more realistic databases representing real world conditions need to be recorded and analyzed. In this paper we 1) propose a procedure to record Lombard data which provides a good approximation of realistic conditions and 2) present a comparison between two sets of experiments where subjects are in communication with a device while listening to noise through open-ear headphones and where subjects are reading a list. By studying acoustic correlates of the Lombard reflex and performing off-line speakerindependent recognition experiments it is shown that the communication factor affects the Lombard reflex . We also show evidence that several types of noise differing mainly by their spectral tilt induce different acoustic changes . This result reinforces the notion that it is difficult to separate the speaker from the environment stressor (in this case the noise) when studying the Lombard reflex.", "title": "" }, { "docid": "dafbf7e96bb0d05abfa16b3917322bb9", "text": "Human adenoviruses (HAdV) are responsible for a wide spectrum of diseases. The neutralization epsilon determinant (loops 1 and 2) and the hemagglutination gamma determinant are relevant for the taxonomy of HAdV. Precise type identification of HAdV prototypes is crucial for detection of infection chains and epidemiology. epsilon and gamma determinant sequences of all 51 HAdV were generated to propose molecular classification criteria. Phylogenetic analysis of epsilon determinant sequences demonstrated sufficient genetic divergence for molecular classification, with the exception of HAdV-15 and HAdV-29, which also cannot be differentiated by classical cross-neutralization. Precise sequence divergence criteria for typing (<2.5% from loop 2 prototype sequence and <2.4% from loop 1 sequence) were deduced from phylogenetic analysis. These criteria may also facilitate identification of new HAdV prototypes. Fiber knob (gamma determinant) phylogeny indicated a two-step model of species evolution and multiple intraspecies recombination events in the origin of HAdV prototypes. HAdV-29 was identified as a recombination variant of HAdV-15 (epsilon determinant) and a speculative, not-yet-isolated HAdV prototype (gamma determinant). Subanalysis of molecular evolution in hypervariable regions 1 to 6 of the epsilon determinant indicated different selective pressures in subclusters of species HAdV-D. Additionally, gamma determinant phylogenetic analysis demonstrated that HAdV-8 did not cluster with -19 and -37 in spite of their having the same tissue tropism. The phylogeny of HAdV-E4 suggested origination by interspecies recombination between HAdV-B (hexon) and HAdV-C (fiber), as in simian adenovirus 25, indicating additional zoonotic transfer. In conclusion, molecular classification by systematic sequence analysis of immunogenic determinants yields new insights into HAdV phylogeny and evolution.", "title": "" }, { "docid": "c64cc935b0a898f66d8fd34bbbbb6832", "text": "Zinc oxide (ZnO) appears as a promising preservative for pharmaceutical or cosmetic formulations. The other ingredients of the formulations may have specific interactions with ZnO that alter its antimicrobial properties. 
The influence of common formulation excipients on the antimicrobial efficacy of ZnO has been investigated in simple model systems and in typical topical products containing a complex formulation. A wide variety of formulation excipients have been investigated for their interactions with ZnO: antioxidants, chelating agents, electrolytes, titanium dioxide pigment. The antimicrobial activity of ZnO against Escherichia coli was partially inhibited by NaCl and MgSO4 salts. A synergistic influence of uncoated titanium dioxide has been observed. The interference effects of antioxidants and chelating agents were quite specific. The interactions of these substances with ZnO particles and with the soluble species released by ZnO were discussed so as to reach scientific guidelines for the choice of the ingredients. The preservative efficacy of ZnO was assessed by challenge testing in three different formulations: an oil-in-water emulsion; a water-in-oil emulsion and a dry powder. The addition of ZnO in complex formulations significantly improved the microbiological quality of the products, in spite of the presence of other ingredients that modulate the antimicrobial activity.", "title": "" }, { "docid": "eee687e5c110bbfdd447b7a58444f34e", "text": "We present a \"scale-and-stretch\" warping method that allows resizing images into arbitrary aspect ratios while preserving visually prominent features. The method operates by iteratively computing optimal local scaling factors for each local region and updating a warped image that matches these scaling factors as closely as possible. The amount of deformation of the image content is guided by a significance map that characterizes the visual attractiveness of each pixel; this significance map is computed automatically using a novel combination of gradient and salience-based measures. Our technique allows diverting the distortion due to resizing to image regions with homogeneous content, such that the impact on perceptually important features is minimized. Unlike previous approaches, our method distributes the distortion in all spatial directions, even when the resizing operation is only applied horizontally or vertically, thus fully utilizing the available homogeneous regions to absorb the distortion. We develop an efficient formulation for the nonlinear optimization involved in the warping function computation, allowing interactive image resizing.", "title": "" }, { "docid": "848dd074e4615ea5ecb164c96fac6c63", "text": "A simultaneous analytical method for etizolam and its main metabolites (alpha-hydroxyetizolam and 8-hydroxyetizolam) in whole blood was developed using solid-phase extraction, TMS derivatization and ion trap gas chromatography tandem mass spectrometry (GC-MS/MS). Separation of etizolam, TMS derivatives of alpha-hydroxyetizolam and 8-hydroxyetizolam and fludiazepam as internal standard was performed within about 17 min. The inter-day precision evaluated at the concentration of 50 ng/mL etizolam, alpha-hydroxyetizolam and 8-hydroxyetizolam was evaluated 8.6, 6.4 and 8.0% respectively. Linearity occurred over the range in 5-50 ng/mL. This method is satisfactory for clinical and forensic purposes. This method was applied to two unnatural death cases suspected to involve etizolam. Etizolam and its two metabolites were detected in these cases.", "title": "" }, { "docid": "e9bc802e8ce6a823526084c82aa89c95", "text": "Non-orthogonal multiple access (NOMA) is a promising radio access technique for further cellular enhancements toward 5G. 
Single-user multiple-input multiple-output (SU-MIMO) is one of the key technologies in LTE /LTE-Advanced systems. Thus, it is of great interest to study how to efficiently and effectively combine NOMA and SU-MIMO techniques together for further system performance improvement. This paper investigates the combination of NOMA with open-loop and closed-loop SU-MIMO. The key issues involved in the combination are presented and discussed, including scheduling algorithm, successive interference canceller (SIC) order determination, transmission power assignment and feedback design. The performances of NOMA with SU-MIMO are investigated by system-level simulations with very practical assumptions. Simulation results show that compared to orthogonal multiple access system, NOMA can achieve large performance gains both open-loop and closed-loop SU-MIMO, which are about 23% for cell average throughput and 33% for cell-edge user throughput.", "title": "" }, { "docid": "1351b9d778da2821362a1b4caa35e7e4", "text": "Though designing a data warehouse requires techniques completely different from those adopted for operational systems, no significant effort has been made so far to develop a complete and consistent design methodology for data warehouses. In this paper we outline a general methodological framework for data warehouse design, based on our Dimensional Fact Model (DFM). After analyzing the existing information system and collecting the user requirements, conceptual design is carried out semi-automatically starting from the operational database scheme. A workload is then characterized in terms of data volumes and expected queries, to be used as the input of the logical and physical design phases whose output is the final scheme for the data warehouse.", "title": "" }, { "docid": "3727ee51255d85a9260e1e92cc5b7ca7", "text": "Electing a leader is a classical problem in distributed computing system. Synchronization between processes often requires one process acting as a coordinator. If an elected leader node fails, the other nodes of the system need to elect another leader without much wasting of time. The bully algorithm is a classical approach for electing a leader in a synchronous distributed computing system, which is used to determine the process with highest priority number as the coordinator. In this paper, we have discussed the limitations of Bully algorithm and proposed a simple and efficient method for the Bully algorithm which reduces the number of messages during the election. Our analytical simulation shows that, our proposed algorithm is more efficient than the Bully algorithm with fewer messages passing and fewer stages.", "title": "" }, { "docid": "2ccb76e0cda888491ebb37bb316c5490", "text": "For any Software Process Improvement (SPI) initiative to succeed human factors, in particular, motivation and commitment of the people involved should be kept in mind. In fact, Organizational Change Management (OCM) has been identified as an essential knowledge area for any SPI initiative. However, enough attention is still not given to the human factors and therefore, the high degree of failures in the SPI initiatives is directly linked to a lack of commitment and motivation. Gamification discipline allows us to define mechanisms that drive people’s motivation and commitment towards the development of tasks in order to encourage and accelerate the acceptance of an SPI initiative. 
In this paper, a gamification framework oriented to both organization needs and software practitioners groups involved in an SPI initiative is defined. This framework tries to take advantage of the transverse nature of gamification in order to apply its Critical Success Factors (CSF) to the organizational change management of an SPI. Gamification framework guidelines have been validated by some qualitative methods. Results show some limitations that threaten the reliability of this validation. These require further empirical validation of a software organization.", "title": "" }, { "docid": "e7e9d6054a61a1f4a3ab7387be28538a", "text": "Next generation deep neural networks for classification hosted on embedded platforms will rely on fast, efficient, and accurate learning algorithms. Initialization of weights in learning networks has a great impact on the classification accuracy. In this paper we focus on deriving good initial weights by modeling the error function of a deep neural network as a high-dimensional landscape. We observe that due to the inherent complexity in its algebraic structure, such an error function may conform to general results of the statistics of large systems. To this end we apply some results from Random Matrix Theory to analyse these functions. We model the error function in terms of a Hamiltonian in N-dimensions and derive some theoretical results about its general behavior. These results are further used to make better initial guesses of weights for the learning algorithm.", "title": "" }, { "docid": "f996b9911692cc835e55e561c3a501db", "text": "This study proposes a clustering-based Wi-Fi fingerprinting localization algorithm. The proposed algorithm first presents a novel support vector machine based clustering approach, namely SVM-C, which uses the margin between two canonical hyperplanes for classification instead of using the Euclidean distance between two centroids of reference locations. After creating the clusters of fingerprints by SVM-C, our positioning system embeds the classification mechanism into a positioning task and compensates for the large database searching problem. The proposed algorithm assigns the matched cluster surrounding the test sample and locates the user based on the corresponding cluster's fingerprints to reduce the computational complexity and remove estimation outliers. Experimental results from realistic Wi-Fi test-beds demonstrated that our approach apparently improves the positioning accuracy. As compared to three existing clustering-based methods, K-means, affinity propagation, and support vector clustering, the proposed algorithm reduces the mean localization errors by 25.34%, 25.21%, and 26.91%, respectively.", "title": "" }, { "docid": "515c003d4e636cd85d9920e765cb73d5", "text": "Islanding phenomenon is a problem for Distributed Generator (DG) based networks leading to troubles in voltage and frequency control and other power quality issues. This paper describes the development of a method based on Discrete Wavelet Transform (DWT), in order to detect islanding for DGs. This method detects the islanding state only by evaluating the terminal current of DGs. Hence, different types of current signal in the case of fault occurrence, load and capacitor switching and motor starting have been investigated. 
In order to increase the accuracy of the wavelet analysis, the proposed algorithm is executed for each type of current signals using different types of mother wavelets, decomposition levels, length of data window and moving size of window to decide on the best assumption of each parameter. The results of the study show that the proposed method can detect the islanding state within a time less than one third of a cycle with a good accuracy.", "title": "" }, { "docid": "a451a351d50c3441d4ca8a964bf7312e", "text": "With the growing complexity and scale of high performance computing (HPC) systems, application performance variation has become a significant challenge in efficient and resilient system management. Application performance variation can be caused by resource contention as well as softwareand firmware-related problems, and can lead to premature job termination, reduced performance, and wasted compute platform resources. To effectively alleviate this problem, system administrators must detect and identify the anomalies that are responsible for performance variation and take preventive actions. However, diagnosing anomalies is often a difficult task given the vast amount of noisy and high-dimensional data being collected via a variety of system monitoring infrastructures. In this paper, we present a novel framework that uses machine learning to automatically diagnose previously encountered performance anomalies in HPC systems. Our framework leverages resource usage and performance counter data collected during application runs. We first convert the collected time series data into statistical features that retain application characteristics to significantly reduce the computational overhead of our technique. We then use machine learning algorithms to learn anomaly characteristics from this historical data and to identify the types of anomalies observed while running applications. We evaluate our framework both on an HPC cluster and on a public cloud, and demonstrate that our approach outperforms current state-of-the-art techniques in detecting anomalies, reaching an F-score over 0.97.", "title": "" }, { "docid": "381c02fb1ce523ddbdfe3acdde20abf1", "text": "Domain-specific accelerators (DSAs), which sacrifice programmability for efficiency, are a reaction to the waning benefits of device scaling. This article demonstrates that there are commonalities between DSAs that can be exploited with programmable mechanisms. The goals are to create a programmable architecture that can match the benefits of a DSA and to create a platform for future accelerator investigations.", "title": "" }, { "docid": "60fb532b3d22b5f598a0aebabc616de4", "text": "Introduction Vision is the primary sensory modality for humans—and most other mammals—by which they perceive the world. In humans, vision-related areas occupy about 30% of the neocortex. Light rays are projected upon the retina, and the brain tries to make sense of the world by means of interpreting the visual input pattern. The sensitivity and specificity with which the brain solves this computationally complex problem cannot yet be replicated on a computer. The most imposing of these problems is that of invariant visual pattern recognition. Recently it has been said that the prediction of future sensory input from salient features of current input is the keystone of intelligence. The neocortex is the structure in the brain which is assumed to be responsible for the evolution of intelligence. 
Current sensory input patterns activate stored traces of previous inputs which then generate top-down expectations, which are verified against the bottom-up input signals. If the verification succeeds, the predicted pattern is recognised. This theory explains how humans, and mammals in general, can recognise images despite changes in location, size and lighting conditions, and in the presence of deformations and large amounts of noise. Parts of this theory, known as the memory-prediction theory (MPT), are modelled in the Hierarchical Temporal Memory or HTM technology developed by a company called Numenta; the model is an attempt to replicate the structural and algorithmic properties of the neocortex. Spatial and temporal relations between features of the sensory signals are formed in an hierarchical memory architecture during a learning process. When a new pattern arrives, the recognition process can be viewed as choosing the stored representation that best predicts the pattern. Hierarchical Temporal Memory has been successfully applied to the recognition of relatively simple images, showing invariance across several transformations and robustness with respect to noisy patterns. We have applied the concept of HTM, as implemented by Numenta, to land-use recognition, by building and testing a system to learn to recognise five different types of land use. Overview of the HTM learning algorithm Hierarchical Temporal Memory can be considered a form of a Bayesian network, where the network consists of a collection of nodes arranged in a tree-shaped hierarchy. Each node in the hierarchy self-discovers a set of causes in its input, through a process of finding common spatial patterns and then detecting common temporal patterns. Unlike many Bayesian networks, HTMs are self-training, have a well-defined parent/child relationship between each node, inherently handle time-varying data and afford mechanisms for covert attention. Sensory data are presented at the bottom of the hierarchy. To train an HTM, it is necessary to present continuous, time-varying, sensory inputs while the causes underlying the same sensory data persist in the environment. In other words, you either move the senses of the HTM through the world, or the objects in the world move relative to the HTM’s senses. Time is the fundamental component of an HTM, and can be thought of as a learning supervisor. Hierarchical Temporal Memory networks are made of nodes; each node receives as input a temporal sequence of patterns. The goal of each node is to group input patterns that are likely to have the same cause, thereby forming invariant representations of extrinsic causes. An HTM node uses two grouping mechanisms to form invariants (Fig. 1). The first mechanism is called spatial pooling, in which raw data are received by the sensor; spatial poolers of higher nodes receive the outputs from their child nodes. The input of the spatial pooler in higher layers is the fixed-order concatenation of the output of its children. This input is represented by row vectors, and the role of the spatial pooler is to build a matrix (the coincidence matrix) from input vectors that occur frequently. There are multiple spatial pooler algorithms, e.g. Gaussian and Product. The Gaussian spatial pooler algorithm is used for nodes at the input layer, whereas the nodes higher up the hierarchy use the Product spatial pooler. The Gaussian spatial pooler algorithm compares the raw input vectors with the existing coincidences in the coincidence matrix. 
If the Euclidean distance between an input vector and an existing coincidence is small enough, the input is considered to be the same coincidence, and the count for that coincidence is incremented and stored in memory. 370 South African Journal of Science 105, September/October 2009 Research Articles", "title": "" }, { "docid": "0c12178e7c7d5c66343bb5a152b42fca", "text": "This study was a randomized controlled trial to investigate the effect of treating women with stress or mixed urinary incontinence (SUI or MUI) by diaphragmatic, deep abdominal and pelvic floor muscle (PFM) retraining. Seventy women were randomly allocated to the training (n = 35) or control group (n = 35). Women in the training group received 8 individual clinical visits and followed a specific exercise program. Women in the control group performed self-monitored PFM exercises at home. The primary outcome measure was self-reported improvement. Secondary outcome measures were 20-min pad test, 3-day voiding diary, maximal vaginal squeeze pressure, holding time and quality of life. After a 4-month intervention period, more participants in the training group reported that they were cured or improved (p < 0.01). The cure/improved rate was above 90%. Both amount of leakage and number of leaks were significantly lower in the training group (p < 0.05) but not in the control group. More aspects of quality of life improved significantly in the training group than in the control group. Maximal vaginal squeeze pressure, however, decreased slightly in both groups. Coordinated retraining diaphragmatic, deep abdominal and PFM function could improve symptoms and quality of life. It may be an alternative management for women with SUI or MUI.", "title": "" }, { "docid": "a52fce0b7419d745a85a2bba27b34378", "text": "Playing action video games enhances several different aspects of visual processing; however, the mechanisms underlying this improvement remain unclear. Here we show that playing action video games can alter fundamental characteristics of the visual system, such as the spatial resolution of visual processing across the visual field. To determine the spatial resolution of visual processing, we measured the smallest distance a distractor could be from a target without compromising target identification. This approach exploits the fact that visual processing is hindered as distractors are brought close to the target, a phenomenon known as crowding. Compared with nonplayers, action-video-game players could tolerate smaller target-distractor distances. Thus, the spatial resolution of visual processing is enhanced in this population. Critically, similar effects were observed in non-video-game players who were trained on an action video game; this result verifies a causative relationship between video-game play and augmented spatial resolution.", "title": "" } ]
scidocsrr
d2ff2a320df180a0665972a0751de2bb
SMS spam detection for Indian messages
[ { "docid": "4de4ab2be955c318ffbd58924af9271f", "text": "The amount of Short Message Service (SMS) spam is increasing. Various solutions to filter SMS spam on mobile phones have been proposed. Most of these use Text Classification techniques that consist of training, filtering, and updating processes. However, they require a computer or a large amount of SMS data in advance to filter SMS spam, especially for the training. This increases hardware maintenance and communication costs. Thus, we propose to filter SMS spamon independentmobile phones using Text Classification techniques. The training, filtering, and updating processes are performed on an independent mobile phone. The mobile phone has storage, memory and CPU limitations compared with a computer. As such, we apply a probabilistic Naïve Bayes classifier using word occurrences for screening because of its simplicity and fast performance. Our experiment on an Android mobile phone shows that it can filter SMS spamwith reasonable accuracy, minimum storage consumption, and acceptable processing time without support from a computer or using a large amount of SMS data for training. Thus, we conclude that filtering SMS spam can be performed on independent mobile phones. We can reduce the number of word attributes by almost 50% without reducing accuracy significantly, using our usability-based approach. Copyright © 2012 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "c3566f9addba75542296f41be2bd604e", "text": "We consider the problem of content-based spam filtering for short text messages that arise in three contexts: mobile (SMS) communication, blog comments, and email summary information such as might be displayed by a low-bandwidth client. Short messages often consist of only a few words, and therefore present a challenge to traditional bag-of-words based spam filters. Using three corpora of short messages and message fields derived from real SMS, blog, and spam messages, we evaluate feature-based and compression-model-based spam filters. We observe that bag-of-words filters can be improved substantially using different features, while compression-model filters perform quite well as-is. We conclude that content filtering for short messages is surprisingly effective.", "title": "" }, { "docid": "1e934aef7999b592971b393e40395994", "text": "Over recent years, as the popularity of mobile phone devices has increased, Short Message Service (SMS) has grown into a multi-billion dollars industry. At the same time, reduction in the cost of messaging services has resulted in growth in unsolicited commercial advertisements (spams) being sent to mobile phones. In parts of Asia, up to 30% of text messages were spam in 2012. Lack of real databases for SMS spams, short length of messages and limited features, and their informal language are the factors that may cause the established email filtering algorithms to underperform in their classification. In this project, a database of real SMS Spams from UCI Machine Learning repository is used, and after preprocessing and feature extraction, different machine learning techniques are applied to the database. Finally, the results are compared and the best algorithm for spam filtering for text messaging is introduced. 
Final simulation results using 10-fold cross validation show that the best classifier in this work reduces the overall error rate of the best model in the original paper citing this dataset by more than half.", "title": "" }, { "docid": "05a5e3849c9fca4d788aa0210d8f7294", "text": "The growth of mobile phone users has led to a dramatic increase in SMS spam messages. Recent reports clearly indicate that the volume of mobile phone spam is dramatically increasing year by year. In practice, fighting such a plague is made difficult by several factors, including the lower rate of SMS that has allowed many users and service providers to ignore the issue, and the limited availability of mobile phone spam-filtering software. Probably, one of the major concerns in academic settings is the scarcity of public SMS spam datasets that are sorely needed for validation and comparison of different classifiers. Moreover, traditional content-based filters may have their performance seriously degraded since SMS messages are fairly short and their text is generally rife with idioms and abbreviations. In this paper, we present details about a new real, public and non-encoded SMS spam collection that is the largest one as far as we know. Moreover, we offer a comprehensive analysis of this dataset in order to ensure that there are no duplicated messages coming from previously existing datasets, since such duplication may ease the task of learning SMS spam classifiers and could compromise the evaluation of methods. Additionally, we compare the performance achieved by several established machine learning techniques. In summary, the results indicate that the procedure followed to build the collection does not lead to near-duplicates and, regarding the classifiers, the Support Vector Machine outperforms the other evaluated techniques and, hence, it can be used as a good baseline for further comparison. Keywords—Mobile phone spam; SMS spam; spam filtering; text categorization; classification.", "title": "" }, { "docid": "7d5289d1b3f8bb9ce21bdc1bea3f1782", "text": "In recent years, we have witnessed a dramatic increase in the volume of spam email. Other related forms of spam are increasingly emerging as a problem of importance, especially spam on Instant Messaging services (the so-called SPIM), and Short Message Service (SMS) or mobile spam. Like email spam, the SMS spam problem can be approached with legal, economic or technical measures. Among the wide range of technical measures, Bayesian filters are playing a key role in stopping email spam. In this paper, we analyze to what extent Bayesian filtering techniques used to block email spam can be applied to the problem of detecting and stopping mobile spam. In particular, we have built two SMS spam test collections of significant size, in English and Spanish. We have tested on them a number of message representation techniques and Machine Learning algorithms, in terms of effectiveness. Our results demonstrate that Bayesian filtering techniques can be effectively transferred from email to SMS spam.", "title": "" } ]
[ { "docid": "18140fdf4629a1c7528dcd6060f427c3", "text": "Central to many text analysis methods is the notion of a concept: a set of semantically related keywords characterizing a specific object, phenomenon, or theme. Advances in word embedding allow building a concept from a small set of seed terms. However, naive application of such techniques may result in false positive errors because of the polysemy of natural language. To mitigate this problem, we present a visual analytics system called ConceptVector that guides a user in building such concepts and then using them to analyze documents. Document-analysis case studies with real-world datasets demonstrate the fine-grained analysis provided by ConceptVector. To support the elaborate modeling of concepts, we introduce a bipolar concept model and support for specifying irrelevant words. We validate the interactive lexicon building interface by a user study and expert reviews. Quantitative evaluation shows that the bipolar lexicon generated with our methods is comparable to human-generated ones.", "title": "" }, { "docid": "351c8772471518f305ab0b327632d59d", "text": "Image classification is one of classical problems of concern in image processing. There are various approaches for solving this problem. The aim of this paper is bring together two areas in which are Artificial Neural Network (ANN) and Support Vector Machine (SVM) applying for image classification. Firstly, we separate the image into many sub-images based on the features of images. Each sub-image is classified into the responsive class by an ANN. Finally, SVM has been compiled all the classify result of ANN. Our proposal classification model has brought together many ANN and one SVM. Let it denote ANN_SVM. ANN_SVM has been applied for Roman numerals recognition application and the precision rate is 86%. The experimental results show the feasibility of our proposal model.", "title": "" }, { "docid": "d6e76bfeeb127addcbe2eb77b1b0ad7e", "text": "The choice of modeling units is critical to automatic speech recognition (ASR) tasks. Conventional ASR systems typically choose context-dependent states (CD-states) or contextdependent phonemes (CD-phonemes) as their modeling units. However, it has been challenged by sequence-to-sequence attention-based models, which integrate an acoustic, pronunciation and language model into a single neural network. On English ASR tasks, previous attempts have already shown that the modeling unit of graphemes can outperform that of phonemes by sequence-to-sequence attention-based model. In this paper, we are concerned with modeling units on Mandarin Chinese ASR tasks using sequence-to-sequence attention-based models with the Transformer. Five modeling units are explored including context-independent phonemes (CI-phonemes), syllables, words, sub-words and characters. Experiments on HKUST datasets demonstrate that the lexicon free modeling units can outperform lexicon related modeling units in terms of character error rate (CER). 
Among five modeling units, character based model performs best and establishes a new state-of-the-art CER of 26.64% on HKUST datasets without a hand-designed lexicon and an extra language model integration, which corresponds to a 4.8% relative improvement over the existing best CER of 28.0% by the joint CTC-attention based encoder-decoder network.", "title": "" }, { "docid": "4d89f159fab9da57bb83a2f68d2f7606", "text": "Label noise-tolerant machine learning techniques address datasets which are affected by mislabelling of the instances. Since labelling quality is a severe issue in particular for large or streaming data sets, this setting becomes more and more relevant in the context of life-long learning, big data and crowd sourcing. In this contribution, we extend a powerful online learning method, soft robust learning vector quantisation, by a probabilistic model for noise tolerance, which is applicable for streaming data, including label-noise drift. The superiority of the technique is demonstrated in several benchmark problems.", "title": "" }, { "docid": "fe8f4f987a28d3e7bff01db3263a740b", "text": "BACKGROUND\nPeople whose chronic pain limits their independence are especially likely to become anxious and depressed. Mindfulness training has shown promise for stress-related disorders.\n\n\nMETHODS\nChronic pain patients who complained of anxiety and depression and who scored higher than moderate in Hamilton Depression Rating Scale (HDRS) and Hospital Anxiety and Depression Scale (HADS) as well as moderate in Quality of Life Scale (QOLS) were observed for eight weeks, three days a week for an hour of Mindfulness Meditation training with an hour daily home Mindfulness Meditation practice. Pain was evaluated on study entry and completion, and patients were given the Patients' Global Impression of Change (PGIC) to score at the end of the training program.\n\n\nRESULTS\nForty-seven patients (47) completed the Mindfulness Meditation Training program. Over the year-long observation, patients demonstrated noticeable improvement in depression, anxiety, pain, and global impression of change.\n\n\nCONCLUSION\nChronic pain patients who suffer with anxiety and depression may benefit from incorporating Mindfulness Meditation into their treatment plans.", "title": "" }, { "docid": "44ff9580f0ad6321827cf3f391a61151", "text": "This paper aims to evaluate the aesthetic visual quality of a special type of visual media: digital images of paintings. Assessing the aesthetic visual quality of paintings can be considered a highly subjective task. However, to some extent, certain paintings are believed, by consensus, to have higher aesthetic quality than others. In this paper, we treat this challenge as a machine learning problem, in order to evaluate the aesthetic quality of paintings based on their visual content. We design a group of methods to extract features to represent both the global characteristics and local characteristics of a painting. Inspiration for these features comes from our prior knowledge in art and a questionnaire survey we conducted to study factors that affect human's judgments. We collect painting images and ask human subjects to score them. These paintings are then used for both training and testing in our experiments. Experimental results show that the proposed work can classify high-quality and low-quality paintings with performance comparable to humans. 
This work provides a machine learning scheme for exploring the relationship between human aesthetic perceptions and the computational visual features extracted from paintings.", "title": "" }, { "docid": "c8ffa511ba6aa4a5b93678b2cc32815d", "text": "Many long-held practices surrounding newborn injections lack evidence and have unintended consequences. The choice of needles, injection techniques, and pain control methods are all factors for decreasing pain and improving the safety of intramuscular injections. Using practices founded on the available best evidence, nurses can reduce pain, improve the quality and safety of care, and set the stage for long-term compliance with vaccination schedules.", "title": "" }, { "docid": "d27ed8fd2acd0dad6436b7e98853239d", "text": "What are the psychological mechanisms that trigger habits in daily life? Two studies reveal that strong habits are influenced by context cues associated with past performance (e.g., locations) but are relatively unaffected by current goals. Specifically, performance contexts—but not goals—automatically triggered strongly habitual behaviors in memory (Experiment 1) and triggered overt habit performance (Experiment 2). Nonetheless, habits sometimes appear to be linked to goals because people self-perceive their habits to be guided by goals. Furthermore, habits of moderate strength are automatically influenced by goals, yielding a curvilinear, U-shaped relation between habit strength and actual goal influence. Thus, research that taps self-perceptions or moderately strong habits may find habits to be linked to goals. Introduction Having cast off the strictures of behaviorism, psychologists are showing renewed interest in the psychological processes that guide habits. This interest is fueled partly by the recognition that automaticity is not a unitary construct. Hence, different kinds of automatic responses may be triggered and controlled in different ways (Bargh, 1994; Moors & De Houwer, 2006). However, the field has not yet converged on a common understanding of the psychological mechanisms that underlie habits. Habits can be defined as psychological dispositions to repeat past behavior. They are acquired gradually as people repeatedly respond in a recurring context (e.g., performance settings, action sequences, Wood & Neal, 2007, 2009). Most researchers agree that habits often originate in goal pursuit, given that people are likely to repeat actions that are rewarding or yield desired outcomes. In addition, habit strength is a continuum, with habits of weak and moderate strength performed with lower frequency and/or in more variable contexts than strong habits. This consensus aside, it remains unclear how goals and context cues influence habit automaticity. Goals are motivational states that (a) define a valued outcome that (b) energizes and directs action (e.g., the goal of getting an A in class energizes late night studying; Förster, Liberman, & Friedman, 2007). In contrast, context cues for habits reflect features of the performance environment in which the response typically occurs (e.g., the college library as a setting for late night studying). 
Some prior research indicates that habits are activated automatically by goals (e.g., Aarts & Dijksterhuis, 2000), whereas others indicate that habits are activated directly by context cues, with minimal influence of goals. In the present experiments, we first test the cognitive associations …", "title": "" }, { "docid": "19f3720d0077783554b6d9cd71e95c48", "text": "Radical prostatectomy is performed on approximately 40% of men with organ-confined prostate cancer. Pathologic information obtained from the prostatectomy specimen provides important prognostic information and guides recommendations for adjuvant treatment. The current pathology protocol in most centers involves primarily qualitative assessment. In this paper, we describe and evaluate our system for automatic prostate cancer detection and grading on hematoxylin & eosin-stained tissue images. Our approach is intended to address the dual challenges of large data size and the need for high-level tissue information about the locations and grades of tumors. Our system uses two stages of AdaBoost-based classification. The first provides high-level tissue component labeling of a superpixel image partitioning. The second uses the tissue component labeling to provide a classification of cancer versus noncancer, and low-grade versus high-grade cancer. We evaluated our system using 991 sub-images extracted from digital pathology images of 50 whole-mount tissue sections from 15 prostatectomy patients. We measured accuracies of 90% and 85% for the cancer versus noncancer and high-grade versus low-grade classification tasks, respectively. This system represents a first step toward automated cancer quantification on prostate digital histopathology imaging, which could pave the way for more accurately informed postprostatectomy patient care.", "title": "" }, { "docid": "90e5eaa383c00a0551a5161f07c683e7", "text": "The importance of the Translation Lookaside Buffer (TLB) on system performance is well known. There have been numerous prior efforts addressing TLB design issues for cutting down access times and lowering miss rates. However, it was only recently that the first exploration [26] on prefetching TLB entries ahead of their need was undertaken and a mechanism called Recency Prefetching was proposed. There is a large body of literature on prefetching for caches, and it is not clear how they can be adapted (or if the issues are different) for TLBs, how well suited they are for TLB prefetching, and how they compare with the recency prefetching mechanism. This paper presents the first detailed comparison of different prefetching mechanisms (previously proposed for caches) - arbitrary stride prefetching, and Markov prefetching - for TLB entries, and evaluates their pros and cons. In addition, this paper proposes a novel prefetching mechanism, called Distance Prefetching, that attempts to capture patterns in the reference behavior in a smaller space than earlier proposals. Using detailed simulations of a wide variety of applications (56 in all) from different benchmark suites and all the SPEC CPU2000 applications, this paper demonstrates the benefits of distance prefetching.", "title": "" }, { "docid": "c61877099eddc31a281fa82fd942072e", "text": "The trend of bring your own device (BYOD) has been rapidly adopted by organizations. Despite the pros and cons of BYOD adoption, this trend is expected to inevitably keep increasing. 
Yet, BYOD has raised significant concerns about information system security as employees use their personal devices to access organizational resources. This study aims to examine employees' intention to comply with an organization’s IS security policy in the context of BYOD. We derived our research model from reactance, protection motivation and organizational justice theories. The results of this study demonstrate that an employee’s perceived response efficacy and perceived justice positively affect an employee’s intention to comply with BYOD security policy. Perceived security threat appraisal was found to marginally promote the intention to comply. Conversely, perceived freedom threat due to imposed security policy negatively affects an employee’s intention to comply with the security policy. We also found that an employee’s perceived cost associated with compliance behavior positively affects an employee’s perceptions of threat to an individual freedom. An interesting double-edged sword effect of a security awareness program was confirmed by the results. BYOD security awareness program increases an employee’s response efficacy (a positive effect) and response cost (a negative effect). The study also demonstrates the importance of having an IT support team for BYOD, as it increases an employee’s response-efficacy and perceived justice.", "title": "" }, { "docid": "23de1573e532223f4158f69f0e889793", "text": "BACKGROUND/AIMS\nAtherosclerosis is a chronic inflammatory disease. Intracellular adhesion molecule-1 (ICAM-1), vascular cellular adhesion molecule-1 (VCAM-1), and monocyte chemoattractant protein-1 (MCP-1) play important roles in inflammatory processes. P38 mitogen-activated protein kinase (MAPK) and nuclear factor (NF)-κB signaling regulate ICAM-1, VCAM-1, and MCP-1 expression. Angiotensin (Ang) II upregulates ICAM-1, VCAM-1, and MCP-1 expression through the P38 MAPK and NF-κB pathways. Ang-(1-7) may oppose the actions of Ang II. We investigated whether Ang-(1-7) prevents Ang II-induced ICAM-1, VCAM-1, and MCP-1 expression in human umbilical vein endothelial cells (HUVECs).\n\n\nMETHODS\nICAM-1, VCAM-1, and MCP-1 expression was estimated by real-time polymerase chain reaction (PCR) and enzyme-linked immunosorbent assay (ELISA); P38, NF-κB, and p-IκB-α expression was estimated by western blotting.\n\n\nRESULTS\nAng-(1-7) inhibited Ang II-induced ICAM-1, VCAM-1, and MCP-1 expression and secretion in HUVECs. Ang II sharply increased P38 MAPK phosphorylation, which was inhibited by pretreatment with Ang-(1-7). Moreover, Ang-(1-7) significantly inhibited Ang II-induced IκB-α phosphorylation and NF-κB P65 nuclear translocation. The MAS receptor antagonist A-779 abolished the suppressive effects of Ang-(1-7).\n\n\nCONCLUSION\nAng-(1-7) attenuates Ang II-induced ICAM-1, VCAM-1, and MCP-1 expression via the MAs receptor by suppressing the P38 and NF-κB pathways in HUVECs. Ang-(1-7) might delay the progression of inflammatory diseases, including atherosclerosis.", "title": "" }, { "docid": "851de4b014dfeb6f470876896b0416b3", "text": "The design of bioinspired systems for chemical sensing is an engaging line of research in machine olfaction. Developments in this line could increase the lifetime and sensitivity of artificial chemo-sensory systems. Such approach is based on the sensory systems known in live organisms, and the resulting developed artificial systems are targeted to reproduce the biological mechanisms to some extent. 
Sniffing behaviour, sampling odours actively, has been studied recently in neuroscience, and it has been suggested that the respiration frequency is an important parameter of the olfactory system, since the odour perception, especially in complex scenarios such as novel odourants exploration, depends on both the stimulus identity and the sampling method. In this work we propose a chemical sensing system based on an array of 16 metal-oxide gas sensors that we combined with an external mechanical ventilator to simulate the biological respiration cycle. The tested gas classes formed a relatively broad combination of two analytes, acetone and ethanol, in binary mixtures. Two sets of low-frequency and high-frequency features were extracted from the acquired signals to show that the high-frequency features contain information related to the gas class. In addition, such information is available at early stages of the measurement, which could make the technique suitable in early detection scenarios. The full data set is made publicly available to the community.", "title": "" }, { "docid": "9c507a2b1f57750d1b4ffeed6979a06f", "text": "Once considered provocative, the notion that the wisdom of the crowd is superior to any individual has become itself a piece of crowd wisdom, leading to speculation that online voting may soon put credentialed experts out of business. Recent applications include political and economic forecasting, evaluating nuclear safety, public policy, the quality of chemical probes, and possible responses to a restless volcano. Algorithms for extracting wisdom from the crowd are typically based on a democratic voting procedure. They are simple to apply and preserve the independence of personal judgment. However, democratic methods have serious limitations. They are biased for shallow, lowest common denominator information, at the expense of novel or specialized knowledge that is not widely shared. Adjustments based on measuring confidence do not solve this problem reliably. Here we propose the following alternative to a democratic vote: select the answer that is more popular than people predict. We show that this principle yields the best answer under reasonable assumptions about voter behaviour, while the standard ‘most popular’ or ‘most confident’ principles fail under exactly those same assumptions. Like traditional voting, the principle accepts unique problems, such as panel decisions about scientific or artistic merit, and legal or historical disputes. The potential application domain is thus broader than that covered by machine learning and psychometric methods, which require data across multiple questions.", "title": "" }, { "docid": "c128546da3d777d52185afbdca8afbe3", "text": "Compared with labeled data, unlabeled data are significantly easier to obtain. Currently, classification of unlabeled data is an open issue. In this paper a novel SVMKNN classification methodology based on Semi-supervised learning is proposed, we consider the problem of using a large number of unlabeled data to boost performance of the classifier when only a small set of labeled examples is available. We use the few labeled data to train a weaker SVM classifier and make use of the boundary vectors to improve the weaker SVM iteratively by introducing KNN. 
Using the KNN classifier not only enlarges the number of training examples, but also improves the quality of the new training examples, which are transformed from the boundary vectors. Experiments on UCI data sets show that the proposed methodology can evidently improve the accuracy of the final SVM classifier by tuning the parameters and can reduce the cost of labeling unlabeled examples.", "title": "" }, { "docid": "94d66ffd9d9c2ccb08be7059075cd018", "text": "Query expansion is generally a useful technique in improving search performance. However, some expanded query terms obtained by traditional statistical methods (e.g., pseudo-relevance feedback) may not be relevant to the user’s information need, while some relevant terms may not be contained in the feedback documents at all. Recent studies utilize external resources to detect terms that are related to the query, and then adopt these terms in query expansion. In this paper, we present a study in the use of Freebase [6], which is an open source general-purpose ontology, as a source for deriving expansion terms. Freebase provides a graph-based model of human knowledge, from which a rich and multi-step structure of instances related to the query concept can be extracted, as a complement to the traditional statistical approaches to query expansion. We propose a novel method, based on the well-principled Dempster-Shafer’s (D-S) evidence theory, to measure the certainty of expansion terms from the Freebase structure. The expanded query model is then combined with a state-of-the-art statistical query expansion model – the Relevance Model (RM3). Experiments show that the proposed method achieves significant improvements over RM3.", "title": "" }, { "docid": "6a0d404aff5059fc482671b497b2b8d0", "text": "OBJECTIVE\nTo identify the effects of laryngeal surgical treatment in the voice of transgender women, especially on the fundamental frequency (f0).\n\n\nSTUDY DESIGN\nWe performed a systematic review in PubMed and Scopus in July 2016, covering the period between 2005 and 2016.\n\n\nMETHODS\nInclusion criteria were studies in English or Portuguese about the laryngeal surgical treatment in transgender women, featuring experimental design, title, year of publication, country of origin, journal of publication, participants, intervention, results. For the meta-analysis, only studies that had a control group were selected. Exclusion criteria were articles that mentioned the use of surgical techniques but did not use the procedure in research, animal studies, studies of revision, and postmortem studies.\n\n\nRESULTS\nFour hundred and twenty-three articles were identified in the initial search; 94 were selected for analysis by two referees, independently. After applying all the selection criteria, five studies remained in the meta-analysis. The surgical procedures that were identified included laryngoplasty with or without thyrohyoid approximation, Wendler glottoplasty, cricothyroid approximation, laser glottoplasty reduction and the vocal fold shortening and retrodisplacement of anterior commissure. There was no significant difference between the experimental group and the control group in relation to f0.\n\n\nCONCLUSION\nNo randomized clinical trials and prospective cohort studies are available, and a small number of retrospective cohort and case-control studies of surgical techniques reveal an increase in the f0. 
The evidence produced is not conclusive regarding which surgical technique would be better for vocal treatment of transgender women.\n\n\nLEVEL OF EVIDENCE\nNA Laryngoscope, 127:2596-2603, 2017.", "title": "" }, { "docid": "5654bea8e2fe999fe52ec7536edd0f52", "text": "Mobile app developers constantly monitor feedback in user reviews with the goal of improving their mobile apps and better meeting user expectations. Thus, automated approaches have been proposed in literature with the aim of reducing the effort required for analyzing feedback contained in user reviews via automatic classification/prioritization according to specific topics. In this paper, we introduce SURF (Summarizer of User Reviews Feedback), a novel approach to condense the enormous amount of information that developers of popular apps have to manage due to user feedback received on a daily basis. SURF relies on a conceptual model for capturing user needs useful for developers performing maintenance and evolution tasks. Then it uses sophisticated summarisation techniques for summarizing thousands of reviews and generating an interactive, structured and condensed agenda of recommended software changes. We performed an end-to-end evaluation of SURF on user reviews of 17 mobile apps (5 of them developed by Sony Mobile), involving 23 developers and researchers in total. Results demonstrate high accuracy of SURF in summarizing reviews and the usefulness of the recommended changes. In evaluating our approach we found that SURF helps developers in better understanding user needs, substantially reducing the time required by developers compared to manually analyzing user (change) requests and planning future software changes.", "title": "" }, { "docid": "3f103ae85438617e950bdc0cea72cd8b", "text": "In this paper, we implement a novel parallelized approach of Local Binary Pattern (LBP) based face recognition algorithm on GPU. High performance rates have been achieved through maximizing the resource exploitation available in the GPU. The launch of GPU programming tools like Open source Computation Language (OpenCL) and (CUDA) have boosted the development of various applications on GPU. In this paper we implement a parallelized LBP algorithm on GPU using OpenCL programming tools. Programs developed under the OpenCL enable us to utilize GPU for general purpose computation with increased performance efficiency in terms of execution time. The experimental results based on the implementation on AMD 6500 GPU processor are observed to increase the computational performance of the system by to 30 folds in case of 1024×1024 images. The relative computational efficiency increases with increase in the size of the Image. This paper addresses several parallelization problems related to memory access and updating, divergent execution paths, understanding and realizing the OpenCL's concurrency and Execution models.", "title": "" }, { "docid": "218f0827d1f49150389b6a08cddc1b1c", "text": "In nature, microorganisms prefer to reside in structured microbial communities, termed biofilms, rather than as free-floating planktonic cells. Advantageous for the microorganisms, but disadvantageous for human health, is the increased resistance/tolerance of the biofilm cells to antimicrobial treatment. In clinically relevant biofilms, Candida albicans is one of the most frequently isolated microorganisms in biofilms. 
This review primarily elaborates on the activity of the currently used antimycotics against Candida biofilms, the potential of antifungal lock therapy and sheds more light on new promising compounds resulting from the gradual shift of anti-biofilm research activities to natural products, plants and their extracts.", "title": "" } ]
scidocsrr
f58b827c69309f317b9324a3a168c487
A simultaneous X/Ka feed system for reflectors with a F/D ratio of 0.8
[ { "docid": "1b13208e3f8b70dbee13cf0bff2203b8", "text": "A variation on an existing antenna feed system for use on simultaneous X/Ka-band satellite ground terminals is presented. The modified design retains the important functionality of the existing feed system, using a simplified approach that aims to significantly reduce the weight and the cost of manufacture.", "title": "" } ]
[ { "docid": "62a7cf86e1e0f36b77cd606e1c3ea1f7", "text": "Mastering 3D Printing shows you how to get the most out of your printer, including how to design models, choose materials, work with different printers, and integrate 3D printing with traditional prototyping to make techniques like sand casting more efficient. you’ve printed key chains. you’ve printed simple toys. now you’re ready to innovate with your 3D printer to start a business or teach and inspire others. Joan horvath has been an educator, engineer, author, and startup 3D printing company team member. She shows you all of the technical details you need to know to go beyond simple model printing to make your 3D printer work for you as a prototyping device, a teaching tool, or a business machine.", "title": "" }, { "docid": "2ada0c045f1f844063629889c6eef679", "text": "Fine-grained address space layout randomization has recently been proposed as a method of efficiently mitigating ROP attacks. In this paper, we introduce a design and implementation of a framework based on a runtime strategy that undermines the benefits of fine-grained ASLR. Specifically, we abuse a memory disclosure to map an application’s memory layout on-the-fly, dynamically discover gadgets and construct the desired exploit payload, and finish our goals by using virtual function call mechanism—all with a script environment at the time an exploit is launched. We demonstrate the effectiveness of our framework by using it in conjunction with a real-world exploit against Internet Explorer and other applications protected by fine-grained ASLR. Moreover, we provide evaluations that demonstrate the practicality of run-time code reuse attacks. Our work shows that such a framework is effective and fine-grained ASLR may not be as promising as first thought. Keywords-code reuse; security; dynamic; fine-grained ASLR", "title": "" }, { "docid": "2dd1c442119003e11959d969186b564c", "text": "—In the context of the public funded project BoniRob we have developed an autonomous agricultural robot that can autonomously perform repeating phenotyping tasks for individual plants on different days. These tasks require a robust and reliable navigation and perception system. In this paper, we summarize our navigation approach and give details about the employed sensors, algorithms and the system integration. The system has successfully been demonstrated at various public occasions and we are now, with our partners, investigating opportunities for a limited-lot production for research and evaluation purposes. An overview of the complete project is given in [1]. Furthermore, we are currently developing the next version of the robot that will allow in addition mechanical weed control.", "title": "" }, { "docid": "c9fc426722df72b247093779ad6e2c0e", "text": "Biped robots have better mobility than conventional wheeled robots, but they tend to tip over easily. To be able to walk stably in various environments, such as on rough terrain, up and down slopes, or in regions containing obstacles, it is necessary for the robot to adapt to the ground conditions with a foot motion, and maintain its stability with a torso motion. When the ground conditions and stability constraint are satisfied, it is desirable to select a walking pattern that requires small torque and velocity of the joint actuators. In this paper, we first formulate the constraints of the foot motion parameters. 
By varying the values of the constraint parameters, we can produce different types of foot motion to adapt to ground conditions. We then propose a method for formulating the problem of the smooth hip motion with the largest stability margin using only two parameters, and derive the hip trajectory by iterative computation. Finally, the correlation between the actuator specifications and the walking patterns is described through simulation studies, and the effectiveness of the proposed methods is confirmed by simulation examples and experimental results.", "title": "" }, { "docid": "392c2499a8d9c0ec2bf329ab92d6ace3", "text": "OBJECTIVE\nCurrent state-of-the-art artificial pancreas systems are either based on traditional linear control theory or rely on mathematical models of glucose-insulin dynamics. Blood glucose control using these methods is limited due to the complexity of the biological system. The aim of this study was to describe the principles and clinical performance of the novel MD-Logic Artificial Pancreas (MDLAP) System.\n\n\nRESEARCH DESIGN AND METHODS\nThe MDLAP applies fuzzy logic theory to imitate lines of reasoning of diabetes caregivers. It uses a combination of control-to-range and control-to-target strategies to automatically regulate individual glucose levels. Feasibility clinical studies were conducted in seven adults with type 1 diabetes (aged 19-30 years, mean diabetes duration 10 +/- 4 years, mean A1C 6.6 +/- 0.7%). All underwent 14 full, closed-loop control sessions of 8 h (fasting and meal challenge conditions) and 24 h.\n\n\nRESULTS\nThe mean peak postprandial (overall sessions) glucose level was 224 +/- 22 mg/dl. Postprandial glucose levels returned to <180 mg/dl within 2.6 +/- 0.6 h and remained stable in the normal range for at least 1 h. During 24-h closed-loop control, 73% of the sensor values ranged between 70 and 180 mg/dl, 27% were >180 mg/dl, and none were <70 mg/dl. There were no events of symptomatic hypoglycemia during any of the trials.\n\n\nCONCLUSIONS\nThe MDLAP system is a promising tool for individualized glucose control in patients with type 1 diabetes. It is designed to minimize high glucose peaks while preventing hypoglycemia. Further studies are planned in the broad population under daily-life conditions.", "title": "" }, { "docid": "992a563958252b3e3f1147d30a4a12b9", "text": "Creating high quality virtual reality (VR) experience takes time and requires extensive practice. Although, multiple virtual and augmented reality courses existed for years all over the world, high costs of the equipment were always in the way of building up the up to date knowledge. Rapid development of the technology caused the new VR boom and exposed a serious lack of the experienced VR/AR developers.\n In this paper, we present our introductory VR courses for master students. The aim of the courses is to provide introduction to the basics of VR and AR and supporting technology. We move the teaching focus from the specifics of a particular VR system to the skill build-up and development of the VR user experience. For that we provide the access to the up to date consumer hardware, such as HTC Vive and Leap Motion. We discuss the structure of the courses and methodology, and provide the teaching materials. Moreover, we discuss in details our updated practical course that unites the development of a low-cost desktop and a high-quality immersive VR applications using only off-the-shelf consumer equipment. 
Furthermore, we discuss the overall course evaluation by the students and further opportunities for their professional growth, as well as consecutive changes that will be made next.", "title": "" }, { "docid": "627b14801c8728adf02b75e8eb62896f", "text": "In the 45 years since Cattell used English trait terms to begin the formulation of his \"description of personality,\" a number of investigators have proposed an alternative structure based on 5 orthogonal factors. The generality of this 5-factor model is here demonstrated across unusually comprehensive sets of trait terms. In the first of 3 studies, 1,431 trait adjectives grouped into 75 clusters were analyzed; virtually identical structures emerged in 10 replications, each based on a different factor-analytic procedure. A 2nd study of 479 common terms grouped into 133 synonym clusters revealed the same structure in 2 samples of self-ratings and in 2 samples of peer ratings. None of the factors beyond the 5th generalized across the samples. In the 3rd study, analyses of 100 clusters derived from 339 trait terms suggest their potential utility as Big-Five markers in future studies.", "title": "" }, { "docid": "e30b861714eb453a360da272ad7a0911", "text": "In this contribution a novel LLCC-PWM inverter is presented for driving ultrasonic high power piezoelectric actuators. The proposed system of a pulse-width modulated inverter and LLCC-type filter is designed in a way to reduce the total harmonic distortion of the motor voltage and to locally compensate for the reactive power of piezoelectric actuators. In order to limit the switching frequency, a pulse width modulation using elimination technique of selected harmonics is designed and implemented on a FPGA. Due to local compensation of reactive power and high dynamic behavior of LLCC PWM inverter, the whole power supply shows an optimal performance at minimized volume and weight compared to LC and LLCC resonant converters.", "title": "" }, { "docid": "d066670bbf58a2c96fa3ef2c037166b1", "text": "Artificial neural networks are applied in many situations. neuralnet is built to train multi-layer perceptrons in the context of regression analyses, i.e. to approximate functional relationships between covariates and response variables. Thus, neural networks are used as extensions of generalized linear models. neuralnet is a very flexible package. The backpropagation algorithm and three versions of resilient backpropagation are implemented and it provides a custom-choice of activation and error function. An arbitrary number of covariates and response variables as well as of hidden layers can theoretically be included. The paper gives a brief introduction to multilayer perceptrons and resilient backpropagation and demonstrates the application of neuralnet using the data set infert, which is contained in the R distribution.", "title": "" }, { "docid": "3c25366758f0e102a1008605eedf8f4d", "text": "Taobao is a network retailer which founded in May 2003 and now is the most popular online retail platform in China with nearly 500 million registered users. More than 60 million people visit Taobao everyday and over 48000 items are sold every minute on this platform. During the expansion progress, Taobao has transformed from a C2C network market into a worldwide E-commerce trading platform including C2C, group purchase, distribution and other electronic commerce modes. And its future strategy is focusing on community, content and local. 
This article studies service and business model of Taobao from five aspects: service description and market context, service supply chain, quality of service, service management system and risk management. An analysis of the present situation of Taobao reveals that it has formed its unique business pattern and raises problems and suggestions. For Taobao stepping into cross-border E-commerce, the article analyses its strength, weakness and points out the direction of its future.", "title": "" }, { "docid": "95ce70e3c893aac8036af7aab1e9c0ac", "text": "Wireless communications is one of the most successful technologies in modern years, given that an exponential growth rate in wireless traffic has been sustained for over a century (known as Cooper's law). This trend will certainly continue, driven by new innovative applications; for example, augmented reality and the Internet of Things. Massive MIMO has been identified as a key technology to handle orders of magnitude more data traffic. Despite the attention it is receiving from the communication community, we have personally witnessed that Massive MIMO is subject to several widespread misunderstandings, as epitomized by following (fictional) abstract: “The Massive MIMO technology uses a nearly infinite number of high-quality antennas at the base stations. By having at least an order of magnitude more antennas than active terminals, one can exploit asymptotic behaviors that some special kinds of wireless channels have. This technology looks great at first sight, but unfortunately the signal processing complexity is off the charts and the antenna arrays would be so huge that it can only be implemented in millimeter-wave bands.” These statements are, in fact, completely false. In this overview article, we identify 10 myths and explain why they are not true. We also ask a question that is critical for the practical adoption of the technology and which will require intense future research activities to answer properly. We provide references to key technical papers that support our claims, while a further list of related overview and technical papers can be found at the Massive MIMO Info Point: http://massivemimo. eu.", "title": "" }, { "docid": "11c8a8b5e99c6150f9d6810b3ab79864", "text": "Finding telecommunications fraud in masses of call records is more difficult than finding a needle in a haystack. In the haystack problem, there is only one needle that does not look like hay, the pieces of hay all look similar, and neither the needle nor the hay changes much over time. Fraudulent calls may be rare like needles in haystacks, but they are much more challenging to find. Callers", "title": "" }, { "docid": "99d9dcef0e4441ed959129a2a705c88e", "text": "Wikipedia has grown to a huge, multi-lingual source of encyclopedic knowledge. Apart from textual content, a large and everincreasing number of articles feature so-called infoboxes, which provide factual information about the articles’ subjects. As the different language versions evolve independently, they provide different information on the same topics. Correspondences between infobox attributes in different language editions can be leveraged for several use cases, such as automatic detection and resolution of inconsistencies in infobox data across language versions, or the automatic augmentation of infoboxes in one language with data from other language versions. We present an instance-based schema matching technique that exploits information overlap in infoboxes across different language editions. 
As a prerequisite we present a graph-based approach to identify articles in different languages representing the same real-world entity using (and correcting) the interlanguage links in Wikipedia. To account for the untyped nature of infobox schemas, we present a robust similarity measure that can reliably quantify the similarity of strings with mixed types of data. The qualitative evaluation on the basis of manually labeled attribute correspondences between infoboxes in four of the largest Wikipedia editions demonstrates the effectiveness of the proposed approach. 1. Entity and Attribute Matching across Wikipedia Languages Wikipedia is a well-known public encyclopedia. While most of the information contained in Wikipedia is in textual form, the so-called infoboxes provide semi-structured, factual information. They are displayed as tables in many Wikipedia articles and state basic facts about the subject. There are different templates for infoboxes, each targeting a specific category of articles and providing fields for properties that are relevant for the respective subject type. For example, in the English Wikipedia, there is a class of infoboxes about companies, one to describe the fundamental facts about countries (such as their capital and population), one for musical artists, etc. However, each of the currently 281 language versions1 defines and maintains its own set of infobox classes with their own set of properties, as well as providing sometimes different values for corresponding attributes. Figure 1 shows extracts of the English and German infoboxes for the city of Berlin. The arrows indicate matches between properties. It is already apparent that matching purely based on property names is futile: The terms Population density and Bevölkerungsdichte or Governing parties and Reg. Parteien have no textual similarity. However, their property values are more revealing: <3,857.6/km2> and <3.875 Einw. je km2> or <SPD/Die Linke> and <SPD und Die Linke> have a high textual similarity, respectively. Email addresses: daniel.rinser@alumni.hpi.uni-potsdam.de (Daniel Rinser), dustin.lange@hpi.uni-potsdam.de (Dustin Lange), naumann@hpi.uni-potsdam.de (Felix Naumann) 1as of March 2011 Our overall goal is to automatically find a mapping between attributes of infobox templates across different language versions. Such a mapping can be valuable for several different use cases: First, it can be used to increase the information quality and quantity in Wikipedia infoboxes, or at least help the Wikipedia communities to do so. Inconsistencies among the data provided by different editions for corresponding attributes could be detected automatically. For example, the infobox in the English article about Germany claims that the population is 81,799,600, while the German article specifies a value of 81,768,000 for the same country. Detecting such conflicts can help the Wikipedia communities to increase consistency and information quality across language versions. Further, the detected inconsistencies could be resolved automatically by fusing the data in infoboxes, as proposed by [1]. Finally, the coverage of information in infoboxes could be increased significantly by completing missing attribute values in one Wikipedia edition with data found in other editions. An infobox template does not describe a strict schema, so that we need to collect the infobox template attributes from the template instances. 
For the purpose of this paper, an infobox template is determined by the set of attributes that are mentioned in any article that reference the template. The task of matching attributes of corresponding infoboxes across language versions is a specific application of schema matching. Automatic schema matching is a highly researched topic and numerous different approaches have been developed for this task as surveyed in [2] and [3]. Among these, schema-level matchers exploit attribute labels, schema constraints, and structural similarities of the schemas. However, in the setting of Wikipedia infoboxes these techniques are not useful, because infobox definitions only describe a rather loose list of supported properties, as opposed to a strict relational or XML schema. Attribute names in infoboxes are not always sound, often cryptic or abbreviated, and the exact semantics of the attributes are not always clear from their names alone. Moreover, due to our multi-lingual scenario, attributes are labeled in different natural languages. This latter problem might be tackled by employing bilingual dictionaries, if the previously mentioned issues were solved. Due to the flat nature of infoboxes and their lack of constraints or types, other constraint-based matching approaches must fail. On the other hand, there are instance-based matching approaches, which leverage instance data of multiple data sources. Here, the basic assumption is that similarity of the instances of the attributes reflects the similarity of the attributes. To assess this similarity, instance-based approaches usually analyze the attributes of each schema individually, collecting information about value patterns and ranges, amongst others, such as in [4]. A different, duplicate-based approach exploits information overlap across data sources [5]. The idea there is to find two representations of same real-world objects (duplicates) and then suggest mappings between attributes that have the same or similar values. This approach has one important requirement: The data sources need to share a sufficient amount of common instances (or tuples, in a relational setting), i.e., instances describing the same real-world entity. Furthermore, the duplicates either have to be known in advance or have to be discovered despite a lack of knowledge of corresponding attributes. The approach presented in this article is based on such duplicate-based matching. Our approach consists of three steps: Entity matching, template matching, and attribute matching. The process is visualized in Fig. 2. (1) Entity matching: First, we find articles in different language versions that describe the same real-world entity. In particular, we make use of the crosslanguage links that are present for most Wikipedia articles and provide links between same entities across different language versions. We present a graph-based approach to resolve conflicts in the linking information. (2) Template matching: We determine a cross-lingual mapping between infobox templates by analyzing template co-occurrences in the language versions. (3) Attribute matching: The infobox attribute values of the corresponding articles are compared to identify matching attributes across the language versions, assuming that the values of corresponding attributes are highly similar for the majority of article pairs.
As a first step we analyze the quality of Wikipedia’s interlanguage links in Sec. 2. We show how to use those links to create clusters of semantically equivalent entities with only one entity from each language in Sec. 3. This entity matching approach is evaluated in Sec. 4. In Sec. 5, we show how a crosslingual mapping between infobox templates can be established. The infobox attribute matching approach is described in Sec. 6 and in turn evaluated in Sec. 7. Related work in the areas of ILLs, concept identification, and infobox attribute matching is discussed in Sec. 8. Finally, Sec. 9 draws conclusions and discusses future work. 2. Interlanguage Links Our basic assumption is that there is a considerable amount of information overlap across the different Wikipedia language editions. Our infobox matching approach presented later requires mappings between articles in different language editions", "title": "" }, { "docid": "3a7427c67b7758516af15da12b663c40", "text": "The initial focus of recombinant protein production by filamentous fungi related to exploiting the extraordinary extracellular enzyme synthesis and secretion machinery of industrial strains, including Aspergillus, Trichoderma, Penicillium and Rhizopus species, was to produce single recombinant protein products. An early recognized disadvantage of filamentous fungi as hosts of recombinant proteins was their common ability to produce homologous proteases which could degrade the heterologous protein product and strategies to prevent proteolysis have met with some limited success. It was also recognized that the protein glycosylation patterns in filamentous fungi and in mammals were quite different, such that filamentous fungi are likely not to be the most suitable microbial hosts for production of recombinant human glycoproteins for therapeutic use. By combining the experience gained from production of single recombinant proteins with new scientific information being generated through genomics and proteomics research, biotechnologists are now poised to extend the biomanufacturing capabilities of recombinant filamentous fungi by enabling them to express genes encoding multiple proteins, including, for example, new biosynthetic pathways for production of new primary or secondary metabolites. It is recognized that filamentous fungi, most species of which have not yet been isolated, represent an enormously diverse source of novel biosynthetic pathways, and that the natural fungal host harboring a valuable biosynthesis pathway may often not be the most suitable organism for biomanufacture purposes. Hence it is expected that substantial effort will be directed to transforming other fungal hosts, non-fungal microbial hosts and indeed non microbial hosts to express some of these novel biosynthetic pathways. But future applications of recombinant expression of proteins will not be confined to biomanufacturing. Opportunities to exploit recombinant technology to unravel the causes of the deleterious impacts of fungi, for example as human, mammalian and plant pathogens, and then to bring forward solutions, is expected to represent a very important future focus of fungal recombinant protein technology.", "title": "" }, { "docid": "8c3f6fcda9965a4dab3936b913c2fe14", "text": "Automatic Number Plate Recognition (ANPR) became a very important tool in our daily life because of the unlimited increase of cars and transportation systems, which make it impossible to be fully managed and monitored by humans. 
Examples are so many, like traffic monitoring, tracking stolen cars, managing parking toll, red-light violation enforcement, border and customs checkpoints. Yet, it’s a very challenging problem, due to the diversity of plate formats, different scales, rotations and non-uniform illumination conditions during image acquisition. The objective of this paper is to provide a novel algorithm for license plate recognition in complex scenes, particularly for the all-day traffic surveillance environment. This is achieved using mathematical morphology and artificial neural network (ANN). A preprocessing step is applied to improve the performance of license plate localization and character segmentation in case of severe imaging conditions. The first and second stages utilize edge detection and mathematical morphology followed by connected component analysis. ANN is employed in the last stage to construct a classifier to categorize the input numbers of the license plate. The algorithm has been applied on 102 car images with different backgrounds, license plate angles, distances, lightening conditions, and colors. The average accuracy of the license plate localization is 97.06%, 95.10% for license plate segmentation, and 94.12% for character recognition. The experimental results show the outstanding detection performance of the proposed method comparing with traditional algorithms.", "title": "" }, { "docid": "9ba6656cb67dcb72d4ebadcaf9450f40", "text": "OBJECTIVE\nThe Japan Ankylosing Spondylitis Society conducted a nationwide questionnaire survey of spondyloarthropathies (SpA) in 1990 and 1997, (1) to estimate the prevalence and incidence, and (2) to validate the criteria of Amor and the European Spondylarthropathy Study Group (ESSG) in Japan.\n\n\nMETHODS\nJapan was divided into 9 districts, to each of which a survey supervisor was assigned. According to unified criteria, each supervisor selected all the clinics and hospitals with potential for SpA patients in the district. The study population consisted of all patients with SpA seen at these institutes during a 5 year period (1985-89) for the 1st survey and a 7 year period (1990-96) for the 2nd survey.\n\n\nRESULTS\nThe 1st survey recruited 426 and the 2nd survey 638 cases, 74 of which were registered in both studies. The total number of patients with SpA identified 1985-96 was 990 (760 men, 227 women). They consisted of patients with ankylosing spondylitis (68.3%), psoriatic arthritis (12.7%), reactive arthritis (4.0%), undifferentiated SpA (5.4%), inflammatory bowel disease (2.2%), pustulosis palmaris et plantaris (4.7%), and others (polyenthesitis, etc.) (0.8%). The maximum onset number per year was 49. With the assumption that at least one-tenth of the Japanese population with SpA was recruited, incidence and prevalence were estimated not to exceed 0.48/100,000 and 9.5/100,000 person-years, respectively. The sensitivity was 84.0% for Amor criteria and 84.6 for ESSG criteria.\n\n\nCONCLUSION\nThe incidence and prevalence of SpA in Japanese were estimated to be less than 1/10 and 1/200, respectively, of those among Caucasians. The adaptability of the Amor and ESSG criteria was validated for the Japanese population.", "title": "" }, { "docid": "dfaccd0aa36efbafe5cb1101f9d4f93e", "text": "At present, the modern manufacturing and management concepts such as digitalization, networking and intellectualization have been popularized in the industry, and the degree of industrial automation and information has been improved unprecedentedly. 
Industrial products are everywhere in the world. They are involved in design, manufacture, operation, maintenance and recycling. The whole life cycle involves huge amounts of data. Improving data quality is very important for data mining and data analysis. To solve the problem of data inconsistency is a very important part of improving data quality.", "title": "" }, { "docid": "261f146b67fd8e13d1ad8c9f6f5a8845", "text": "Vision based automatic lane tracking system requires information such as lane markings, road curvature and leading vehicle be detected before capturing the next image frame. Placing a camera on the vehicle dashboard and capturing the forward view results in a perspective view of the road image. The perspective view of the captured image somehow distorts the actual shape of the road, which involves the width, height, and depth. Respectively, these parameters represent the x, y and z components. As such, the image needs to go through a pre-processing stage to remedy the distortion using a transformation technique known as an inverse perspective mapping (IPM). This paper outlines the procedures involved.", "title": "" }, { "docid": "af254a16b14a3880c9b8fe5b13f1a695", "text": "MOOCs or Massive Online Open Courses based on Open Educational Resources (OER) might be one of the most versatile ways to offer access to quality education, especially for those residing in far or disadvantaged areas. This article analyzes the state of the art on MOOCs, exploring open research questions and setting interesting topics and goals for further research. Finally, it proposes a framework that includes the use of software agents with the aim to improve and personalize management, delivery, efficiency and evaluation of massive online courses on an individual level basis.", "title": "" } ]
scidocsrr
3842897afd2e6fe30f8e5f8a23f96fb7
RQ-RDF-3X: Going beyond triplestores
[ { "docid": "a0501b0b3ba110692f9b162ce5f72c05", "text": "RDF and related Semantic Web technologies have been the recent focus of much research activity. This work has led to new specifications for RDF and OWL. However, efficient implementations of these standards are needed to realize the vision of a world-wide semantic Web. In particular, implementations that scale to large, enterprise-class data sets are required. Jena2 is the second generation of Jena, a leading semantic web programmers’ toolkit. This paper describes the persistence subsystem of Jena2 which is intended to support large datasets. This paper describes its features, the changes from Jena1, relevant details of the implementation and performance tuning issues. Query optimization for RDF is identified as a promising area for future research.", "title": "" } ]
[ { "docid": "a881ace343eceee214d5f9bd6c8ccf5f", "text": "Series elastic actuation (SEA) has been widely used for output force/torque regulation, however, the nonlinearity introduced by its torsion spring hysteresis will greatly affect the quality of feedback control. Thus an experimental procedure is conducted for the modeling and identification of spring hysteresis effect, based on which a compensated control scheme is given. Experiment results show that the compensated control can improve the control performance compared with ordinary linear controller.", "title": "" }, { "docid": "3d11d784839910fdc1d2093db3d7c762", "text": "This paper presents a detailed investigation of the influence of pin gap size on the S-parameters of the 1.85 mm connector. In contrast to earlier publications connector geometry is simulated with all chamfers, gaps and contact fingers. Simulation results are verified by cross-checking between finite element frequency domain and finite difference time domain methods. Based on reliable simulation results, a very fast tool was developed to calculate S-parameters for a given connector geometry. This was done using database and interpolation techniques. The most important result is that very small pin gaps in conjunction with large chamfers have a drastic impact on connector S-parameters for frequencies above 50 GHz.", "title": "" }, { "docid": "b4d27850fecbc5d2154fdf1ac5e03f6a", "text": "Measuring free-living peoples’ food intake represents methodological and technical challenges. The Remote Food Photography Method (RFPM) involves participants capturing pictures of their food selection and plate waste and sending these pictures to the research center via a wireless network, where they are analyzed by Registered Dietitians to estimate food intake. Initial tests indicate that the RFPM is reliable and valid, though the efficiency of the method is limited due to the reliance on human raters to estimate food intake. Herein, we describe the development of a semi-automated computer imaging application to estimate food intake based on pictures captured by participants.", "title": "" }, { "docid": "5f1438ee189ab3f3dee9ae17071c8387", "text": "Neurodevelopmental disorders, including autism spectrum disorder (ASD), are defined by core behavioral impairments; however, subsets of individuals display a spectrum of gastrointestinal (GI) abnormalities. We demonstrate GI barrier defects and microbiota alterations in the maternal immune activation (MIA) mouse model that is known to display features of ASD. Oral treatment of MIA offspring with the human commensal Bacteroides fragilis corrects gut permeability, alters microbial composition, and ameliorates defects in communicative, stereotypic, anxiety-like and sensorimotor behaviors. MIA offspring display an altered serum metabolomic profile, and B. fragilis modulates levels of several metabolites. Treating naive mice with a metabolite that is increased by MIA and restored by B. fragilis causes certain behavioral abnormalities, suggesting that gut bacterial effects on the host metabolome impact behavior. Taken together, these findings support a gut-microbiome-brain connection in a mouse model of ASD and identify a potential probiotic therapy for GI and particular behavioral symptoms in human neurodevelopmental disorders.", "title": "" }, { "docid": "59eb15885307870ee9270582f79b9cc0", "text": "Vulnerable Android applications are traditionally exploited via malicious apps. 
In this paper, we study an underexplored class of Android attacks which do not require the user to install malicious apps, but merely to visit a malicious website in an Android browser. We call them web-to-app injection (or W2AI) attacks, and distinguish between different categories of W2AI sideeffects. To estimate their prevalence, we present an automated W2AIScanner to find and confirm W2AI vulnerabilities. Analyzing real apps from the official Google Play store – we found 286 confirmed vulnerabilities in 134 distinct applications. Our findings suggest that these attacks are pervasive and developers do not adequately protect apps against them. Our tool employs a novel combination of static analysis and symbolic execution with dynamic testing. We show through experiments that this design significantly enhances the detection accuracy compared with an existing state-of-the-art analysis.", "title": "" }, { "docid": "06ca9b3cdeeae59e67d25235ee410f73", "text": "Since many years ago, the scientific community is concerned about how to increase the accuracy of different classification methods, and major achievements have been made so far. Besides this issue, the increasing amount of data that is being generated every day by remote sensors raises more challenges to be overcome. In this work, a tool within the scope of InterIMAGE Cloud Platform (ICP), which is an open-source, distributed framework for automatic image interpretation, is presented. The tool, named ICP: Data Mining Package, is able to perform supervised classification procedures on huge amounts of data, usually referred as big data, on a distributed infrastructure using Hadoop MapReduce. The tool has four classification algorithms implemented, taken from WEKA’s machine learning library, namely: Decision Trees, Naïve Bayes, Random Forest and Support Vector Machines (SVM). The results of an experimental analysis using a SVM classifier on data sets of different sizes for different cluster configurations demonstrates the potential of the tool, as well as aspects that affect its performance. * Corresponding author", "title": "" }, { "docid": "0e0b0b6b0fdab06fa9d3ebf6a8aefd6b", "text": "Hippocampal place fields have been shown to reflect behaviorally relevant aspects of space. For instance, place fields tend to be skewed along commonly traveled directions, they cluster around rewarded locations, and they are constrained by the geometric structure of the environment. We hypothesize a set of design principles for the hippocampal cognitive map that explain how place fields represent space in a way that facilitates navigation and reinforcement learning. In particular, we suggest that place fields encode not just information about the current location, but also predictions about future locations under the current transition distribution. Under this model, a variety of place field phenomena arise naturally from the structure of rewards, barriers, and directional biases as reflected in the transition policy. Furthermore, we demonstrate that this representation of space can support efficient reinforcement learning. We also propose that grid cells compute the eigendecomposition of place fields in part because is useful for segmenting an enclosure along natural boundaries. When applied recursively, this segmentation can be used to discover a hierarchical decomposition of space. 
Thus, grid cells might be involved in computing subgoals for hierarchical reinforcement learning.", "title": "" }, { "docid": "faac043b0c32bad5a44d52b93e468b78", "text": "Comparative genomic analyses of primates offer considerable potential to define and understand the processes that mold, shape, and transform the human genome. However, primate taxonomy is both complex and controversial, with marginal unifying consensus of the evolutionary hierarchy of extant primate species. Here we provide new genomic sequence (~8 Mb) from 186 primates representing 61 (~90%) of the described genera, and we include outgroup species from Dermoptera, Scandentia, and Lagomorpha. The resultant phylogeny is exceptionally robust and illuminates events in primate evolution from ancient to recent, clarifying numerous taxonomic controversies and providing new data on human evolution. Ongoing speciation, reticulate evolution, ancient relic lineages, unequal rates of evolution, and disparate distributions of insertions/deletions among the reconstructed primate lineages are uncovered. Our resolution of the primate phylogeny provides an essential evolutionary framework with far-reaching applications including: human selection and adaptation, global emergence of zoonotic diseases, mammalian comparative genomics, primate taxonomy, and conservation of endangered species.", "title": "" }, { "docid": "7f53f16a4806d8179725cd9aa4537800", "text": "Corpus linguistics is one of the fastest-growing methodologies in contemporary linguistics. In a conversational format, this article answers a few questions that corpus linguists regularly face from linguists who have not used corpus-based methods so far. It discusses some of the central assumptions (‘formal distributional differences reflect functional differences’), notions (corpora, representativity and balancedness, markup and annotation), and methods of corpus linguistics (frequency lists, concordances, collocations), and discusses a few ways in which the discipline still needs to mature. At a recent LSA meeting ... [with an obvious bow to Frederick Newmeyer] Question: So, I hear you’re a corpus linguist. Interesting, I get to see more and more abstracts and papers and even job ads where experience with corpus-based methods are mentioned, but I actually know only very little about this area. So, what’s this all about? Answer: Yes, it’s true, it’s really an approach that’s gaining more and more prominence in the field. In an editorial of the flagship journal of the discipline, Joseph (2004:382) actually wrote ‘we seem to be witnessing as well a shift in the way some linguists find and utilize data – many papers now use corpora as their primary data, and many use internet data’. Question: My impression exactly. Now, you say ‘approach’, but that’s something I’ve never really understood. Corpus linguistics – is that a theory or model or a method or what? Answer: Good question and, as usual, people differ in their opinions. One well-known corpus linguist, for example, considers corpus linguistics – he calls it computer corpus linguistics – a ‘new philosophical approach [...]’ Leech (1992:106). Many others, including myself, consider it a method(ology), no more, but also no less (cf. McEnery et al. 2006:7f ). However, I don’t think this difference would result in many practical differences. Taylor (2008) discusses this issue in more detail, and for an amazingly comprehensive overview of how huge and diverse the field has become, cf. Lüdeling and Kytö (2008, 2009). Question: Hm ... 
But if you think corpus linguistics is a methodology, .... Well, let me ask you this: usually, linguists try to interpret the data they investigate against the background of some theory. Generative grammarians interpret their acceptability judgments within Government and Binding Theory or the Minimalist Program; some psycholinguists interpret their reaction time data within, for example, a connectionist interactive activation model – now if corpus linguistics is only a methodology, then what is the theory within which you interpret your findings? Answer: Again as usual, there’s no simple answer to this question; it depends .... There are different perspectives one can take. One is that many corpus linguists would perhaps even say that for them, linguistic theory is not of the same prime importance as it is in, for example, generative approaches. Correspondingly, I think it’s fair to say that a large body of corpus-linguistic work has a rather descriptive or applied focus and does actually not involve much linguistic theory. Another one is that corpus linguistic methods are a method just as acceptability judgments, experimental data, etc. and that linguists of every theoretical persuasion can use corpus data. If a linguist investigates how lexical items become more and more used as grammatical markers in a corpus, then the results are descriptive and/or most likely interpreted within some form of grammaticalization theory. If a linguist studies how German second language learners of English acquire the formation of complex clauses, then he will either just describe what he finds or interpret it within some theory of second language acquisition and so on... . There’s one other, more general way to look at it, though. I can of course not speak for all corpus linguists, but I myself think that a particular kind of linguistic theory is actually particularly compatible with corpus-linguistic methods. These are usage-based cognitive-linguistic theories, and they’re compatible with corpus linguistics in several ways. (You’ll find some discussion in Schönefeld 1999.) First, the units of language assumed in cognitive linguistics and corpus linguistics are very similar: what is a unit in probably most versions of cognitive linguistics or construction grammar is a symbolic unit or a construction, which is an element that covers morphemes, words, etc. Such symbolic units or constructions are often defined broadly enough to match nearly all of the relevant corpus-linguistic notions (cf. Gries 2008a): collocations, colligations, phraseologisms, .... Lastly, corpus-linguistic analyses are always based on the evaluation of some kind of frequencies, and frequency as well as its supposed mental correlate of cognitive entrenchment is one of several central key explanatory mechanisms within cognitively motivated approaches (cf., e.g. Bybee and Hopper 1997; Barlow and Kemmer 2000; Ellis 2002a,b; Goldberg 2006). Question: Wait a second – ‘corpus-linguistic analyses are always based on the evaluation of some kind of frequencies?’ What does that mean? I mean, most linguistic research I know is not about frequencies at all – if corpus linguistics is all about frequencies, then what does corpus linguistics have to contribute? Answer: Well, many corpus linguists would probably not immediately agree to my statement, but I think it’s true anyway.
There are two things to be clarified here. First, frequency of what? The answer is, there are no meanings, no functions, no concepts in corpora – corpora are (usually text) files and all you can get out of such files is distributional (or quantitative/statistical) information: – frequencies of occurrence of linguistic elements, i.e. how often morphemes, words, grammatical patterns etc. occur in (parts of) a corpus, etc.; this information is usually represented in so-called frequency lists; – frequencies of co-occurrence of these elements, i.e. how often morphemes occur with particular words, how often particular words occur in a certain grammatical construction, etc.; this information is mostly shown in so-called concordances in which all occurrences of, say, the word searched for are shown in their respective contexts. Figure 1 is an example. As a linguist, you don’t just want to talk about frequencies or distributional information, which is why corpus linguists must make a particular fundamental assumption or a conceptual leap, from frequencies to the things linguists are interested in, but frequencies is where it all starts. Second, what kind of frequency? The answer is that the notion frequency doesn’t presuppose that the relevant linguistic phenomenon occurs in a corpus 100 or 1000 times – the notion of frequency also includes phenomena that occur only once or not at all. For example, there are statistical methods and models out there that can handle non-occurrence or estimate frequencies of unseen items. Thus, corpus linguistics is concerned with whether – something (an individual element or the co-occurrence of more than one individual element) is attested in corpora; i.e. whether the observed frequency (of occurrence or co-occurrence) is 0 or larger; – something is attested in corpora more often than something else; i.e. whether an observed frequency is larger than the observed frequency of something else; – something is observed more or less often than you would expect by chance [this is a more profound issue than it may seem at first; Stefanowitsch (2006) discusses this in more detail]. This also implies that statistical methods can play a large part in corpus linguistics, but this is one area where I think the discipline must still mature or evolve. (Fig. 1. A concordance output from AntConc 3.2.2w.) Question: What do you mean? Answer: Well, this is certainly a matter of debate, but I think that a field that developed in part out of a dissatisfaction concerning methods and data in linguistics ought to be very careful as far as its own methods and data are concerned. It is probably fair to say that many linguists turned to corpus data because they felt there must be more to data collection than researchers intuiting acceptability judgments about what one can say and what one cannot; cf. Labov (1975) and, say, Wasow and Arnold (2005:1485) for discussion and exemplification of the mismatch between the reliability of judgment data by prominent linguists of that time and the importance that was placed on them, as well as McEnery and Wilson (2001: Ch.
1), Sampson (2001: Chs 2, 8, and 10), and the special issue of Corpus Linguistics and Linguistic Theory (CLLT ) 5.1 (2008) on corpus linguistic positions regarding many of Chomsky’s claims in general and the method of acceptability judgments in particular. However, since corpus data only provide distributional information in the sense mentioned earlier, this also means that corpus data must be evaluated with tools that have been designed to deal with distributional information and the discipline that provides such tools is statistics. And this is actually completely natural: psychologists and psycholinguists undergo comprehensive training in experimental methods and the statistical tools relevant to these methods so it’s only fair that corpus linguists do the same in their domain. After all, it would be kind of a double standard to on the one hand bash many theoretical li", "title": "" }, { "docid": "2ec9ac2c283fa0458eb97d1e359ec358", "text": "Multiple automakers have in development or in production automated driving systems (ADS) that offer freeway-pilot functions. This type of ADS is typically limited to restricted-access freeways only, that is, the transition from manual to automated modes takes place only after the ramp merging process is completed manually. One major challenge to extend the automation to ramp merging is that the automated vehicle needs to incorporate and optimize long-term objectives (e.g. successful and smooth merge) when near-term actions must be safely executed. Moreover, the merging process involves interactions with other vehicles whose behaviors are sometimes hard to predict but may influence the merging vehicle's optimal actions. To tackle such a complicated control problem, we propose to apply Deep Reinforcement Learning (DRL) techniques for finding an optimal driving policy by maximizing the long-term reward in an interactive environment. Specifically, we apply a Long Short-Term Memory (LSTM) architecture to model the interactive environment, from which an internal state containing historical driving information is conveyed to a Deep Q-Network (DQN). The DQN is used to approximate the Q-function, which takes the internal state as input and generates Q-values as output for action selection. With this DRL architecture, the historical impact of interactive environment on the long-term reward can be captured and taken into account for deciding the optimal control policy. The proposed architecture has the potential to be extended and applied to other autonomous driving scenarios such as driving through a complex intersection or changing lanes under varying traffic flow conditions.", "title": "" }, { "docid": "9d4dad7aea5935253661141ac806f62a", "text": "A growing body of literature suggests that people often turn to religion when coping with stressful events. However, studies on the efficacy of religious coping for people dealing with stressful situations have yielded mixed results. No published studies to date have attempted to quantitatively synthesize the research on religious coping and psychological adjustment to stress. The purpose of the current study was to synthesize the research on situation-specific religious coping methods and quantitatively determine their efficacy for people dealing with stressful situations. A meta-analysis of 49 relevant studies with a total of 105 effect sizes was conducted in order to quantitatively examine the relationship between religious coping and psychological adjustment to stress. 
Four types of relationships were investigated: positive religious coping with positive psychological adjustment, positive religious coping with negative psychological adjustment, negative religious coping with positive psychological adjustment, and negative religious coping with negative psychological adjustment. The results of the study generally supported the hypotheses that positive and negative forms of religious coping are related to positive and negative psychological adjustment to stress, respectively. Implications of the findings and their limitations are discussed.", "title": "" }, { "docid": "d83d672642531e1744afe77ed8379b90", "text": "Customer churn prediction in Telecom Industry is a core research topic in recent years. A huge amount of data is generated in Telecom Industry every minute. On the other hand, there is lots of development in data mining techniques. Customer churn has emerged as one of the major issues in Telecom Industry. Telecom research indicates that it is more expensive to gain a new customer than to retain an existing one. In order to retain existing customers, Telecom providers need to know the reasons of churn, which can be realized through the knowledge extracted from Telecom data. This paper surveys the commonly used data mining techniques to identify customer churn patterns. The recent literature in the area of predictive data mining techniques in customer churn behavior is reviewed and a discussion on the future research directions is offered.", "title": "" }, { "docid": "5b9d8b0786691f68659bcce2e6803cdb", "text": "We introduce SentEval, a toolkit for evaluating the quality of universal sentence representations. SentEval encompasses a variety of tasks, including binary and multi-class classification, natural language inference and sentence similarity. The set of tasks was selected based on what appears to be the community consensus regarding the appropriate evaluations for universal sentence representations. The toolkit comes with scripts to download and preprocess datasets, and an easy interface to evaluate sentence encoders. The aim is to provide a fairer, less cumbersome and more centralized way for evaluating sentence representations.", "title": "" }, { "docid": "cec597fa08571d3ff7d8a80b9ded1745", "text": "According to the Merriam-Webster dictionary, satire is a trenchant wit, irony, or sarcasm used to expose and discredit vice or folly. Though it is an important language aspect used in everyday communication, the study of satire detection in natural text is often ignored. In this paper, we identify key value components and features for automatic satire detection. Our experiments have been carried out on three datasets, namely, tweets, product reviews and newswire articles. We examine the impact of a number of state-of-the-art features as well as new generalized textual features. By using these features, we outperform the state of the art by a significant 6% margin.", "title": "" }, { "docid": "be9fc2798c145abe70e652b7967c3760", "text": "Given semantic descriptions of object classes, zero-shot learning aims to accurately recognize objects of the unseen classes, from which no examples are available at the training stage, by associating them to the seen classes, from which labeled examples are provided. We propose to tackle this problem from the perspective of manifold learning. Our main idea is to align the semantic space that is derived from external information to the model space that concerns itself with recognizing visual features. 
To this end, we introduce a set of \"phantom\" object classes whose coordinates live in both the semantic space and the model space. Serving as bases in a dictionary, they can be optimized from labeled data such that the synthesized real object classifiers achieve optimal discriminative performance. We demonstrate superior accuracy of our approach over the state of the art on four benchmark datasets for zero-shot learning, including the full ImageNet Fall 2011 dataset with more than 20,000 unseen classes.", "title": "" }, { "docid": "c29586780948b05929bed472bccb48e3", "text": "Recognition and perception based mobile applications, such as image recognition, are on the rise. These applications recognize the user's surroundings and augment it with information and/or media. These applications are latency-sensitive. They have a soft-realtime nature - late results are potentially meaningless. On the one hand, given the compute-intensive nature of the tasks performed by such applications, execution is typically offloaded to the cloud. On the other hand, offloading such applications to the cloud incurs network latency, which can increase the user-perceived latency. Consequently, edge computing has been proposed to let devices offload intensive tasks to edge servers instead of the cloud, to reduce latency. In this paper, we propose a different model for using edge servers. We propose to use the edge as a specialized cache for recognition applications and formulate the expected latency for such a cache. We show that using an edge server like a typical web cache, for recognition applications, can lead to higher latencies. We propose Cachier, a system that uses the caching model along with novel optimizations to minimize latency by adaptively balancing load between the edge and the cloud, by leveraging spatiotemporal locality of requests, using offline analysis of applications, and online estimates of network conditions. We evaluate Cachier for image-recognition applications and show that our techniques yield 3x speedup in responsiveness, and perform accurately over a range of operating conditions. To the best of our knowledge, this is the first work that models edge servers as caches for compute-intensive recognition applications, and Cachier is the first system that uses this model to minimize latency for these applications.", "title": "" }, { "docid": "f90fcd27a0ac4a22dc5f229f826d64bf", "text": "While deep reinforcement learning (deep RL) agents are effective at maximizing rewards, it is often unclear what strategies they use to do so. In this paper, we take a step toward explaining deep RL agents through a case study using Atari 2600 environments. In particular, we focus on using saliency maps to understand how an agent learns and executes a policy. We introduce a method for generating useful saliency maps and use it to show 1) what strong agents attend to, 2) whether agents are making decisions for the right or wrong reasons, and 3) how agents evolve during learning. We also test our method on non-expert human subjects and find that it improves their ability to reason about these agents. Overall, our results show that saliency information can provide significant insight into an RL agent’s decisions and learning behavior.", "title": "" }, { "docid": "7b4140cb95fbaae6e272326ab59fb884", "text": "Network intrusion detection systems (NIDSs) play a crucial role in defending computer networks. 
However, there are concerns regarding the feasibility and sustainability of current approaches when faced with the demands of modern networks. More specifically, these concerns relate to the increasing levels of required human interaction and the decreasing levels of detection accuracy. This paper presents a novel deep learning technique for intrusion detection, which addresses these concerns. We detail our proposed nonsymmetric deep autoencoder (NDAE) for unsupervised feature learning. Furthermore, we also propose our novel deep learning classification model constructed using stacked NDAEs. Our proposed classifier has been implemented in graphics processing unit (GPU)-enabled TensorFlow and evaluated using the benchmark KDD Cup ’99 and NSL-KDD datasets. Promising results have been obtained from our model thus far, demonstrating improvements over existing approaches and the strong potential for use in modern NIDSs.", "title": "" }, { "docid": "dd32879d2b030aa4853f635504afdd98", "text": "A recent addition to Microsoft's Xbox Live Marketplace is a recommender system which allows users to explore both movies and games in a personalized context. The system largely relies on implicit feedback, and runs on a large scale, serving tens of millions of daily users. We describe the system design, and review the core recommendation algorithm.", "title": "" }, { "docid": "6e28ce874571ef5db8f5e44ff78488d2", "text": "The importance of the maintenance function has increased because of its role in keeping and improving system availability and safety, as well as product quality. To support this role, the development of the communication and information technologies has allowed the emergence of the concept of e-maintenance. Within the era of e-manufacturing and e-business, e-maintenance provides the opportunity for a new maintenance generation. As we will discuss later in this paper, e-maintenance integrates existing telemaintenance principles, with Web services and modern e-collaboration principles. Collaboration allows to share and exchange not only information but also knowledge and (e)-intelligence. By means of a collaborative environment, pertinent knowledge and intelligence become available and usable at the right place and time, in order to facilitate reaching the best maintenance decisions. This paper outlines the basic ideas within the e-maintenance concept and then provides an overview of the current research and challenges in this emerging field. An underlying objective is to identify the industrial/academic actors involved in the technological, organizational or management issues related to the development of e-maintenance. Today, this heterogeneous community has to be federated in order to bring up e-maintenance as a new scientific discipline. r 2007 Elsevier Ltd. All rights reserved.", "title": "" } ]
scidocsrr
7c9397b29891bb243f569476161ef9a8
Hierarchical spatio-temporal context modeling for action recognition
[ { "docid": "73b239e6449d82c0d9b1aaef0e9e1d23", "text": "While navigating in an environment, a vision system has to be able to recognize where it is and what the main objects in the scene are. In this paper we present a contextbased vision system for place and object recognition. The goal is to identify familiar locations (e.g., office 610, conference room 941, Main Street), to categorize new environments (office, corridor, street) and to use that information to provide contextual priors for object recognition (e.g., tables are more likely in an office than a street). We present a low-dimensional global image representation that provides relevant information for place recognition and categorization, and show how such contextual information introduces strong priors that simplify object recognition. We have trained the system to recognize over 60 locations (indoors and outdoors) and to suggest the presence and locations of more than 20 different object types. The algorithm has been integrated into a mobile system that provides realtime feedback to the user.", "title": "" } ]
[ { "docid": "d7cc1619647d83911ad65fac9637ef03", "text": "We analyze the increasing threats against IoT devices. We show that Telnet-based attacks that target IoT devices have rocketed since 2014. Based on this observation, we propose an IoT honeypot and sandbox, which attracts and analyzes Telnet-based attacks against various IoT devices running on different CPU architectures such as ARM, MIPS, and PPC. By analyzing the observation results of our honeypot and captured malware samples, we show that there are currently at least 4 distinct DDoS malware families targeting Telnet-enabled IoT devices and one of the families has quickly evolved to target more devices with as many as 9 different CPU architectures.", "title": "" }, { "docid": "68fa8199b92bf8280856138f13c5456a", "text": "To enhance the resolution and accuracy of depth data, some video-based depth super-resolution methods have been proposed, which utilizes its neighboring depth images in the temporal domain. They often consist of two main stages: motion compensation of temporally neighboring depth images and fusion of compensated depth images. However, large displacement 3D motion often leads to compensation error, and the compensation error is further introduced into the fusion. A video-based depth super-resolution method with novel motion compensation and fusion approaches is proposed in this paper. We claim that 3D nearest neighboring field (NNF) is a better choice than using positions with true motion displacement for depth enhancements. To handle large displacement 3D motion, the compensation stage utilized 3D NNF instead of true motion used in the previous methods. Next, the fusion approach is modeled as a regression problem to predict the super-resolution result efficiently for each depth image by using its compensated depth images. A new deep convolutional neural network architecture is designed for fusion, which is able to employ a large amount of video data for learning the complicated regression function. We comprehensively evaluate our method on various RGB-D video sequences to show its superior performance.", "title": "" }, { "docid": "aab2126d980eb594c3c831971d7e3ba9", "text": "IP traceback can be used to find the origin of anonymous traffic; however, Internet-scale IP traceback systems have not been deployed due to a need for cooperation between Internet Service Providers (ISPs). This article presents an Internet-scale Passive IP Trackback (PIT) mechanism that does not require ISP deployment. PIT analyzes the ICMP messages that may scattered to a network telescope as spoofed packets travel from attacker to victim. An Internet route model is then used to help re-construct the attack path. Applying this mechanism to data collected by Cooperative Association for Internet Data Analysis (CAIDA), we found PIT can construct a trace tree from at least one intermediate router in 55.4% the fiercest packet spoofing attacks, and can construct a tree from at least 10 routers in 23.4% of attacks. This initial result shows PIT is a promising mechanism.", "title": "" }, { "docid": "8fa135e5d01ba2480dea4621ceb1e9f4", "text": "With the advent of Grid and application technologies, scientists and engineers are building more and more complex applications to manage and process large data sets, and execute scientific experiments on distributed resources. Such application scenarios require means for composing and executing complex workflows. 
Therefore, many efforts have been made towards the development of workflow management systems for Grid computing. In this paper, we propose a taxonomy that characterizes and classifies various approaches for building and executing workflows on Grids. The taxonomy not only highlights the design and engineering similarities and differences of state-of-the-art in Grid workflow systems, but also identifies the areas that need further research.", "title": "" }, { "docid": "73b8f2d4b5ed12a50d7b8c775b700718", "text": "Recent \"connectionist\" models provide a new explanatory alternative to the digital computer as a model for brain function. Evidence from our EEG research on the olfactory bulb suggests that the brain may indeed use computational mechanisms like those found in connectionist models. In the present paper we discuss our data and develop a model to describe the neural dynamics responsible for odor recognition and discrimination. The results indicate the existence of sensoryand motor-specific information in the spatial dimension of EEG activity and call for new physiological metaphors and techniques of analysis. Special emphasis is placed in our model on chaotic neural activity. We hypothesize that chaotic behavior serves as the essential ground state for the neural perceptual apparatus, and we propose a mechanism for acquiring new forms of patterned activity corresponding to new learned odors. Finally, some of the implications of our neural model for behavioral theories are briefly discussed. Our research, in concert with the connectionist work, encourages a reevaluation of explanatory models that are based only on the digital computer metaphor.", "title": "" }, { "docid": "4348c83744962fcc238e7f73abecfa5e", "text": "We introduce MeSys, a meaning-based approach, for solving English math word problems (MWPs) via understanding and reasoning in this paper. It first analyzes the text, transforms both body and question parts into their corresponding logic forms, and then performs inference on them. The associated context of each quantity is represented with proposed role-tags (e.g., nsubj, verb, etc.), which provides the flexibility for annotating an extracted math quantity with its associated context information (i.e., the physical meaning of this quantity). Statistical models are proposed to select the operator and operands. A noisy dataset is designed to assess if a solver solves MWPs mainly via understanding or mechanical pattern matching. Experimental results show that our approach outperforms existing systems on both benchmark datasets and the noisy dataset, which demonstrates that the proposed approach understands the meaning of each quantity in the text more.", "title": "" }, { "docid": "99a6bd86719cf6c2f41c83f0a0f5fcbb", "text": "Commercial buildings are significant consumers of electrical power. Also, energy expenses are an increasing cost factor. Many companies therefore want to save money and reduce their power usage. Building administrators have to first understand the power consumption behavior, before they can devise strategies to save energy. Second, sudden unexpected changes in power consumption may hint at device failures of critical technical infrastructure. The goal of our research is to enable the analyst to understand the power consumption behavior and to be aware of unexpected power consumption values. In this paper, we introduce a novel unsupervised anomaly detection algorithm and visualize the resulting anomaly scores to guide the analyst to important time points. 
Different possibilities for visualizing the power usage time series are presented, combined with a discussion of the design choices to encode the anomaly values. Our methods are applied to real-world time series of power consumption, logged in a hierarchical sensor network. & 2013 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "0068289b5e8d70ca3a0949268f3e43fc", "text": "A novel flexible pressure sensor has been successfully developed for detecting varying applied pressures. Polydimethylsiloxane (PDMS) and eutectic gallium indium (EGaIn) based liquid metal, as the conductive electrodes, were used to fabricate the sensor. The sensor was designed with four capacitors (C1, C2, C3 and C4) which were formed by overlapping the liquid metal based electrodes. The capability of the fabricated capacitive pressure sensor was investigated by applying varying pressures. A maximum average capacitance change of 10.14%, 11.56%, 11.57% and 11.82% was obtained for C1, C2, C3 and C4 respectively, when pressures were applied from 0.25 MPa to 1.1 MPa. A sensitivity of 0.11%/MPa and correlation coefficient of 0.9875 was obtained for the fabricated pressure sensor. The results thus demonstrated the potential of using liquid metal based electrodes for the development of flexible pressure sensors.", "title": "" }, { "docid": "0dd558f3094d82f55806d1170218efce", "text": "As the key supporting system of telecommunication enterprises, OSS/BSS needs to support the service steadily in the long-term running and maintenance process. The system architecture must remain steady and consistent in order to accomplish its goal, which is quite difficult when both the technique and business requirements are changing so rapidly. The framework method raised in this article can guarantee the system architecture’s steadiness and business processing’s consistence by means of describing business requirements, application and information abstractly, becoming more specific and formalized in the planning, developing and maintaining process, and getting the results needed. This article introduces firstly the concepts of framework method, then recommends its applications and superiority in OSS/BSS systems, and lastly gives the prospect of its application.", "title": "" }, { "docid": "131517391d81c321f922e2c1507bb247", "text": "This study was undertaken to apply recurrent neural networks to the recognition of stock price patterns, and to develop a new method for evaluating the networks. In stock tradings, triangle patterns indicate an important clue to the trend of future change in stock prices, but the patterns are not clearly defined by rule-based approaches. From stock price data for all names of corporations listed in The First Section of Tokyo Stock Exchange, an expert called c h a d reader extracted sixteen triangles. These patterns were divided into two groups, 15 training patterns and one test pattern. Using stock data during past 3 years for 16 names, 16 experiments for the recognition were carried out, where the groups were cyclically used. The experiments revealed that the given test triangle was accurately recognized in 15 out of 16 experiments, and that the number of the mismatching patterns was 1.06 per name on the average. A new method was developed for evaluating recurrent networks with context transition performances, in particular, temporal transition performances. The method for the triangle sequences is applicable to decrease in mismatching patterns. 
By applying a cluster analysis to context vectors generated in the networks at recognition stage, a transition chart for context vector categorization was obtained for each stock price sequence. The finishing categories for the context vectors in the charts indicated that this method was effective in decreasing mismatching patterns.", "title": "" }, { "docid": "63a16361103abc8b2cc149f44f79ae62", "text": "Maturity models are a well-known instrument to support the improvement of functional domains in IS, like software development or testing. In this paper we present a generic method for developing focus area maturity models based on both extensive industrial experience and scientific investigation. Focus area maturity models are distinguished from fixed-level maturity models, like CMM, in that they are especially suited to the incremental improvement of functional domains.", "title": "" }, { "docid": "1a90c5688663bcb368d61ba7e0d5033f", "text": "Content-based audio classification and segmentation is a basis for further audio/video analysis. In this paper, we present our work on audio segmentation and classification which employs support vector machines (SVMs). Five audio classes are considered in this paper: silence, music, background sound, pure speech, and non-pure speech which includes speech over music and speech over noise. A sound stream is segmented by classifying each sub-segment into one of these five classes. We have evaluated the performance of SVM on different audio type-pairs classification with testing unit of different length and compared the performance of SVM, K-Nearest Neighbor (KNN), and Gaussian Mixture Model (GMM). We also evaluated the effectiveness of some new proposed features. Experiments on a database composed of about 4-hour audio data show that the proposed classifier is very efficient on audio classification and segmentation. It also shows the accuracy of the SVM-based method is much better than the method based on KNN and GMM.", "title": "" }, { "docid": "fb2724d712f76a9c9515ba593b5cdf6c", "text": "This study used meta-analytic techniques to examine the relationship between emotional intelligence (EI) and performance outcomes. A total of 69 independent studies were located that reported correlations between EI and performance or other variables such as general mental ability (GMA) and the Big Five factors of personality. Results indicated that, across criteria, EI had an operational validity of .23 (k = 59, N = 9522). Various moderating influences such as the EI measure used, dimensions of EI, scoring method and criterion were evaluated.
EI correlated .22 with general mental ability (k = 19, N = 4158) and .23 (Agreeableness and Openness to Experience; k = 14, N = 3306) to .34 (Extraversion; k = 19, N = 3718) with the Big Five factors of personality. Results of various subgroup analyses are presented and implications and future directions are provided. © 2003 Elsevier Inc. All rights reserved.", "title": "" }, { "docid": "c858d0fd00e7cc0d5ee38c49446264f4", "text": "Following their success in Computer Vision and other areas, deep learning techniques have recently become widely adopted in Music Information Retrieval (MIR) research. However, the majority of works aim to adopt and assess methods that have been shown to be effective in other domains, while there is still a great need for more original research focusing on music primarily and utilising musical knowledge and insight. The goal of this paper is to boost the interest of beginners by providing a comprehensive tutorial and reducing the barriers to entry into deep learning for MIR. We lay out the basic principles and review prominent works in this hard to navigate field. We then outline the network structures that have been successful in MIR problems and facilitate the selection of building blocks for the problems at hand. Finally, guidelines for new tasks and some advanced topics in deep learning are discussed to stimulate new research in this fascinating field.", "title": "" }, { "docid": "16488fc65794a318e06777189edc3e4b", "text": "This work details Sighthound's fully automated license plate detection and recognition system. The core technology of the system is built using a sequence of deep Convolutional Neural Networks (CNNs) interlaced with accurate and efficient algorithms. The CNNs are trained and fine-tuned so that they are robust under different conditions (e.g. variations in pose, lighting, occlusion, etc.) and can work across a variety of license plate templates (e.g. sizes, backgrounds, fonts, etc). For quantitative analysis, we show that our system outperforms the leading license plate detection and recognition technology i.e. ALPR on several benchmarks. Our system is available to developers through the Sighthound Cloud API at https://www.sighthound.com/products/cloud", "title": "" }, { "docid": "165fbade7d495ce47a379520697f0d75", "text": "Neutral-point-clamped (NPC) inverters are the most widely used topology of multilevel inverters in high-power applications (several megawatts). This paper presents in a very simple way the basic operation and the most used modulation and control techniques developed to date. Special attention is paid to the loss distribution in semiconductors, and an active NPC inverter is presented to overcome this problem. This paper discusses the main fields of application and presents some technological problems such as capacitor balance and losses.", "title": "" }, { "docid": "13ffc17fe344471e96ada190493354d8", "text": "The role of inflammation in the pathogenesis of type 2 diabetes and associated complications is now well established. Several conditions that are driven by inflammatory processes are also associated with diabetes, including rheumatoid arthritis, gout, psoriasis and Crohn's disease, and various anti-inflammatory drugs have been approved or are in late stages of development for the treatment of these conditions. This Review discusses the rationale for the use of some of these anti-inflammatory treatments in patients with diabetes and what we could expect from their use. 
Future immunomodulatory treatments may not target a specific disease, but could instead act on a dysfunctional pathway that causes several conditions associated with the metabolic syndrome.", "title": "" }, { "docid": "0eea36947d6cfcf1e064f84c89b0e68c", "text": "Recently, large-scale knowledge bases have been constructed by automatically extracting relational facts from text. Unfortunately, most of the current knowledge bases focus on static facts and ignore the temporal dimension. However, the vast majority of facts are evolving with time or are valid only during a particular time period. Thus, time is a significant dimension that should be included in knowledge bases.\n In this paper, we introduce a complete information extraction framework that harvests temporal facts and events from semi-structured data and free text of Wikipedia articles to create a temporal ontology. First, we extend a temporal data representation model by making it aware of events. Second, we develop an information extraction method which harvests temporal facts and events from Wikipedia infoboxes, categories, lists, and article titles in order to build a temporal knowledge base. Third, we show how the system can use its extracted knowledge for further growing the knowledge base.\n We demonstrate the effectiveness of our proposed methods through several experiments. We extracted more than one million temporal facts with precision over 90% for extraction from semi-structured data and almost 70% for extraction from text.", "title": "" }, { "docid": "a421e716d4e47b03f773d8b05fe9c808", "text": "Determining the “origin of a file” in a file system is often required during digital investigations. While the problem of “origin of a file” appears intractable in isolation, it often becomes simpler if one considers the environmental context, viz., the presence of browser history, cache logs, cookies and so on. Metadata can help bridge this contextual gap. Majority of the current tools, with their search-and-query interface, while enabling extraction of metadata stops short of leading the investigator to the “associations” that metadata potentially point to, thereby enabling an approach to solving the “origin of a file” problem. In this paper, we develop a method to identify the origin of files downloaded from the Internet using metadata based associations. Metadata based associations are derived though metadata value matches on the digital artifacts and the artifacts thus associated, are grouped together automatically. These associations can reveal certain higher-order relationships across different sources such as file systems and log files. We define four relationships between files on file systems and log records in log files which we use to determine the origin of a particular file. The files in question are tracked from the user file system under examination to the different browser logs generated during a user’s online activity to their points of origin in the Internet.", "title": "" } ]
scidocsrr
c4ff2a995cc53e5fe7bac38c6fc98bc6
Beyond the Transfer-and-Merge Wordnet Construction: plWordNet and a Comparison with WordNet
[ { "docid": "6c175d7a90ed74ab3b115977c82b0ffa", "text": "We present statistical analyses of the large-scale structure of 3 types of semantic networks: word associations, WordNet, and Roget's Thesaurus. We show that they have a small-world structure, characterized by sparse connectivity, short average path lengths between words, and strong local clustering. In addition, the distributions of the number of connections follow power laws that indicate a scale-free pattern of connectivity, with most nodes having relatively few connections joined together through a small number of hubs with many connections. These regularities have also been found in certain other complex natural networks, such as the World Wide Web, but they are not consistent with many conventional models of semantic organization, based on inheritance hierarchies, arbitrarily structured networks, or high-dimensional vector spaces. We propose that these structures reflect the mechanisms by which semantic networks grow. We describe a simple model for semantic growth, in which each new word or concept is connected to an existing network by differentiating the connectivity pattern of an existing node. This model generates appropriate small-world statistics and power-law connectivity distributions, and it also suggests one possible mechanistic basis for the effects of learning history variables (age of acquisition, usage frequency) on behavioral performance in semantic processing tasks.", "title": "" } ]
[ { "docid": "e2c6437d257559211d182b5707aca1a4", "text": "In present times, social forums such as Quora and Yahoo! Answers constitute powerful media through which people discuss on a variety of topics and express their intentions and thoughts. Here they often reveal their potential intent to purchase ‘Purchase Intent’ (PI). A purchase intent is defined as a text expression showing a desire to purchase a product or a service in future. Extracting posts having PI from a user’s social posts gives huge opportunities towards web personalization, targeted marketing and improving community observing systems. In this paper, we explore the novel problem of detecting PIs from social posts and classifying them. We find that using linguistic features along with statistical features of PI expressions achieves a significant improvement in PI classification over ‘bag-ofwords’ based features used in many present day socialmedia classification tasks. Our approach takes into consideration the specifics of social posts like limited contextual information, incorrect grammar, language ambiguities, etc. by extracting features at two different levels of text granularity word and phrase based features and grammatical dependency based features. Apart from these, the patterns observed in PI posts help us to identify some specific features.", "title": "" }, { "docid": "feaa54ff80bac29319a33de7b252827a", "text": "Feedback is assessing an individual's action in any endeavor. The judgment helps one to grow well in any field. By the feedback a student can understand and improve upon mistakes committed, teachers come to know about the student's capability and implement new teaching methods. New Technologies also come up for the enhancement of Student's Performance. A study of the assessment of student performance through various papers using data mining and also with ontology based applications makes one decide certain factors like confidence level, stress and time management, holistic approach towards an issue which may be useful in giving a prediction about the students' work performance level in organizations. The Survey encompasses the assessment of a student's performance in academics using Data mining Techniques and also with Ontology based Applications.", "title": "" }, { "docid": "3593a71d0792cc79dc16d077ddb41adc", "text": "Dimension reduction provides a useful tool for analyzing high dimensional data. The recently developed Envelope method is a parsimonious version of the classical multivariate regression model through identifying a minimal reducing subspace of the responses. However, existing envelope methods assume an independent error structure in the model. While the assumption of independence is convenient, it does not address the additional complications associated with spatial or temporal correlations in the data. In this article, we introduce a Spatial Envelope method for dimension reduction in the presence of dependencies across space. We study the asymptotic properties of the proposed estimators and show that the asymptotic variance of the estimated regression coefficients under the spatial envelope model is smaller than that from the traditional maximum likelihood estimation. Furthermore, we present a computationally efficient approach for inference. The efficacy of the new approach is investigated through simulation studies and an analysis of an Air Quality Standard (AQS) dataset from the Environmental Protection Agency (EPA). 
Keywords: Dimension reduction, Grassmannian manifold, Matérn covariance function, Spatial dependency.", "title": "" }, { "docid": "52115901d15b2c0d75748ac6f4cf2851", "text": "This paper presents the development of the CYBERLEGs Alpha-Prototype prosthesis, a new transfemoral prosthesis incorporating a new variable stiffness ankle actuator based on the MACCEPA architecture, a passive knee with two locking mechanisms, and an energy transfer mechanism that harvests negative work from the knee and delivers it to the ankle to assist pushoff. The CYBERLEGs Alpha-Prosthesis is part of the CYBERLEGs FP7-ICT project, which combines a prosthesis system to replace a lost limb in parallel with an exoskeleton to assist the sound leg, and sensory array to control both systems. The prosthesis attempts to produce a natural level ground walking gait that approximates the joint torques and kinematics of a non-amputee while maintaining compliant joints, which has the potential to decrease impulsive losses, and ultimately reduce the end user energy consumption. This first prototype consists of a passive knee and an active ankle which are energetically coupled to reduce the total power consumption of the device. Here we present simulations of the actuation system of the ankle and the passive behavior of the knee module with and without the energy transfer effects, the mechanical design of the prosthesis, and empirical results from testing of the physical device with amputee subjects.", "title": "" }, { "docid": "51d579a4d0d1fa3ea0be1ccfd3bb92a9", "text": "This paper describes a method for partitioning 3D surface meshes into useful segments. The proposed method generalizes morphological watersheds, an image segmentation technique, to 3D surfaces. This surface segmentation uses the total curvature of the surface as an indication of region boundaries. The surface is segmented into patches, where each patch has a relatively consistent curvature throughout, and is bounded by areas of higher, or drastically different, curvature. This algorithm has applications for a variety of important problems in visualization and geometrical modeling including 3D feature extraction, mesh reduction, texture mapping 3D surfaces, and computer aided design. Index Terms—Surfaces, surface segmentation, watershed algorithm, curvature-based methods.", "title": "" }, { "docid": "4073da56cc874ea71f5e8f9c1c376cf8", "text": "AIM\nThis article reports the results of a study evaluating a preferred music listening intervention for reducing anxiety in older adults with dementia in nursing homes.\n\n\nBACKGROUND\nAnxiety can have a significant negative impact on older adults' functional status, quality of life and health care resources. However, anxiety is often under-diagnosed and inappropriately treated in those with dementia. 
Little is known about the use of a preferred music listening intervention for managing anxiety in those with dementia.\n\n\nDESIGN\nA quasi-experimental pretest and posttest design was used.\n\n\nMETHODS\nThis study aimed to evaluate the effectiveness of a preferred music listening intervention on anxiety in older adults with dementia in nursing home. Twenty-nine participants in the experimental group received a 30-minute music listening intervention based on personal preferences delivered by trained nursing staff in mid-afternoon, twice a week for six weeks. Meanwhile, 23 participants in the control group only received usual standard care with no music. Anxiety was measured by Rating Anxiety in Dementia at baseline and week six. Analysis of covariance (ancova) was used to determine the effectiveness of a preferred music listening intervention on anxiety at six weeks while controlling for pretest anxiety, age and marital status.\n\n\nRESULTS\nancova results indicated that older adults who received the preferred music listening had a significantly lower anxiety score at six weeks compared with those who received the usual standard care with no music (F = 12.15, p = 0.001).\n\n\nCONCLUSIONS\nPreferred music listening had a positive impact by reducing the level of anxiety in older adults with dementia.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nNursing staff can learn how to implement preferred music intervention to provide appropriate care tailored to the individual needs of older adults with dementia. Preferred music listening is an inexpensive and viable intervention to promote mental health of those with dementia.", "title": "" }, { "docid": "3b5d119416d602a31d5975bacd7acc8e", "text": "We present a parametric family of regression models for interval-censored event-time (survival) data that accomodates both fixed (e.g. baseline) and time-dependent covariates. The model employs a three-parameter family of survival distributions that includes the Weibull, negative binomial, and log-logistic distributions as special cases, and can be applied to data with left, right, interval, or non-censored event times. Standard methods, such as Newton-Raphson, can be employed to estimate the model and the resulting estimates have an asymptotically normal distribution about the true values with a covariance matrix that is consistently estimated by the information function. The deviance function is described to assess model fit and a robust sandwich estimate of the covariance may also be employed to provide asymptotically robust inferences when the model assumptions do not apply. Spline functions may also be employed to allow for non-linear covariates. The model is applied to data from a long-term study of type 1 diabetes to describe the effects of longitudinal measures of glycemia (HbA1c) over time (the time-dependent covariate) on the risk of progression of diabetic retinopathy (eye disease), an interval-censored event-time outcome.", "title": "" }, { "docid": "d4b8b16e48cc7463819635741d283ecb", "text": "Based on recent developments in physics-informed deep learning and deep hidden physics models, we put forth a framework for discovering turbulence models from scattered and potentially noisy spatio-temporal measurements of the probability density function (PDF). The models are for the conditional expected diffusion and the conditional expected dissipation of a Fickian scalar described by its transported single-point PDF equation. 
The discovered models are appraised against the exact solution derived by the amplitude mapping closure (AMC)/Johnson-Edgeworth translation (JET) model of binary scalar mixing in homogeneous turbulence.", "title": "" }, { "docid": "4661b378eda6cd44c95c40ebf06b066b", "text": "Speech signal degradation in real environments mainly results from room reverberation and concurrent noise. While human listening is robust in complex auditory scenes, current speech segregation algorithms do not perform well in noisy and reverberant environments. We treat the binaural segregation problem as binary classification, and employ deep neural networks (DNNs) for the classification task. The binaural features of the interaural time difference and interaural level difference are used as the main auditory features for classification. The monaural feature of gammatone frequency cepstral coefficients is also used to improve classification performance, especially when interference and target speech are collocated or very close to one another. We systematically examine DNN generalization to untrained spatial configurations. Evaluations and comparisons show that DNN-based binaural classification produces superior segregation performance in a variety of multisource and reverberant conditions.", "title": "" }, { "docid": "7c17cb4da60caf8806027273c4c10708", "text": "Recently, IEEE 802.11ax Task Group has adopted OFDMA as a new technique for enabling multi-user transmission. It has been also decided that the scheduling duration should be the same for all the users in a multi-user OFDMA so that the transmission of the users should end at the same time. In order to realize that condition, the users with insufficient data should transmit null data (i.e. padding) to fill the duration. While this scheme offers strong features such as resilience to Overlapping Basic Service Set (OBSS) interference and ease of synchronization, it also poses major side issues of degraded throughput performance and waste of devices' energy. In this work, for OFDMA based 802.11 WLANs we first propose a practical algorithm in which the scheduling duration is fixed and does not change from time to time. In the second algorithm the scheduling duration is dynamically determined in a resource allocation framework by taking into account the padding overhead, airtime fairness and energy consumption of the users. We analytically investigate our resource allocation problems through Lyapunov optimization techniques and show that our algorithms are arbitrarily close to the optimal performance at the price of reduced convergence rate. We also calculate the overhead of our algorithms in a realistic setup and propose solutions for the implementation issues.", "title": "" }, { "docid": "20ada3cc7bfc2ed7167f9c9f6484cbb0", "text": "This paper presents a Model Predictive Control (MPC) for grid-tied Packed U Cells (PUC) multilevel inverter. The system under study consists of a single-phase 3-cell PUC inverter connected to the grid through a filtering inductor. The proposed topology allows the generation of 7-level output voltage with reduction of passive and active components compared to the conventional multilevel inverters. The aim of the proposed MPC technique is to achieve grid-tied current injection, low Total Harmonic Distortion (THD) of the current, unity power factor, while balancing the capacitor voltages at maximum power point (MPP). 
The feasibility of this strategy is validated by simulation using Matlab/Simulink environment.", "title": "" }, { "docid": "dfd6367741547212520b4303bbd2b8d1", "text": "A highly digital two-stage fractional-N phaselocked loop (PLL) architecture utilizing a first-order 1-bit frequency-to-digital converter (FDC) is proposed and implemented in a 65 nm CMOS process. Performance of the first-order 1-bit FDC is improved by using a phase interpolatorbased fractional divider that reduces phase quantizer input span and by using a multiplying delay-locked loop that increases its oversampling ratio. We also describe an analogy between a time-to-digital converter (TDC) and a FDC followed by an accumulator that allows us to leverage the TDC-based PLL analysis techniques to study the impact of FDC characteristics on FDC-based fractional-N PLL (FDCPLL) performance. Utilizing proposed techniques, a prototype PLL achieves 1 MHz bandwidth, −101.6 dBc/Hz in-band phase noise, and 1.22 psrms (1 kHz–40 MHz) jitter while generating 5.031 GHz output from 31.25 MHz reference clock input. For the same output frequency, the stand-alone second-stage fractional-N FDCPLL achieves 1 MHz bandwidth, −106.1 dBc/Hz in-band phase noise, and 403 fsrms jitter with a 500 MHz reference clock input. The two-stage PLL consumes 10.1 mW power from a 1 V supply, out of which 7.1 mW is consumed by the second-stage FDCPLL.", "title": "" }, { "docid": "5e2eee141595ae58ca69ee694dc51c8a", "text": "Evidence-based dietary information represented as unstructured text is a crucial information that needs to be accessed in order to help dietitians follow the new knowledge arrives daily with newly published scientific reports. Different named-entity recognition (NER) methods have been introduced previously to extract useful information from the biomedical literature. They are focused on, for example extracting gene mentions, proteins mentions, relationships between genes and proteins, chemical concepts and relationships between drugs and diseases. In this paper, we present a novel NER method, called drNER, for knowledge extraction of evidence-based dietary information. To the best of our knowledge this is the first attempt at extracting dietary concepts. DrNER is a rule-based NER that consists of two phases. The first one involves the detection and determination of the entities mention, and the second one involves the selection and extraction of the entities. We evaluate the method by using text corpora from heterogeneous sources, including text from several scientifically validated web sites and text from scientific publications. Evaluation of the method showed that drNER gives good results and can be used for knowledge extraction of evidence-based dietary recommendations.", "title": "" }, { "docid": "c746704be981521aa38f7760a37d4b83", "text": "Myoelectric or electromyogram (EMG) signals can be useful in intelligently recognizing intended limb motion of a person. This paper presents an attempt to develop a four-channel EMG signal acquisition system as part of an ongoing research in the development of an active prosthetic device. The acquired signals are used for identification and classification of six unique movements of hand and wrist, viz. hand open, hand close, wrist flexion, wrist extension, ulnar deviation and radial deviation. This information is used for actuation of prosthetic drive. The time domain features are extracted, and their dimension is reduced using principal component analysis. 
The reduced features are classified using two different techniques: k nearest neighbor and artificial neural networks, and the results are compared.", "title": "" }, { "docid": "becea3d4b1a791b74dc7c6de15584611", "text": "This study analyzes the climate change and economic impacts of food waste in the United States. Using lossadjusted national food availability data for 134 food commodities, it calculates the greenhouse gas emissions due to wasted food using life cycle assessment and the economic cost of the waste using retail prices. The analysis shows that avoidable food waste in the US exceeds 55 million metric tonnes per year, nearly 29% of annual production. This waste produces life-cycle greenhouse gas emissions of at least 113 million metric tonnes of CO2e annually, equivalent to 2% of national emissions, and costs $198 billion.", "title": "" }, { "docid": "611c8ce42410f8f678aa5cb5c0de535b", "text": "User simulators are a principal offline method for training and evaluating human-computer dialog systems. In this paper, we examine simple sequence-to-sequence neural network architectures for training end-to-end, natural language to natural language, user simulators, using only raw logs of previous interactions without any additional human labelling. We compare the neural network-based simulators with a language model (LM)-based approach for creating natural language user simulators. Using both an automatic evaluation using LM perplexity and a human evaluation, we demonstrate that the sequence-tosequence approaches outperform the LM-based method. We show correlation between LM perplexity and the human evaluation on this task, and discuss the benefits of different neural network architecture variations.", "title": "" }, { "docid": "4a0756bffc50e11a0bcc2ab88502e1a2", "text": "The interest in attribute weighting for soft subspace clustering have been increasing in the last years. However, most of the proposed approaches are designed for dealing only with numeric data. In this paper, our focus is on soft subspace clustering for categorical data. In soft subspace clustering, the attribute weighting approach plays a crucial role. Due to this, we propose an entropy-based approach for measuring the relevance of each categorical attribute in each cluster. Besides that, we propose the EBK-modes (entropy-based k-modes), an extension of the basic k-modes that uses our approach for attribute weighting. We performed experiments on five real-world datasets, comparing the performance of our algorithms with four state-of-the-art algorithms, using three well-known evaluation metrics: accuracy, f-measure and adjusted Rand index. According to the experiments, the EBK-modes outperforms the algorithms that were considered in the evaluation, regarding the considered metrics.", "title": "" }, { "docid": "229fb099d2485907648d7a71cd8682af", "text": "This study was undertaken to determine the in vitro antimicrobial activities of 15 commercial essential oils and their main components in order to pre-select candidates for potential application in highly perishable food preservation. The antibacterial effects against food-borne pathogenic bacteria (Listeria monocytogenes, Salmonella Typhimurium, and enterohemorrhagic Escherichia coli O157:H7) and food spoilage bacteria (Brochothrix thermosphacta and Pseudomonas fluorescens) were tested using paper disk diffusion method, followed by determination of minimum inhibitory (MIC) and bactericidal (MBC) concentrations. 
Most of the tested essential oils exhibited antimicrobial activity against all tested bacteria, except galangal oil. The essential oils of cinnamon, oregano, and thyme showed strong antimicrobial activities with MIC ≥ 0.125 μL/mL and MBC ≥ 0.25 μL/mL. Among tested bacteria, P. fluorescens was the most resistant to selected essential oils with MICs and MBCs of 1 μL/mL. The results suggest that the activity of the essential oils of cinnamon, oregano, thyme, and clove can be attributed to the existence mostly of cinnamaldehyde, carvacrol, thymol, and eugenol, which appear to possess similar activities against all the tested bacteria. These materials could be served as an important natural alternative to prevent bacterial growth in food products.", "title": "" }, { "docid": "8b548e2c1922e6e105ab40b60fd7433c", "text": "Although there have been many decades of research and commercial presence on high performance general purpose processors, there are still many applications that require fully customized hardware architectures for further computational acceleration. Recently, deep learning has been successfully used to learn in a wide variety of applications, but their heavy computation demand has considerably limited their practical applications. This paper proposes a fully pipelined acceleration architecture to alleviate high computational demand of an artificial neural network (ANN) which is restricted Boltzmann machine (RBM) ANNs. The implemented RBM ANN accelerator (integrating $1024\\times 1024$ network size, using 128 input cases per batch, and running at a 303-MHz clock frequency) integrated in a state-of-the art field-programmable gate array (FPGA) (Xilinx Virtex 7 XC7V-2000T) provides a computational performance of 301-billion connection-updates-per-second and about 193 times higher performance than a software solution running on general purpose processors. Most importantly, the architecture enables over 4 times (12 times in batch learning) higher performance compared with a previous work when both are implemented in an FPGA device (XC2VP70).", "title": "" }, { "docid": "cf20ffac349478b3fc5753624eb17c7f", "text": "Knowledge stickiness often impedes knowledge transfer. When knowledge is complex and the knowledge seeker lacks intimacy with the knowledge source, knowledge sticks in its point of origin because the knowledge seeker faces ambiguity about the best way to acquire the needed knowledge. We theorize that, given the extent of that ambiguity, knowledge seekers will make a choice to either ask for needed knowledge immediately after deciding it is needed, or wait and ask for it at a later date. We hypothesize that when knowledge is sticky, knowledge seekers will delay asking for knowledge and, in the interim period, use an enterprise social networking site to gather information that can lubricate stuck knowledge, such as how, when, and in what way to ask for the desired knowledge. We propose that by doing this, knowledge seekers can increase their ultimate satisfaction with the knowledge once they ask for it. 
Data describing specific instances of knowledge transfer occurring in a large telecommunications firm supported these hypotheses, showing that knowledge transfer is made easier by the fact that enterprise social networking sites make other peoples’ communications visible to casual observers such that knowledge seekers can gather information about the knowledge and its source simply by watching his or her actions through the technology, even if they never interacted with the source directly themselves. The findings show that simple awareness of others’ communications (what we call ambient awareness) played a pivotal role in helping knowledge seekers to obtain interpersonal and knowledge-related material with which to lubricate their interactions with knowledge sources.", "title": "" } ]
scidocsrr
483344d0bba0c0e1a5a5aadb781e32d3
Matters of Consequence: An Empirical Investigation of the WAIS-III and WAIS-IV and Implications for Addressing the Atkins Intelligence Criterion
[ { "docid": "00dd59c3d18a6d10a9ea315ba59b5e3f", "text": "Compared Verbal, Performance, and Full Scale IQ scores from two groups of neurologically impaired patients (N = 114) similar in age, years of education, occupation, race, sex, and etiology and location of cerebral dysfunction. One group had been given the WAIS and the other the WAIS-R. All three IQ scores were higher for the WAIS group, with Full Scale and Verbal scores significantly (p less than .05) higher. Changes in item content and standardization sample cohort effects are offered as partial possible explanation for the results. The IQ scores from the two tests cannot be considered as interchangeable for neurological patients.", "title": "" } ]
[ { "docid": "1f677c07ba42617ac590e6e0a5cdfeab", "text": "Network Functions Virtualization (NFV) is an emerging initiative to overcome increasing operational and capital costs faced by network operators due to the need to physically locate network functions in specific hardware appliances. In NFV, standard IT virtualization evolves to consolidate network functions onto high volume servers, switches and storage that can be located anywhere in the network. Services are built by chaining a set of Virtual Network Functions (VNFs) deployed on commodity hardware. The implementation of NFV leads to the challenge: How several network services (VNF chains) are optimally orchestrated and allocated on the substrate network infrastructure? In this paper, we address this problem and propose CoordVNF, a heuristic method to coordinate the composition of VNF chains and their embedding into the substrate network. CoordVNF aims to minimize bandwidth utilization while computing results within reasonable runtime.", "title": "" }, { "docid": "69ca1ebc519ed772e0d7444c98547060", "text": "The direct position determination (DPD) approach is a single-step method, which uses the maximum likelihood estimator to localize sources emitting electromagnetic energy using combined data from all available sensors. The DPD is known to outperform the traditional two-step methods under low signal-to-noise ratio conditions. We propose an improvement to the DPD approach, using the well-known minimum-variance-distortionless-response (MVDR) approach. Unlike maximum likelihood, the number of sources needs not be known before applying the method. The combination of both the direct approach and MVDR yields unprecedented localization accuracy and resolution for weak sources. We demonstrate this approach on the problem of multistatic radar, but the method can easily be extended to general localization problems.", "title": "" }, { "docid": "b4d7974ca20b727e8c361826f24861d4", "text": "We introduce gvnn, a neural network library in Torch aimed towards bridging the gap between classic geometric computer vision and deep learning. Inspired by the recent success of Spatial Transformer Networks, we propose several new layers which are often used as parametric transformations on the data in geometric computer vision. These layers can be inserted within a neural network much in the spirit of the original spatial transformers and allow backpropagation to enable end-toend learning of a network involving any domain knowledge in geometric computer vision. This opens up applications in learning invariance to 3D geometric transformation for place recognition, end-to-end visual odometry, depth estimation and unsupervised learning through warping with a parametric transformation for image reconstruction error.", "title": "" }, { "docid": "eab2dfb9e8e129f99e263aef38dee26b", "text": "A fully passive printable chipless RFID system is presented. The chipless tag uses the amplitude and phase of the spectral signature of a multiresonator circuit and provides 1:1 correspondence of data bits. The tag comprises of a microstrip spiral multiresonator and cross-polarized transmitting and receiving microstrip ultra-wideband disc loaded monopole antennas. The reader antenna is a log periodic dipole antenna with average 5.5-dBi gain. Firstly, a 6-bit chipless tag is designed to encode 000000 and 010101 IDs. Finally, a 35-bit chipless tag based on the same principle is presented. 
The tag has potentials for low-cost item tagging such as banknotes and secured documents.", "title": "" }, { "docid": "e2d42717d11a6e0eaf788502a423df44", "text": "Naeemul Hassan1 Gensheng Zhang2 Fatma Arslan2 Josue Caraballo2 Damian Jimenez2 Siddhant Gawsane2 Shohedul Hasan2 Minumol Joseph2 Aaditya Kulkarni2 Anil Kumar Nayak2 Vikas Sable2 Chengkai Li2 Mark Tremayne3 1Department of Computer and Information Science, University of Mississippi 2Department of Computer Science and Engineering, University of Texas at Arlington 3Department of Communication, University of Texas at Arlington", "title": "" }, { "docid": "6bbc32ecaf54b9a51442f92edbc2604a", "text": "Artificial bee colony (ABC), an optimization algorithm is a recent addition to the family of population based search algorithm. ABC has taken its inspiration from the collective intelligent foraging behavior of honey bees. In this study we have incorporated golden section search mechanism in the structure of basic ABC to improve the global convergence and prevent to stick on a local solution. The proposed variant is termed as ILS-ABC. Comparative numerical results with the state-of-art algorithms show the performance of the proposal when applied to the set of unconstrained engineering design problems. The simulated results show that the proposed variant can be successfully applied to solve real life problems.", "title": "" }, { "docid": "a992be26d6b41ee4d3a8f8fa7014727b", "text": "In this paper, we develop a heart disease prediction model that can assist medical professionals in predicting heart disease status based on the clinical data of patients. Firstly, we select 14 important clinical features, i.e., age, sex, chest pain type, trestbps, cholesterol, fasting blood sugar, resting ecg, max heart rate, exercise induced angina, old peak, slope, number of vessels colored, thal and diagnosis of heart disease. Secondly, we develop an prediction model using J48 decision tree for classifying heart disease based on these clinical features against unpruned, pruned and pruned with reduced error pruning approach.. Finally, the accuracy of Pruned J48 Decision Tree with Reduced Error Pruning Approach is more better then the simple Pruned and Unpruned approach. The result obtained that which shows that fasting blood sugar is the most important attribute which gives better classification against the other attributes but its gives not better accuracy. Keywords—Data mining, Reduced Error Pruning, Gain Ratio and Decision Tree.", "title": "" }, { "docid": "81ef390009fb64bf235147bc0e186bab", "text": "In this paper, we show how to calibrate a camera and to recover the geometry and the photometry (textures) of objects from a single image. The aim of this work is to make it possible walkthrough and augment reality in a 3D model reconstructed from a single image. The calibration step does not need any calibration target and makes only four assumptions: (1) the single image contains at least two vanishing points, (2) the length (in 3D space) of one line segment (for determining the translation vector) in the image is known, (3) the principle point is the center of the image, and (4) the aspect ratio is fixed by the user. Each vanishing point is determined from a set of parallel lines. These vanishing points help determine a 3D world coordinate system R o. After having computed the focal length, the rotation matrix and the translation vector are evaluated in turn for describing the rigid motion between R o and the camera coordinate system R c. 
Next, the reconstruction step consists in placing, rotating, scaling, and translating a rectangular 3D box that must fit at best with the potential objects within the scene as seen through the single image. With each face of a rectangular box, a texture that may contain holes due to invisible parts of certain objects is assigned. We show how the textures are extracted and how these holes are located and filled. Our method has been applied to various real images (pictures scanned from books, photographs) and synthetic images.", "title": "" }, { "docid": "6cd5be1e4d422373084a2ef78a964594", "text": "Economic inequality was a key issue in the 2016 presidential campaign1 and probably influenced the election of Donald Trump.2 It is an issue that is profoundly significant to the growing number of individuals—disproportionately women and minorities—who find themselves on the wrong end of the increasingly bi-modal economic spectrum, and raises serious concerns about the erosion of the “American dream”3 and the stability and viability of our democracy.4 Economists, political theorists, sociologists, and", "title": "" }, { "docid": "a7acd2da721136143ebd9608a041236b", "text": "Mr M, a patient with semantic dementia — a neurodegenerative disease that is characterized by the gradual deterioration of semantic memory — was being driven through the countryside to visit a friend and was able to remind his wife where to turn along the not-recently-travelled route. Then, pointing at the sheep in the field, he asked her “What are those things?” Prior to the onset of symptoms in his late 40s, this man had normal semantic memory. What has gone wrong in his brain to produce this dramatic and selective erosion of conceptual knowledge?", "title": "" }, { "docid": "7e5906ccadcf8471933ed0e25a357eaf", "text": "In this paper, we introduce Rec4LRW, a recommender system (RS) for assisting researchers in finding research papers for their literature review and writing purposes. This system focuses on three researcher tasks – (1) Building a reading list of research papers, (2) Finding similar papers based on a set of papers, and (3) Shortlisting papers from the final reading list for inclusion in manuscript based on article type. A set of intermediate criteria are proposed to capture the relations between a research paper and its bibliography. The recommendation techniques for the three tasks in Rec4LRW are specifically devised on top of the intermediate criteria. The Rec4LRW workflow along with the screen designs for the three tasks is provided in this paper. The recommendation techniques in the system will be evaluated with state-of-the-art approaches along with user-based evaluation in subsequent studies.", "title": "" }, { "docid": "118738ca4b870e164c7be53e882a9ab4", "text": "IA. Cause and Effect . . . . . . . . . . . . . . 465 1.2. Prerequisites of Selforganization . . . . . . . 467 1.2.3. Evolut ion Must S ta r t f rom R andom Even ts 467 1.2.2. Ins t ruc t ion Requires In format ion . . . . 467 1.2.3. In format ion Originates or Gains Value by S e l e c t i o n . . . . . . . . . . . . . . . 469 1.2.4. Selection Occurs wi th Special Substances under Special Conditions . . . . . . . . 
470", "title": "" }, { "docid": "85cb15ae35a6368c004fde646c486491", "text": "OBJECTIVES\nThe purposes of this study were to identify age-related changes in objectively recorded sleep patterns across the human life span in healthy individuals and to clarify whether sleep latency and percentages of stage 1, stage 2, and rapid eye movement (REM) sleep significantly change with age.\n\n\nDESIGN\nReview of literature of articles published between 1960 and 2003 in peer-reviewed journals and meta-analysis.\n\n\nPARTICIPANTS\n65 studies representing 3,577 subjects aged 5 years to 102 years.\n\n\nMEASUREMENT\nThe research reports included in this meta-analysis met the following criteria: (1) included nonclinical participants aged 5 years or older; (2) included measures of sleep characteristics by \"all night\" polysomnography or actigraphy on sleep latency, sleep efficiency, total sleep time, stage 1 sleep, stage 2 sleep, slow-wave sleep, REM sleep, REM latency, or minutes awake after sleep onset; (3) included numeric presentation of the data; and (4) were published between 1960 and 2003 in peer-reviewed journals.\n\n\nRESULTS\nIn children and adolescents, total sleep time decreased with age only in studies performed on school days. Percentage of slow-wave sleep was significantly negatively correlated with age. Percentages of stage 2 and REM sleep significantly changed with age. In adults, total sleep time, sleep efficiency, percentage of slow-wave sleep, percentage of REM sleep, and REM latency all significantly decreased with age, while sleep latency, percentage of stage 1 sleep, percentage of stage 2 sleep, and wake after sleep onset significantly increased with age. However, only sleep efficiency continued to significantly decrease after 60 years of age. The magnitudes of the effect sizes noted changed depending on whether or not studied participants were screened for mental disorders, organic diseases, use of drug or alcohol, obstructive sleep apnea syndrome, or other sleep disorders.\n\n\nCONCLUSIONS\nIn adults, it appeared that sleep latency, percentages of stage 1 and stage 2 significantly increased with age while percentage of REM sleep decreased. However, effect sizes for the different sleep parameters were greatly modified by the quality of subject screening, diminishing or even masking age associations with different sleep parameters. The number of studies that examined the evolution of sleep parameters with age are scant among school-aged children, adolescents, and middle-aged adults. There are also very few studies that examined the effect of race on polysomnographic sleep parameters.", "title": "" }, { "docid": "6e9ed92dc37e2d7e7ed956ed7b880ff2", "text": "Sequential quadratic programming (SQP) methods have proved highly effective for solving constrained optimization problems with smooth nonlinear functions in the objective and constraints. Here we consider problems with general inequality constraints (linear and nonlinear). We assume that first derivatives are available and that the constraint gradients are sparse. We discuss an SQP algorithm that uses a smooth augmented Lagrangian merit function and makes explicit provision for infeasibility in the original problem and the QP subproblems. SNOPT is a particular implementation that makes use of a semidefinite QP solver. It is based on a limited-memory quasi-Newton approximation to the Hessian of the Lagrangian and uses a reduced-Hessian algorithm (SQOPT) for solving the QP subproblems. 
It is designed for problems with many thousands of constraints and variables but a moderate number of degrees of freedom (say, up to 2000). An important application is to trajectory optimization in the aerospace industry. Numerical results are given for most problems in the CUTE and COPS test collections (about 900 examples).", "title": "" }, { "docid": "4c3e6abcc0963efe7423fa25e9b231cb", "text": "In this demo, we present NaLIR, a generic interactive natural language interface for querying relational databases. NaLIR can accept a logically complex English language sentence as query input. This query is first translated into a SQL query, which may include aggregation, nesting, and various types of joins, among other things, and then evaluated against an RDBMS. In this demonstration, we show that NaLIR, while far from being able to pass the Turing test, is perfectly usable in practice, and able to handle even quite complex queries in a variety of application domains. In addition, we also demonstrate how carefully designed interactive communication can avoid misinterpretation with minimum user burden.", "title": "" }, { "docid": "8febd83ab32225be6a89b5f0236e01f6", "text": "Tissue engineering can be used to restore, maintain, or enhance tissues and organs. The potential impact of this field, however, is far broader-in the future, engineered tissues could reduce the need for organ replacement, and could greatly accelerate the development of new drugs that may cure patients, eliminating the need for organ transplants altogether.", "title": "" }, { "docid": "bfeff1e1ef24d0cb92d1844188f87cc8", "text": "While user attribute extraction on social media has received considerable attention, existing approaches, mostly supervised, encounter great difficulty in obtaining gold standard data and are therefore limited to predicting unary predicates (e.g., gender). In this paper, we present a weaklysupervised approach to user profile extraction from Twitter. Users’ profiles from social media websites such as Facebook or Google Plus are used as a distant source of supervision for extraction of their attributes from user-generated text. In addition to traditional linguistic features used in distant supervision for information extraction, our approach also takes into account network information, a unique opportunity offered by social media. We test our algorithm on three attribute domains: spouse, education and job; experimental results demonstrate our approach is able to make accurate predictions for users’ attributes based on their tweets.1", "title": "" }, { "docid": "356684bac2e5fecd903eb428dc5455f4", "text": "Social media expose millions of users every day to information campaigns - some emerging organically from grassroots activity, others sustained by advertising or other coordinated efforts. These campaigns contribute to the shaping of collective opinions. While most information campaigns are benign, some may be deployed for nefarious purposes, including terrorist propaganda, political astroturf, and financial market manipulation. It is therefore important to be able to detect whether a meme is being artificially promoted at the very moment it becomes wildly popular. This problem has important social implications and poses numerous technical challenges. As a first step, here we focus on discriminating between trending memes that are either organic or promoted by means of advertisement. 
The classification is not trivial: ads cause bursts of attention that can be easily mistaken for those of organic trends. We designed a machine learning framework to classify memes that have been labeled as trending on Twitter. After trending, we can rely on a large volume of activity data. Early detection, occurring immediately at trending time, is a more challenging problem due to the minimal volume of activity data that is available prior to trending. Our supervised learning framework exploits hundreds of time-varying features to capture changing network and diffusion patterns, content and sentiment information, timing signals, and user meta-data. We explore different methods for encoding feature time series. Using millions of tweets containing trending hashtags, we achieve 75% AUC score for early detection, increasing to above 95% after trending. We evaluate the robustness of the algorithms by introducing random temporal shifts on the trend time series. Feature selection analysis reveals that content cues provide consistently useful signals; user features are more informative for early detection, while network and timing features are more helpful once more data is available.", "title": "" }, { "docid": "8f21e1acc47777bd9d3663fa00a419b3", "text": "We derive new models for gravity driven shallow water flows in several space dimensions over a general topography. A first model is valid for small slope variation, i.e. small curvature, and a second model is valid for arbitrary topography. In both cases no particular assumption is made on the velocity profile in the material layer. The models are written for an arbitrary coordinate system, and several formulations are provided. A Coulomb friction term is derived within the same framework, relevant in particular for debris avalanches. All our models are invariant under rotation, admit a conservative energy equation, and preserve the steady state of a lake at rest.", "title": "" }, { "docid": "cff5ceab3d0b181e5278688371652495", "text": "The redesign of business processes has a huge potential in terms of reducing costs and throughput times, as well as improving customer satisfaction. Despite rapid developments in the business process management discipline during the last decade, a comprehensive overview of the options to methodologically support a team to move from as-is process insights to to-be process alternatives is lacking. As such, no safeguard exists that a systematic exploration of the full range of redesign possibilities takes place by practitioners. Consequently, many attractive redesign possibilities remain unidentified and the improvement potential of redesign initiatives is not fulfilled. This systematic literature review establishes a comprehensive methodological framework, which serves as a catalog for process improvement use cases. The framework contains an overview of all the method options regarding the generation of process improvement ideas. This is established by identifying six key methodological decision areas, e.g. the human actors who can be invited to generate these ideas or the information that can be collected prior to this act. This framework enables practitioners to compose a well-considered method to generate process improvement ideas themselves. Based on a critical evaluation of the framework, the authors also offer recommendations that support academic researchers in grounding and improving methods for generating process improvement ideas.
Electronic supplementary material The online version of this article (doi:10.1007/s12599-015-0417-x) contains supplementary material, which is available to authorized users.", "title": "" } ]
scidocsrr
92c1a11d7303bb7f4c9c05d1eced1dd3
Toward integrated motion planning and control using potential fields and torque-based steering actuation for autonomous driving
[ { "docid": "5e9dce428a2bcb6f7bc0074d9fe5162c", "text": "This paper describes a real-time motion planning algorithm, based on the rapidly-exploring random tree (RRT) approach, applicable to autonomous vehicles operating in an urban environment. Extensions to the standard RRT are predominantly motivated by: 1) the need to generate dynamically feasible plans in real-time; 2) safety requirements; 3) the constraints dictated by the uncertain operating (urban) environment. The primary novelty is in the use of closed-loop prediction in the framework of RRT. The proposed algorithm was at the core of the planning and control software for Team MIT's entry for the 2007 DARPA Urban Challenge, where the vehicle demonstrated the ability to complete a 60 mile simulated military supply mission, while safely interacting with other autonomous and human driven vehicles.", "title": "" }, { "docid": "5728682e998b89cb23b12ba9acc3d993", "text": "Potential field methods are rapidly gaining popularity in obstacle avoidance applications for mobile robots and manipulators. While the potential field principle is particularly attractive because of its elegance and simplicity, substantial shortcomings have been identified as problems that are inherent to this principle. Based upon mathematical analysis, this paper presents a systematic criticism of the inherent problems. The heart of this analysis is a differential equation that combines the robot and the environment into a unified system. The identified problems are discussed in qualitative and theoretical terms and documented with experimental results from actual mobile robot runs.", "title": "" } ]
[ { "docid": "da4ec6dcf7f47b8ec0261195db7af5ca", "text": "Smart factories are on the verge of becoming the new industrial paradigm, wherein optimization permeates all aspects of production, from concept generation to sales. To fully pursue this paradigm, flexibility in the production means as well as in their timely organization is of paramount importance. AI is planning a major role in this transition, but the scenarios encountered in practice might be challenging for current tools. Task planning is one example where AI enables more efficient and flexible operation through an online automated adaptation and rescheduling of the activities to cope with new operational constraints and demands. In this paper we present SMarTplan, a task planner specifically conceived to deal with real-world scenarios in the emerging smart factory paradigm. Including both special-purpose and general-purpose algorithms, SMarTplan is based on current automated reasoning technology and it is designed to tackle complex application domains. In particular, we show its effectiveness on a logistic scenario, by comparing its specialized version with the general purpose one, and extending the comparison to other state-of-the-art task planners.", "title": "" }, { "docid": "673f1315f3699e0fbc3701743a90eb71", "text": "The majority of learning algorithms available today focus on approximating the state (V ) or state-action (Q) value function and efficient action selection comes as an afterthought. On the other hand, real-world problems tend to have large action spaces, where evaluating every possible action becomes impractical. This mismatch presents a major obstacle in successfully applying reinforcement learning to real-world problems. In this paper we present an effective approach to learning and acting in domains with multidimensional and/or continuous control variables where efficient action selection is embedded in the learning process. Instead of learning and representing the state or state-action value function of the MDP, we learn a value function over an implied augmented MDP, where states represent collections of actions in the original MDP and transitions represent choices eliminating parts of the action space at each step. Action selection in the original MDP is reduced to a binary search by the agent in the transformed MDP, with computational complexity logarithmic in the number of actions, or equivalently linear in the number of action dimensions. Our method can be combined with any discrete-action reinforcement learning algorithm for learning multidimensional continuous-action policies using a state value approximator in the transformed MDP. Our preliminary results with two well-known reinforcement learning algorithms (Least-Squares Policy Iteration and Fitted Q-Iteration) on two continuous action domains (1-dimensional inverted pendulum regulator, 2-dimensional bicycle balancing) demonstrate the viability and the potential of the proposed approach.", "title": "" }, { "docid": "719116d934205ce9e12791cf790844f9", "text": "A LLC resonant topology is analyzed to derive efficiency and cost optimal design for wide input ranges and load variations. In the LLC converter, a wide range of output power is controlled with only a narrow variation in operating frequency since this converter is capable of both step-up and step-down. In addition, ZVS turn-on and ZCS turn-off of MOSFETs and diode rectifiers can be achieved over the entire operating range. 
Finally, the inductance of the primary-side resonant tank can be merged into the main power transformer as its resonant inductance, and the absence of a secondary filter inductor yields low voltage stress on the secondary rectifier and a cost-effective design. DC characteristics and the input-output response in the frequency domain are obtained with the equivalent circuit derived by the first harmonic approximation (FHA) method. In addition, operational principles are explained to show the ZVS and ZCS conditions of the primary switches and output diode rectifiers, respectively. Efficiency and cost optimal design rules of the LLC resonant converter are derived from the primary resonant network, operating frequency, and dead time duration. The proposed analysis and design are verified by experimental results with a 400 W LLC resonant converter.", "title": "" }, { "docid": "eabd58fbd89cba84d3d4fa117bcd84b5", "text": "Today, peer-to-peer (p2p) networks have risen to the top echelon of information sharing on the Internet. It is a daunting task to prevent sharing of both legitimate and illegitimate information, such as music, movies, software, and child pornography, on p2p overt channels. Considering that, preventing covert channel information sharing is inconceivable given that even its detection is near impossible. In this paper, we describe SURREAL, a technique for covert communication over the very popular p2p BitTorrent protocol. The standard BitTorrent protocol uses a 3-step handshake process and as such does not provide a peer authentication service. In SURREAL, we have extended the standard handshake to a 6-step authenticated covert handshake that provides peer authentication and robust peer anonymity using one-way functions. After authenticating a potential covert partner, participating peers send data over an encrypted covert channel using a standard BitTorrent message type. We have also analyzed SURREAL's security robustness to potential attacks. Finally, we have validated SURREAL's performance and presented results comparing it with [4] and [5]. Keywords—Authentication, BitTorrent, covert channel, handshake, information hiding, p2p networks, security, steganography.", "title": "" }, { "docid": "3fd6d0ef0240b2fdd2a9c76a023ecab6", "text": "In this work, an exponential spline method is developed and analyzed for approximating solutions of calculus of variations problems. The method uses a spline interpolant, which is constructed from an exponential spline. It is proved to be second-order convergent. Finally, some illustrative examples are included to demonstrate the applicability of the new technique. Numerical results confirm the order of convergence predicted by the analysis.", "title": "" }, { "docid": "1e464e122d0fe178244fc9af3fa8be25", "text": "Research on sentiment analysis in the English language has undergone major developments in recent years. Chinese sentiment analysis research, however, has not evolved significantly despite the exponential growth of Chinese e-business and e-markets. This review paper aims to study the past, present, and future of Chinese sentiment analysis from both monolingual and multilingual perspectives. The construction of sentiment corpora and lexica is first introduced and summarized. Following that, a survey of monolingual sentiment classification in Chinese via three different classification frameworks is conducted. Finally, sentiment classification based on the multilingual approach is introduced.
After an overview of the literature, we propose that a more human-like (cognitive) representation of Chinese concepts and their inter-connections could overcome the scarceness of available resources and, hence, improve the state of the art. With the increasing expansion of Chinese language on the Web, sentiment analysis in Chinese is becoming an increasingly important research field. Concept-level sentiment analysis, in particular, is an exciting yet challenging direction for such research field which holds great promise for the future.", "title": "" }, { "docid": "b22aebbcbca0c53a20a6efa6e0d9cd94", "text": "Visual aesthetic perception (\"aesthetics\") or the capacity to visually perceive a particular attribute added to other features of objects, such as form, color, and movement, was fixed during human evolutionary lineage as a trait not shared with any great ape. Although prefrontal brain expansion is mentioned as responsible for the appearance of such human trait, no current knowledge exists on the role of prefrontal areas in the aesthetic perception. The visual brain consists of \"several parallel multistage processing systems, each specialized in a given task such as, color or motion\" [Bartels, A. & Zeki, S. (1999) Proc. R. Soc. London Ser. B 265, 2327-2332]. Here we report the results of an experiment carried out with magnetoencephalography which shows that the prefrontal area is selectively activated in humans during the perception of objects qualified as \"beautiful\" by the participants. Therefore, aesthetics can be hypothetically considered as an attribute perceived by means of a particular brain processing system, in which the prefrontal cortex seems to play a key role.", "title": "" }, { "docid": "805ea1349c046008a5efd67382ff82aa", "text": "Agent architectures need to organize themselves and adapt dynamically to changing circumstances without top-down control from a system operator. Some researchers provide this capability with complex agents that emulate human intelligence and reason explicitly about their coordination, reintroducing many of the problems of complex system design and implementation that motivated increasing software localization in the first place. Naturally occurring systems of simple agents (such as populations of insects or other animals) suggest that this retreat is not necessary. This paper summarizes several studies of such systems, and derives from them a set of general principles that artificial multiagent systems can use to support overall system behavior significantly more complex than the behavior of the individuals agents.", "title": "" }, { "docid": "a66765e24b6cfdab2cc0b30de8afd12e", "text": "A broadband transition structure from rectangular waveguide (RWG) to microstrip line (MSL) is presented for the realization of the low-loss packaging module using Low-temperature co-fired ceramic (LTCC) technology at W-band. In this transition, a cavity structure is buried in LTCC layers, which provides the wide bandwidth, and a laminated waveguide (LWG) transition is designed, which provides the low-loss performance, as it reduces the radiation loss of conventional direct transition between RWG and MSL. The design procedure is also given. 
The measured results show that an insertion loss of better than 0.7 dB from 86 to 97 GHz can be achieved.", "title": "" }, { "docid": "6c4431b23a86f7d5c8a627c240432823", "text": "Presently the most successful approaches to semi-supervised learning are based on consistency regularization, whereby a model is trained to be robust to small perturbations of its inputs and parameters. To understand consistency regularization, we conceptually explore how loss geometry interacts with training procedures. The consistency loss dramatically improves generalization performance over supervised-only training; however, we show that SGD struggles to converge on the consistency loss and continues to make large steps that lead to changes in predictions on the test data. Motivated by these observations, we propose to train consistency-based methods with Stochastic Weight Averaging (SWA), a recent approach which averages weights along the trajectory of SGD with a modified learning rate schedule. We also propose fast-SWA, which further accelerates convergence by averaging multiple points within each cycle of a cyclical learning rate schedule. With weight averaging, we achieve the best known semi-supervised results on CIFAR-10 and CIFAR-100, over many different quantities of labeled training data. For example, we achieve 5.0% error on CIFAR-10 with only 4000 labels, compared to the previous best result in the literature of 6.3%.", "title": "" }, { "docid": "da0d17860604269378c8649e7353ba83", "text": "Responsive, implantable stimulation devices to treat epilepsy are now in clinical trials. New evidence suggests that these devices may be more effective when they deliver therapy before seizure onset. Despite years of effort, prospective seizure prediction, which could improve device performance, remains elusive. In large part, this is explained by lack of agreement on a statistical framework for modeling seizure generation and a method for validating algorithm performance. We present a novel stochastic framework based on a three-state hidden Markov model (HMM), representing interictal, preictal, and seizure states, with the feature that periods of increased seizure probability can transition back to the interictal state. This notion reflects clinical experience and may enhance interpretation of published seizure prediction studies. Our model accommodates clipped EEG segments and formalizes intuitive notions regarding statistical validation. We derive equations for type I and type II errors as a function of the number of seizures, duration of interictal data, and prediction horizon length, and we demonstrate the model's utility with a novel seizure detection algorithm that appeared to predict seizure onset. We propose this framework as a vital tool for designing and validating prediction algorithms and for facilitating collaborative research in this area.", "title": "" }, { "docid": "a887b4ed84d35c4d27f1c4de3cfd43b9", "text": "Humic substances (HS) are complex mixtures of natural organic material which are found almost everywhere in the environment, and particularly in soils, sediments, and natural water. HS play key roles in many processes of paramount importance, such as plant growth, carbon storage, and the fate of contaminants in the environment. While most of the research on HS has been traditionally carried out by conventional experimental approaches, over the past 20 years complementary investigations have emerged from the application of computer modeling and simulation techniques.
This paper reviews the literature regarding computational studies of HS, with a specific focus on molecular dynamics simulations. Significant achievements, outstanding issues, and future prospects are summarized and discussed.", "title": "" }, { "docid": "e64c8560d798b891f9addde71e473ff8", "text": "The use of phosphate solubilizing bacteria as inoculants simultaneously increases P uptake by the plant and crop yield. Strains from the genera Pseudomonas, Bacillus and Rhizobium are among the most powerful phosphate solubilizers. The principal mechanism for mineral phosphate solubilization is the production of organic acids, and acid phosphatases play a major role in the mineralization of organic phosphorous in soil. Several phosphatase-encoding genes have been cloned and characterized and a few genes involved in mineral phosphate solubilization have been isolated. Therefore, genetic manipulation of phosphate-solubilizing bacteria to improve their ability to improve plant growth may include cloning genes involved in both mineral and organic phosphate solubilization, followed by their expression in selected rhizobacterial strains. Chromosomal insertion of these genes under appropriate promoters is an interesting approach.", "title": "" }, { "docid": "3d490d7d30dcddc3f1c0833794a0f2df", "text": "Purpose-This study attempts to investigate (1) the effect of meditation experience on employees’ self-directed learning (SDL) readiness and organizational innovative (OI) ability as well as organizational performance (OP), and (2) the relationships among SDL, OI, and OP. Design/methodology/approach-This study conducts an empirical study of 15 technological companies (n = 412) in Taiwan, utilizing the collected survey data to test the relationships among the three dimensions. Findings-Results show that: (1) The employees’ meditation experience significantly and positively influenced employees’ SDL readiness, companies’ OI capability and OP; (2) The study found that SDL has a direct and significant impact on OI; and OI has direct and significant influences on OP. Research limitation/implications-The generalization of the present study is constrained by (1) the existence of possible biases of the participants, (2) the variations of length, type and form of meditation demonstrated by the employees in these high tech companies, and (3) the fact that local data collection in Taiwan may present different cultural characteristics which may be quite different from those in other areas or countries. Managerial implications are presented at the end of the work. Practical implications-The findings indicate that SDL can only impact organizational innovation through employees “openness to a challenge”, “inquisitive nature”, self-understanding and acceptance of responsibility for learning. Such finding implies better organizational innovative capability under such conditions, thus organizations may encourage employees to take risks or accept new opportunities through various incentives, such as monetary rewards or public recognitions. More specifically, the present study discovers that while administration innovation is the most important element influencing an organization’s financial performance, market innovation is the key component in an organization’s market performance. 
Social implications-The present study discovers that meditation experience positively", "title": "" }, { "docid": "d74131a431ca54f45a494091e576740c", "text": "In today’s highly competitive business environments with shortened product and technology life cycle, it is critical for software industry to continuously innovate. This goal can be achieved by developing a better understanding and control of the activities and determinants of innovation. Innovation measurement initiatives assess innovation capability, output and performance to help develop such an understanding. This study explores various aspects relevant to innovation measurement ranging from definitions, measurement frameworks and metrics that have been proposed in literature and used in practice. A systematic literature review followed by an online questionnaire and interviews with practitioners and academics were employed to identify a comprehensive definition of innovation that can be used in software industry. The metrics for the evaluation of determinants, inputs, outputs and performance were also aggregated and categorised. Based on these findings, a conceptual model of the key measurable elements of innovation was constructed from the findings of the systematic review. The model was further refined after feedback from academia and industry through interviews.", "title": "" }, { "docid": "3b285e3bd36dfeabb80a2ab57470bdc5", "text": "This paper presents algorithms and a prototype system for hand tracking and hand posture recognition. Hand postures are represented in terms of hierarchies of multi-scale colour image features at different scales, with qualitative inter-relations in terms of scale, position and orientation. In each image, detection of multi-scale colour features is performed. Hand states are then simultaneously detected and tracked using particle filtering, with an extension of layered sampling referred to as hierarchical layered sampling. Experiments are presented showing that the performance of the system is substantially improved by performing feature detection in colour space and including a prior with respect to skin colour. These components have been integrated into a real-time prototype system, applied to a test problem of controlling consumer electronics using hand gestures. In a simplified demo scenario, this system has been successfully tested by participants at two fairs during 2001.", "title": "" }, { "docid": "dcd919590e0b6b52ea3a6be7378d5d25", "text": "This work, concerning paraphrase identification task, on one hand contributes to expanding deep learning embeddings to include continuous and discontinuous linguistic phrases. On the other hand, it comes up with a new scheme TF-KLD-KNN to learn the discriminative weights of words and phrases specific to paraphrase task, so that a weighted sum of embeddings can represent sentences more effectively. Based on these two innovations we get competitive state-of-the-art performance on paraphrase identification.", "title": "" }, { "docid": "10a25736139db87efc5a3c2af6fa02fa", "text": "Two main fields of interest form the background of actual demand for optimized levels of phenolic compounds in crop plants. These are human health and plant resistance to pathogens and to biotic and abiotic stress factors. 
A survey of agricultural technologies influencing the biosynthesis and accumulation of phenolic compounds in crop plants is presented, including observations on the effects of light, temperature, mineral nutrition, water management, grafting, elevated atmospheric CO(2), growth and differentiation of the plant and application of elicitors, stimulating agents and plant activators. The underlying mechanisms are discussed with respect to carbohydrate availability, trade-offs to competing demands as well as to regulatory elements. Outlines are given for genetic engineering and plant breeding. Constraints and possible physiological feedbacks are considered for successful and sustainable application of agricultural techniques with respect to management of plant phenol profiles and concentrations.", "title": "" }, { "docid": "9118bd3f700d197a3fc8ca08204abd45", "text": "CONTEXT\nThe incidence of localised prostate cancer is increasing worldwide. In light of recent evidence, current, radical, whole-gland treatments for organ-confined disease have being questioned with respect to their side effects, cancer control, and cost. Focal therapy may be an effective alternative strategy.\n\n\nOBJECTIVE\nTo systematically review the existing literature on baseline characteristics of the target population; preoperative evaluation to localise disease; and perioperative, functional, and disease control outcomes following focal therapy.\n\n\nEVIDENCE ACQUISITION\nMedline (through PubMed), Embase, Web of Science, and Cochrane Review databases were searched from inception to 31 October 2012. In addition, registered but not yet published trials were retrieved. Studies evaluating tissue-preserving therapies in men with biopsy-proven prostate cancer in the primary or salvage setting were included.\n\n\nEVIDENCE SYNTHESIS\nA total of 2350 cases were treated to date across 30 studies. Most studies were retrospective with variable standards of reporting, although there was an increasing number of prospective registered trials. Focal therapy was mainly delivered to men with low and intermediate disease, although some high-risk cases were treated that had known, unilateral, significant cancer. In most of the cases, biopsy findings were correlated to specific preoperative imaging, such as multiparametric magnetic resonance imaging or Doppler ultrasound to determine eligibility. Follow-up varied between 0 and 11.1 yr. In treatment-naïve prostates, pad-free continence ranged from 95% to 100%, erectile function ranged from 54% to 100%, and absence of clinically significant cancer ranged from 83% to 100%. In focal salvage cases for radiotherapy failure, the same outcomes were achieved in 87.2-100%, 29-40%, and 92% of cases, respectively. Biochemical disease-free survival was reported using a number of definitions that were not validated in the focal-therapy setting.\n\n\nCONCLUSIONS\nOur systematic review highlights that, when focal therapy is delivered with intention to treat, the perioperative, functional, and disease control outcomes are encouraging within a short- to medium-term follow-up. 
Focal therapy is a strategy by which the overtreatment burden of the current prostate cancer pathway could be reduced, but robust comparative effectiveness studies are now required.", "title": "" }, { "docid": "7c804a568854a80af9d5c564a270d079", "text": "Large-scale online ride-sharing platforms have substantially transformed our lives by reallocating transportation resources to alleviate traffic congestion and promote transportation efficiency. An efficient fleet management strategy not only can significantly improve the utilization of transportation resources but also increase the revenue and customer satisfaction. It is a challenging task to design an effective fleet management strategy that can adapt to an environment involving complex dynamics between demand and supply. Existing studies usually work on a simplified problem setting that can hardly capture the complicated stochastic demand-supply variations in high-dimensional space. In this paper we propose to tackle the large-scale fleet management problem using reinforcement learning, and propose a contextual multi-agent reinforcement learning framework including two concrete algorithms, namely contextual deep Q-learning and contextual multi-agent actor-critic, to achieve explicit coordination among a large number of agents adaptive to different contexts. We show significant improvements of the proposed framework over state-of-the-art approaches through extensive empirical studies.", "title": "" } ]
scidocsrr
dc96831996a36d8aa710ed1bb7506a91
Interactivity and user participation in the television lifecycle: creating, sharing, and controlling content
[ { "docid": "6dca3d00b378482c3ea261df9a4b259c", "text": "We explore the generation of interactive computer graphics at digital set-top boxes in place of the fixed graphics that were embedded to the television video before the broadcast. This direction raises new requirements for user interface development, since the graphics are merged with video at each set-top box dynamically, without the traditional quality control from the television producers. Besides the technical issues, interactive computer graphics for television should be evaluated by television viewers. We employ an animated character in an interactive music television application that was evaluated by consumers, and was developed using the Virtual Channel Control Library, a custom high-level API, that was built using Microsoft Windows and TV technologies. r 2004 Elsevier Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "1c891aa5787d52497f8869011b234440", "text": "This paper compares different indexing techniques proposed for supporting efficient access to temporal data. The comparison is based on a collection of important performance criteria, including the space consumed, update processing, and query time for representative queries. The comparison is based on worst-case analysis, hence no assumptions on data distribution or query frequencies are made. When a number of methods have the same asymptotic worst-case behavior, features in the methods that affect average case behavior are discussed. Additional criteria examined are the pagination of an index, the ability to cluster related data together, and the ability to efficiently separate old from current data (so that larger archival storage media such as write-once optical disks can be used). The purpose of the paper is to identify the difficult problems in accessing temporal data and describe how the different methods aim to solve them. A general lower bound for answering basic temporal queries is also introduced.", "title": "" }, { "docid": "6b5455a7e5b93cd754c0ad90a7181a4d", "text": "This paper reports an exploration of the concept of social intelligence in the context of designing home dialogue systems for an Ambient Intelligence home. It describes a Wizard of Oz experiment involving a robotic interface capable of simulating several human social behaviours. Our results show that endowing a home dialogue system with some social intelligence will: (a) create a positive bias in the user’s perception of technology in the home environment, (b) enhance user acceptance for the home dialogue system, and (c) trigger social behaviours by the user in relation to the home dialogue system. q 2005 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "7e439ac3ff2304b6e1aaa098ff44b0cb", "text": "Geological structures, such as faults and fractures, appear as image discontinuities or lineaments in remote sensing data. Geologic lineament mapping is a very important issue in geo-engineering, especially for construction site selection, seismic, and risk assessment, mineral exploration and hydrogeological research. Classical methods of lineaments extraction are based on semi-automated (or visual) interpretation of optical data and digital elevation models. We developed a freely available Matlab based toolbox TecLines (Tectonic Lineament Analysis) for locating and quantifying lineament patterns using satellite data and digital elevation models. TecLines consists of a set of functions including frequency filtering, spatial filtering, tensor voting, Hough transformation, and polynomial fitting. Due to differences in the mathematical background of the edge detection and edge linking procedure as well as the breadth of the methods, we introduce the approach in two-parts. In this first study, we present the steps that lead to edge detection. We introduce the data pre-processing using selected filters in spatial and frequency domains. We then describe the application of the tensor-voting framework to improve position and length accuracies of the detected lineaments. We demonstrate the robustness of the approach in a complex area in the northeast of Afghanistan using a panchromatic QUICKBIRD-2 image with 1-meter resolution. Finally, we compare the results of TecLines with manual lineament extraction, and other lineament extraction algorithms, as well as a published fault map of the study area. OPEN ACCESS Remote Sens. 
", "title": "" }, { "docid": "f09f5d7e0f75d4b0fdbd8c40860c4473", "text": "Purpose – The purpose of this paper is to examine the diffusion of a popular Korean music video on the video-sharing web site YouTube. It applies a webometric approach in the diffusion of innovations framework to study three elements of diffusion in a Web 2.0 environment: users, user-to-user relationships and user-generated comments. Design/methodology/approach – The webometric approach combines profile analyses, social network analyses, and semantic and sentiment analyses. Findings – The results show that male users in the US played a dominant role in the early-stage diffusion. The dominant users represented the innovators and early adopters in the evaluation stage of the diffusion, and they engaged in continuous discussions about the cultural origin of the video and expressed criticisms. Overall, the discussion between users varied according to their gender, age, and cultural background. Specifically, male users were more interactive than female users, and users in countries culturally similar to Korea were more likely to express favourable attitudes toward the video. Originality/value – The study provides a webometric approach to examine the Web 2.0-based social system in the early-stage global diffusion of cultural offerings. This approach connects the diffusion of innovations framework to the new context of Web 2.0-based diffusion.", "title": "" }, { "docid": "179e9c0672086798e74fa1197a0fda21", "text": "Narcissism is typically viewed as a dimensional construct in social psychology. Direct evidence supporting this position is lacking, however, and recent research suggests that clinical measures of narcissism exhibit categorical properties. It is therefore unclear whether social psychological researchers should conceptualize narcissism as a category or continuum. To help remedy this, the latent structure of narcissism, measured by the Narcissistic Personality Inventory (NPI), was examined using 3895 participants and three taxometric procedures. Results suggest that NPI scores are distributed dimensionally. There is no apparent shift from “normal” to “narcissist” observed across the NPI continuum. This is consistent with the prevailing view of narcissism in social psychology and suggests that narcissism is structured similarly to other aspects of general personality. This also suggests a difference in how narcissism is structured in clinical versus social psychology. © 2007 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "bff620a959d9cf9d1fe261967e674faf", "text": "In-memory columnar databases such as SAP HANA achieve extreme performance by means of vector processing over logical units of main memory resident columns. The core in-memory algorithms can be challenged when the working set of an application does not fit into main memory. To deal with memory pressure, most in-memory columnar databases evict candidate columns (or tables) using a set of heuristics gleaned from recent workload. As an alternative approach, we propose to reduce the unit of load and eviction from column to a contiguous portion of the in-memory columnar representation, which we call a page. In this paper, we adapt the core algorithms to be able to operate with partially loaded columns while preserving the performance benefits of vector processing. Our approach has two key advantages.
First, partial column loading reduces the mandatory memory footprint for each column, making more memory available for other purposes. Second, partial eviction extends the in-memory lifetime of partially loaded column. We present a new in-memory columnar implementation for our approach, that we term page loadable column. We design a new persistency layout and access algorithms for the encoded data vector of the column, the order-preserving dictionary, and the inverted index. We compare the performance attributes of page loadable columns with those of regular in-memory columns and present a use-case for page loadable columns for cold data in data aging scenarios. Page loadable columns are completely integrated in SAP HANA, and we present extensive experimental results that quantify the performance overhead and the resource consumption when these columns are deployed.", "title": "" }, { "docid": "d3b2283ce3815576a084f98c34f37358", "text": "We present a system for the detection of the stance of headlines with regard to their corresponding article bodies. The approach can be applied in fake news, especially clickbait detection scenarios. The component is part of a larger platform for the curation of digital content; we consider veracity and relevancy an increasingly important part of curating online information. We want to contribute to the debate on how to deal with fake news and related online phenomena with technological means, by providing means to separate related from unrelated headlines and further classifying the related headlines. On a publicly available data set annotated for the stance of headlines with regard to their corresponding article bodies, we achieve a (weighted) accuracy score of 89.59.", "title": "" }, { "docid": "6f0283efa932663c83cc2c63d19fd6cf", "text": "Most research that explores the emotional state of users of spoken dialog systems does not fully utilize the contextual nature that the dialog structure provides. This paper reports results of machine learning experiments designed to automatically classify the emotional state of user turns using a corpus of 5,690 dialogs collected with the “How May I Help You” spoken dialog system. We show that augmenting standard lexical and prosodic features with contextual features that exploit the structure of spoken dialog and track user state increases classification accuracy by 2.6%.", "title": "" }, { "docid": "41b8c1b04f11f5ac86d1d6e696007036", "text": "The neural systems involved in hearing and repeating single words were investigated in a series of experiments using PET. Neuropsychological and psycholinguistic studies implicate the involvement of posterior and anterior left perisylvian regions (Wernicke's and Broca's areas). Although previous functional neuroimaging studies have consistently shown activation of Wernicke's area, there has been only variable implication of Broca's area. This study demonstrates that Broca's area is involved in both auditory word perception and repetition but activation is dependent on task (greater during repetition than hearing) and stimulus presentation (greater when hearing words at a slow rate). The peak of frontal activation in response to hearing words is anterior to that associated with repeating words; the former is probably located in Brodmann's area 45, the latter in Brodmann's area 44 and the adjacent precentral sulcus. 
As Broca's area activation is more subtle and complex than that in Wernicke's area during these tasks, the likelihood of observing it is influenced by both the study design and the image analysis technique employed. As a secondary outcome from the study, the response of bilateral auditory association cortex to 'own voice' during repetition was shown to be the same as when listening to \"other voice' from a prerecorded tape.", "title": "" }, { "docid": "b51fcfa32dbcdcbcc49f1635b44601ed", "text": "An adjusted rank correlation test is proposed as a technique for identifying publication bias in a meta-analysis, and its operating characteristics are evaluated via simulations. The test statistic is a direct statistical analogue of the popular \"funnel-graph.\" The number of component studies in the meta-analysis, the nature of the selection mechanism, the range of variances of the effect size estimates, and the true underlying effect size are all observed to be influential in determining the power of the test. The test is fairly powerful for large meta-analyses with 75 component studies, but has only moderate power for meta-analyses with 25 component studies. However, in many of the configurations in which there is low power, there is also relatively little bias in the summary effect size estimate. Nonetheless, the test must be interpreted with caution in small meta-analyses. In particular, bias cannot be ruled out if the test is not significant. The proposed technique has potential utility as an exploratory tool for meta-analysts, as a formal procedure to complement the funnel-graph.", "title": "" }, { "docid": "a1de64b46c7ef05b624f8dccc8bf5abe", "text": "Rumors may potentially cause undesirable effect such as the widespread panic in the general public. Especially, with the unprecedented growth of different types of social and enterprise networks, rumors could reach a larger audience than before. Many researchers have proposed different approaches to analyze and detect rumors in social networks. However, most of them either study on theoretical models without real data experiments or use content-based analysis and limited information diffusion analysis without fully considering social interactions. In this paper, we propose a social interaction based model FAST by taking four major properties of social interactions into account including familiarity, activeness, similarity, and trustworthiness. Also, we evaluate our model on real data from Sina Weibo (Twitter-like social network in China), which contains around 200 million tweets and 14 million Weibo users. Based on our model, we create a new metrics Fractional Directed Power Community Index (FD-PCI) derived from μ-PCI to identify the influential spreaders in social networks. FD-PCI shows better performance than conventional metrics such as K-core index and PageRank. Moreover, we obtain interesting influential features to detect rumors by the comparison between rumor and real news dynamics.", "title": "" }, { "docid": "d94d0db91e65bde2b1918ca95cc275bb", "text": "This study was undertaken to investigate the positive and negative effects of excessive Internet use on undergraduate students. The Internet Effect Scale (IES), especially constructed by the authors to determine these effects, consisted of seven dimensions namely: behavioral problems, interpersonal problems, educational problems, psychological problems, physical problems, Internet abuse, and positive effects. 
The sample consisted of 200 undergraduate students studying at the GC University Lahore, Pakistan. A set of Pearson Product Moment correlations showed positive associations between time spent on the Internet and various dimensions of the IES indicating that excessive Internet use can lead to a host of problems of educational, physical, psychological and interpersonal nature. However, a greater number of students reported positive than negative effects of Internet use. Without negating the advantages of Internet, the current findings suggest that Internet use should be within reasonable limits focusing more on activities enhancing one's productivity.", "title": "" }, { "docid": "8aa43fa820c3e2504c08d597e3fb4c69", "text": "BACKGROUND\nThe use of cannabis, or marijuana, for medicinal purposes is deeply rooted though history, dating back to ancient times. It once held a prominent position in the history of medicine, recommended by many eminent physicians for numerous diseases, particularly headache and migraine. Through the decades, this plant has taken a fascinating journey from a legal and frequently prescribed status to illegal, driven by political and social factors rather than by science. However, with an abundance of growing support for its multitude of medicinal uses, the misguided stigma of cannabis is fading, and there has been a dramatic push for legalizing medicinal cannabis and research. Almost half of the United States has now legalized medicinal cannabis, several states have legalized recreational use, and others have legalized cannabidiol-only use, which is one of many therapeutic cannabinoids extracted from cannabis. Physicians need to be educated on the history, pharmacology, clinical indications, and proper clinical use of cannabis, as patients will inevitably inquire about it for many diseases, including chronic pain and headache disorders for which there is some intriguing supportive evidence.\n\n\nOBJECTIVE\nTo review the history of medicinal cannabis use, discuss the pharmacology and physiology of the endocannabinoid system and cannabis-derived cannabinoids, perform a comprehensive literature review of the clinical uses of medicinal cannabis and cannabinoids with a focus on migraine and other headache disorders, and outline general clinical practice guidelines.\n\n\nCONCLUSION\nThe literature suggests that the medicinal use of cannabis may have a therapeutic role for a multitude of diseases, particularly chronic pain disorders including headache. Supporting literature suggests a role for medicinal cannabis and cannabinoids in several types of headache disorders including migraine and cluster headache, although it is primarily limited to case based, anecdotal, or laboratory-based scientific research. Cannabis contains an extensive number of pharmacological and biochemical compounds, of which only a minority are understood, so many potential therapeutic uses likely remain undiscovered. Cannabinoids appear to modulate and interact at many pathways inherent to migraine, triptan mechanisms ofaction, and opiate pathways, suggesting potential synergistic or similar benefits. Modulation of the endocannabinoid system through agonism or antagonism of its receptors, targeting its metabolic pathways, or combining cannabinoids with other analgesics for synergistic effects, may provide the foundation for many new classes of medications. 
Despite the limited evidence and research suggesting a role for cannabis and cannabinoids in some headache disorders, randomized clinical trials are lacking and necessary for confirmation and further evaluation.", "title": "" }, { "docid": "244116ffa1ed424fc8519eedc7062277", "text": "This paper describes a method of automatic placement for standard cells (polycells) that yields areas within 10-20 percent of careful hand placements. The method is based on graph partitioning to identify groups of modules that ought to be close to each other, and a technique for properly accounting for external connections at each level of partitioning. The placement procedure is in production use as part of an automated design system; it has been used in the design of more than 40 chips, in CMOS, NMOS, and bipolar technologies.", "title": "" }, { "docid": "fb1c4605eb6663fdd04e9ac1579e7ff0", "text": "We present an enhanced autonomous indoor navigation system for a stock quadcopter drone where all navigation commands are derived off-board on a base station. The base station processes the video stream transmitted from a forward-facing camera on the drone to determine the drone's physical disposition and trajectory in building hallways to derive steering commands that are sent to the drone. Off-board processing and the lack of on-board sensors for localizing the drone permits standard mid-range quadcopters to be used and conserves the limited power source on the quadcopter. We introduce improved and new techniques, compared to our prototype system [1], to maintain stable flights, estimate distance to hallway intersections and describe algorithms to stop the drone ahead of time and turn correctly at intersections.", "title": "" }, { "docid": "16a3bf4df6fb8e61efad6f053f1c6f9c", "text": "The objective of this paper is to improve large scale visual object retrieval for visual place recognition. Geo-localization based on a visual query is made difficult by plenty of non-distinctive features which commonly occur in imagery of urban environments, such as generic modern windows, doors, cars, trees, etc. The focus of this work is to adapt standard Hamming Embedding retrieval system to account for varying descriptor distinctiveness. To this end, we propose a novel method for efficiently estimating distinctiveness of all database descriptors, based on estimating local descriptor density everywhere in the descriptor space. In contrast to all competing methods, the (unsupervised) training time for our method (DisLoc) is linear in the number database descriptors and takes only a 100 seconds on a single CPU core for a 1 million image database. Furthermore, the added memory requirements are negligible (1%). The method is evaluated on standard publicly available large-scale place recognition benchmarks containing street-view imagery of Pittsburgh and San Francisco. DisLoc is shown to outperform all baselines, while setting the new state-of-the-art on both benchmarks. The method is compatible with spatial reranking, which further improves recognition results. Finally, we also demonstrate that 7% of the least distinctive features can be removed, therefore reducing storage requirements and improving retrieval speed, without any loss in place recognition accuracy.", "title": "" }, { "docid": "085d7cf044dd6a0d3505308484b58eb6", "text": "The success of software development using third party components highly depends on the ability to select a suitable component for the intended application. 
The evidence shows that there is limited knowledge about current industrial OTS selection practices. As a result, there is often a gap between theory and practice, and the proposed methods for supporting selection are rarely adopted in industrial practice. This paper’s goal is to investigate the actual industrial practice of component selection in order to provide an initial empirical basis that allows the reconciliation of research and industrial endeavors. The study consisted of semi-structured interviews with 23 employees from 20 different software-intensive companies that mostly develop web information system applications. It provides qualitative information that helps to further understand these practices and emphasizes some aspects that have been overlooked by researchers. For instance, although the literature claims that component repositories are important for locating reusable components, these are hardly used in industrial practice. Instead, other resources that have not received considerable attention are used with this aim. Practices and potential market niches for software-intensive companies have also been identified. The results are valuable from both the research and the industrial perspectives, as they provide a basis for formulating well-substantiated hypotheses and more effective improvements.", "title": "" }, { "docid": "ddd8c2c44ecb82f7892bed163610f4aa", "text": "Our aim is to make shape memory alloys (SMAs) accessible and visible as creative crafting materials by combining them with paper. In this paper, we begin by presenting mechanisms for actuating paper with SMAs along with a set of design guidelines for achieving dramatic movement. We then describe how we tested the usability and educational potential of one of these mechanisms in a workshop where participants, age 9 to 15, made actuated electronic origami cranes. We found that participants were able to successfully build constructions integrating SMAs and paper, that they enjoyed doing so, and were able to learn skills like circuitry design and soldering over the course of the workshop.", "title": "" }, { "docid": "a52bac75c0b605c6205572a2c35444bb", "text": "This article reviews five approximate statistical tests for determining whether one learning algorithm outperforms another on a particular learning task. These tests are compared experimentally to determine their probability of incorrectly detecting a difference when no difference exists (type I error). Two widely used statistical tests are shown to have a high probability of type I error in certain situations and should never be used: a test for the difference of two proportions and a paired-differences t test based on taking several random train-test splits. A third test, a paired-differences t test based on 10-fold cross-validation, exhibits somewhat elevated probability of type I error. A fourth test, McNemar's test, is shown to have low type I error. The fifth test is a new test, the 5×2 cv test, based on five iterations of twofold cross-validation. Experiments show that this test also has acceptable type I error. The article also measures the power (ability to detect algorithm differences when they do exist) of these tests. The cross-validated t test is the most powerful. The 5×2 cv test is shown to be slightly more powerful than McNemar's test. The choice of the best test is determined by the computational cost of running the learning algorithm.
For algorithms that can be executed only once, McNemar's test is the only test with acceptable type I error. For algorithms that can be executed 10 times, the 5×2 cv test is recommended, because it is slightly more powerful and because it directly measures variation due to the choice of training set.", "title": "" }, { "docid": "e9353d465c5dfd8af684d4e09407ea28", "text": "An overview of the main contributions that introduced the use of nonresonating modes for the realization of pseudoelliptic narrowband waveguide filters is presented. The following are also highlighted: early work using asymmetric irises; oversized H-plane cavity; transverse magnetic cavity; TM dual-mode cavity; and multiple cavity filters.", "title": "" } ]
scidocsrr
16ecba48aca55c2e6a6c7fc6c0e5b65d
Unfolding Temporal Dynamics: Predicting Social Media Popularity Using Multi-scale Temporal Decomposition
[ { "docid": "1a6a7f515aa19b3525989f2cc4aa514f", "text": "Hundreds of thousands of photographs are uploaded to the internet every minute through various social networking and photo sharing platforms. While some images get millions of views, others are completely ignored. Even from the same users, different photographs receive different number of views. This begs the question: What makes a photograph popular? Can we predict the number of views a photograph will receive even before it is uploaded? These are some of the questions we address in this work. We investigate two key components of an image that affect its popularity, namely the image content and social context. Using a dataset of about 2.3 million images from Flickr, we demonstrate that we can reliably predict the normalized view count of images with a rank correlation of 0.81 using both image content and social cues. In this paper, we show the importance of image cues such as color, gradients, deep learning features and the set of objects present, as well as the importance of various social cues such as number of friends or number of photos uploaded that lead to high or low popularity of images.", "title": "" }, { "docid": "62f8eb0e7eafe1c0d857dadc72008684", "text": "In the current Web 2.0 era, the popularity of Web resources fluctuates ephemerally, based on trends and social interest. As a result, content-based relevance signals are insufficient to meet users' constantly evolving information needs in searching for Web 2.0 items. Incorporating future popularity into ranking is one way to counter this. However, predicting popularity as a third party (as in the case of general search engines) is difficult in practice, due to their limited access to item view histories. To enable popularity prediction externally without excessive crawling, we propose an alternative solution by leveraging user comments, which are more accessible than view counts. Due to the sparsity of comments, traditional solutions that are solely based on view histories do not perform well. To deal with this sparsity, we mine comments to recover additional signal, such as social influence. By modeling comments as a time-aware bipartite graph, we propose a regularization-based ranking algorithm that accounts for temporal, social influence and current popularity factors to predict the future popularity of items. Experimental results on three real-world datasets --- crawled from YouTube, Flickr and Last.fm --- show that our method consistently outperforms competitive baselines in several evaluation tasks.", "title": "" }, { "docid": "1459f6bf9ebf153277f49a0791e2cf6d", "text": "Content popularity prediction finds application in many areas, including media advertising, content caching, movie revenue estimation, traffic management and macro-economic trends forecasting, to name a few. However, predicting this popularity is difficult due to, among others, the effects of external phenomena, the influence of context such as locality and relevance to users,and the difficulty of forecasting information cascades.\n In this paper we identify patterns of temporal evolution that are generalisable to distinct types of data, and show that we can (1) accurately classify content based on the evolution of its popularity over time and (2) predict the value of the content's future popularity. 
We verify the generality of our method by testing it on YouTube, Digg and Vimeo data sets and find our results to outperform the K-Means baseline when classifying the behaviour of content and the linear regression baseline when predicting its popularity.", "title": "" } ]
[ { "docid": "d3e409b074c4c26eb208b27b7b58a928", "text": "The increase in concern for carbon emission and reduction in natural resources for conventional power generation, the renewable energy based generation such as Wind, Photovoltaic (PV), and Fuel cell has gained importance. Out of which the PV based generation has gained significance due to availability of abundant sunlight. As the Solar power conversion is a low efficient conversion process, accurate and reliable, modeling of solar cell is important. Due to the non-linear nature of diode based PV model, the accurate design of PV cell is a difficult task. A built-in model of PV cell is available in Simscape, Simelectronics library, Matlab. The equivalent circuit parameters have to be computed from data sheet and incorporated into the model. However it acts as a stiff source when implemented with a MPPT controller. Henceforth, to overcome this drawback, in this paper a two-diode model of PV cell is implemented in Matlab Simulink with reduced four required parameters along with similar configuration of the built-in model. This model allows incorporation of MPPT controller. The I-V and P-V characteristics of these two models are investigated under different insolation levels. A PV based generation system feeding a DC load is designed and investigated using these two models and further implemented with MPPT based on P&O technique.", "title": "" }, { "docid": "739669a06f0fbe94f5c21e1b0b514345", "text": "This paper proposes an image dehazing model built with a convolutional neural network (CNN), called All-in-One Dehazing Network (AOD-Net). It is designed based on a re-formulated atmospheric scattering model. Instead of estimating the transmission matrix and the atmospheric light separately as most previous models did, AOD-Net directly generates the clean image through a light-weight CNN. Such a novel end-to-end design makes it easy to embed AOD-Net into other deep models, e.g., Faster R-CNN, for improving high-level tasks on hazy images. Experimental results on both synthesized and natural hazy image datasets demonstrate our superior performance than the state-of-the-art in terms of PSNR, SSIM and the subjective visual quality. Furthermore, when concatenating AOD-Net with Faster R-CNN, we witness a large improvement of the object detection performance on hazy images.", "title": "" }, { "docid": "ef7069ddd470608196bbeef5e8fda49d", "text": "ETHNOPHARMACOLOGICAL RELEVANCE\nNigella sativa (N. sativa) L. (Ranunculaceae), well known as black cumin, has been used as a herbal medicine that has a rich historical background. It has been traditionally and clinically used in the treatment of several diseases. Many reviews have investigated this valuable plant, but none of them focused on its clinical effects. Therefore, the aim of the present review is to provide a comprehensive report of clinical studies on N. sativa and some of its constituents.\n\n\nMATERIALS AND METHODS\nStudies on the clinical effects of N. 
sativa and its main constituent, thymoquinone, which were published between 1979 and 2015, were searched using various databases.\n\n\nRESULTS AND DISCUSSION\nDuring the last three decades, several in vivo and in vitro animal studies revealed the pharmacological properties of the plant, including its antioxidant, antibacterial, antiproliferative, proapoptotic, anti-inflammatory, and antiepileptic properties, and its effect on improvement in atherogenesis, endothelial dysfunction, glucose metabolism, lipid profile dysfunction, and prevention of hippocampus pyramidal cell loss. In clinical studies, antimicrobial, antioxidant, anti-inflammatory, antitumor, and antidiabetic properties as well as therapeutic effects on metabolic syndrome, and gastrointestinal, neuronal, cardiovascular, respiratory, urinary, and reproductive disorders were found in N. sativa and its constituents.\n\n\nCONCLUSION\nExtensive basic and clinical studies on N. sativa seed powder, oil, extracts (aqueous, ethanolic, and methanolic), and thymoquinone showed valuable therapeutic effects on different disorders with a wide range of safe doses. However, there were some confounding factors in the reviewed clinical trials, and a few of them presented data about the phytochemical composition of the plant. Therefore, a more standard clinical trial with N. sativa supplementation is needed for the plant to be used as an inexpensive potential biological adjuvant therapy.", "title": "" }, { "docid": "843ae72860a6e8e10c973997be418fb0", "text": "Connective-tissue growth factor (CTGF) is a modular secreted protein implicated in multiple cellular events such as chondrogenesis, skeletogenesis, angiogenesis and wound healing. CTGF contains four different structural modules. This modular organization is characteristic of members of the CCN family. The acronym was derived from the first three members discovered, cysteine-rich 61 (CYR61), CTGF and nephroblastoma overexpressed (NOV). CTGF is implicated as a mediator of important cell processes such as adhesion, migration, proliferation and differentiation. Extensive data have shown that CTGF interacts particularly with the TGFβ, WNT and MAPK signaling pathways. The capacity of CTGF to interact with different growth factors lends it an important role during early and late development, especially in the anterior region of the embryo. ctgf knockout mice have several cranio-facial defects, and the skeletal system is also greatly affected due to an impairment of the vascular-system development during chondrogenesis. This study, for the first time, indicated that CTGF is a potent inductor of gliogenesis during development. Our results showed that in vitro addition of recombinant CTGF protein to an embryonic mouse neural precursor cell culture increased the number of GFAP- and GFAP/Nestin-positive cells. Surprisingly, CTGF also increased the number of Sox2-positive cells. Moreover, this induction seemed not to involve cell proliferation. In addition, exogenous CTGF activated p44/42 but not p38 or JNK MAPK signaling, and increased the expression and deposition of the fibronectin extracellular matrix protein. Finally, CTGF was also able to induce GFAP as well as Nestin expression in a human malignant glioma stem cell line, suggesting a possible role in the differentiation process of gliomas. These results implicate ctgf as a key gene for astrogenesis during development, and suggest that its mechanism may involve activation of p44/42 MAPK signaling. 
Additionally, CTGF-induced differentiation of glioblastoma stem cells into a less-tumorigenic state could increase the chances of successful intervention, since differentiated cells are more vulnerable to cancer treatments.", "title": "" }, { "docid": "bae1f44165387e086868efecf318ecd2", "text": "Clustering graphs under the Stochastic Block Model (SBM) and extensions are well studied. Guarantees of correctness exist under the assumption that the data is sampled from a model. In this paper, we propose a framework, in which we obtain “correctness” guarantees without assuming the data comes from a model. The guarantees we obtain depend instead on the statistics of the data that can be checked. We also show that this framework ties in with the existing model-based framework, and that we can exploit results in model-based recovery, as well as strengthen the results existing in that area of research.", "title": "" }, { "docid": "e04ee56394f3d1056e3755461c010911", "text": "MOTIVATION\nSingle Molecule Real-Time (SMRT) sequencing technology and Oxford Nanopore technologies (ONT) produce reads over 10 kb in length, which have enabled high-quality genome assembly at an affordable cost. However, at present, long reads have an error rate as high as 10-15%. Complex and computationally intensive pipelines are required to assemble such reads.\n\n\nRESULTS\nWe present a new mapper, minimap and a de novo assembler, miniasm, for efficiently mapping and assembling SMRT and ONT reads without an error correction stage. They can often assemble a sequencing run of bacterial data into a single contig in a few minutes, and assemble 45-fold Caenorhabditis elegans data in 9 min, orders of magnitude faster than the existing pipelines, though the consensus sequence error rate is as high as raw reads. We also introduce a pairwise read mapping format and a graphical fragment assembly format, and demonstrate the interoperability between ours and current tools.\n\n\nAVAILABILITY AND IMPLEMENTATION\nhttps://github.com/lh3/minimap and https://github.com/lh3/miniasm\n\n\nCONTACT\nhengli@broadinstitute.org\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online.", "title": "" }, { "docid": "31e6da3635ec5f538f15a7b3e2d95e5b", "text": "Smart electricity meters are currently deployed in millions of households to collect detailed individual electricity consumption data. Compared with traditional electricity data based on aggregated consumption, smart meter data are much more volatile and less predictable. There is a need within the energy industry for probabilistic forecasts of household electricity consumption to quantify the uncertainty of future electricity demand in order to undertake appropriate planning of generation and distribution. We propose to estimate an additive quantile regression model for a set of quantiles of the future distribution using a boosting procedure. By doing so, we can benefit from flexible and interpretable models, which include an automatic variable selection. We compare our approach with three benchmark methods on both aggregated and disaggregated scales using a smart meter data set collected from 3639 households in Ireland at 30-min intervals over a period of 1.5 years. 
The empirical results demonstrate that our approach based on quantile regression provides better forecast accuracy for disaggregated demand, while the traditional approach based on a normality assumption (possibly after an appropriate Box-Cox transformation) is a better approximation for aggregated demand. These results are particularly useful since more energy data will become available at the disaggregated level in the future.", "title": "" }, { "docid": "b3f5176f49b467413d172134b1734ed8", "text": "Commonsense reasoning is a long-standing challenge for deep learning. For example, it is difficult to use neural networks to tackle the Winograd Schema dataset [1]. In this paper, we present a simple method for commonsense reasoning with neural networks, using unsupervised learning. Key to our method is the use of language models, trained on a massive amount of unlabled data, to score multiple choice questions posed by commonsense reasoning tests. On both Pronoun Disambiguation and Winograd Schema challenges, our models outperform previous state-of-the-art methods by a large margin, without using expensive annotated knowledge bases or hand-engineered features. We train an array of large RNN language models that operate at word or character level on LM-1-Billion, CommonCrawl, SQuAD, Gutenberg Books, and a customized corpus for this task and show that diversity of training data plays an important role in test performance. Further analysis also shows that our system successfully discovers important features of the context that decide the correct answer, indicating a good grasp of commonsense knowledge.", "title": "" }, { "docid": "51be236c79d1af7a2aff62a8049fba34", "text": "BACKGROUND\nAs the number of children diagnosed with autism continues to rise, resources must be available to support parents of children with autism and their families. Parents need help as they assess their unique situations, reach out for help in their communities, and work to decrease their stress levels by using appropriate coping strategies that will benefit their entire family.\n\n\nMETHODS\nA descriptive, correlational, cross-sectional study was conducted with 75 parents/primary caregivers of children with autism. Using the McCubbin and Patterson model of family behavior, adaptive behaviors of children with autism, family support networks, parenting stress, and parent coping were measured.\n\n\nFINDINGS AND CONCLUSIONS\nAn association between low adaptive functioning in children with autism and increased parenting stress creates a need for additional family support as parents search for different coping strategies to assist the family with ongoing and new challenges. Professionals should have up-to-date knowledge of the supports available to families and refer families to appropriate resources to avoid overwhelming them with unnecessary and inappropriate referrals.", "title": "" }, { "docid": "2cc97c407494310f500525b938e8aaa4", "text": "OBJECTIVE\nIn this paper, we aim to investigate the effect of computer-aided triage system, which is implemented for the health checkup of lung lesions involving tens of thousands of chest X-rays (CXRs) that are required for diagnosis. Therefore, high accuracy of diagnosis by an automated system can reduce the radiologist's workload on scrutinizing the medical images.\n\n\nMETHOD\nWe present a deep learning model in order to efficiently detect abnormal levels or identify normal levels during mass chest screening so as to obtain the probability confidence of the CXRs. 
Moreover, a convolutional sparse denoising autoencoder is designed to compute the reconstruction error. We employ four publicly available radiology datasets pertaining to CXRs, analyze their reports, and utilize their images for mining the correct disease level of the CXRs that are to be submitted to a computer-aided triaging system. Based on our approach, we combine the votes of multiple classifiers into a final decision that determines which of the three levels (i.e., normal, abnormal, or uncertain) each CXR falls into.\n\n\nRESULTS\nWe deal only with grade diagnosis for physical examination and propose multiple new metric indices. Combining predictors for classification by using the area under a receiver operating characteristic curve, we observe that the final decision is related to the threshold from the reconstruction error and the probability value. Our method achieves promising results in terms of precision of 98.7% and 94.3% on the normal and abnormal cases, respectively.\n\n\nCONCLUSION\nThe results achieved by the proposed framework show superiority in classifying the disease level with high accuracy. This can potentially save radiologists time and effort, allowing them to focus on higher-risk CXRs.", "title": "" }, { "docid": "c399a885345466505cfbaf8c175533b7", "text": "Science is going through two rapidly changing phenomena: one is the increasing capabilities of computers and software tools, from terabytes to petabytes and beyond, and the other is the advancement in high-throughput molecular biology producing piles of data related to genomes, transcriptomes, proteomes, metabolomes, interactomes, and so on. Biology has become a data-intensive science and, as a consequence, biology and computer science have become complementary to each other, bridged by other branches of science such as statistics, mathematics, physics, and chemistry. The combination of versatile knowledge has caused the advent of big-data biology, network biology, and other new branches of biology. Network biology, for instance, facilitates the system-level understanding of the cell or cellular components and subprocesses. It is often also referred to as systems biology. The purpose of this field is to understand organisms or cells as a whole at various levels of functions and mechanisms. Systems biology is now facing the challenges of analyzing big molecular biological data and huge biological networks. This review gives an overview of the progress in big-data biology and data handling, and also introduces some applications of networks and multivariate analysis in systems biology.", "title": "" }, { "docid": "17cc2f4ae2286d36748b203492d406e6", "text": "In this paper, we consider sentence simplification as a special form of translation with the complex sentence as the source and the simple sentence as the target. We propose a Tree-based Simplification Model (TSM), which, to our knowledge, is the first statistical simplification model covering splitting, dropping, reordering and substitution integrally. We also describe an efficient method to train our model with a large-scale parallel dataset obtained from Wikipedia and Simple Wikipedia. The evaluation shows that our model achieves better readability scores than a set of baseline systems.", "title": "" }, { "docid": "f69f8b58e926a8a4573dd650ee29f80b", "text": "Zab is a crash-recovery atomic broadcast algorithm we designed for the ZooKeeper coordination service.
ZooKeeper implements a primary-backup scheme in which a primary process executes client operations and uses Zab to propagate the corresponding incremental state changes to backup processes. Due to the dependence of an incremental state change on the sequence of changes previously generated, Zab must guarantee that if it delivers a given state change, then all other changes it depends upon must be delivered first. Since primaries may crash, Zab must satisfy this requirement despite crashes of primaries.", "title": "" }, { "docid": "5cb8c778f0672d88241cc22da9347415", "text": "Phishing websites, fraudulent sites that impersonate a trusted third party to gain access to private data, continue to cost Internet users over a billion dollars each year. In this paper, we describe the design and performance characteristics of a scalable machine learning classifier we developed to detect phishing websites. We use this classifier to maintain Google’s phishing blacklist automatically. Our classifier analyzes millions of pages a day, examining the URL and the contents of a page to determine whether or not a page is phishing. Unlike previous work in this field, we train the classifier on a noisy dataset consisting of millions of samples from previously collected live classification data. Despite the noise in the training data, our classifier learns a robust model for identifying phishing pages which correctly classifies more than 90% of phishing pages several weeks after training concludes.", "title": "" }, { "docid": "3bf0b58ebb852f7bbe2159ced917f3e1", "text": "Noun extraction is very important for many NLP applications such as information retrieval, automatic text classification, and information extraction. Most of the previous Korean noun extraction systems use a morphological analyzer or a Part-of-Speech (POS) tagger. Therefore, they require a great deal of linguistic knowledge, such as morpheme dictionaries and rules (e.g. morphosyntactic rules and morphological rules). This paper proposes a new noun extraction method that uses a syllable-based word recognition model. It finds the most probable syllable-tag sequence of the input sentence by using automatically acquired statistical information from the POS tagged corpus and extracts nouns by detecting word boundaries. Furthermore, it does not require any labor for constructing and maintaining linguistic knowledge. We have performed various experiments with a wide range of variables influencing the performance. The experimental results show that, without morphological analysis or POS tagging, the proposed method achieves performance comparable to the previous methods.", "title": "" }, { "docid": "1608c56c79af07858527473b2b0262de", "text": "The field weakening control strategy of an interior permanent magnet synchronous motor for electric vehicles was studied in this paper. A field weakening control method based on gradient descent along the voltage limit ellipse and a modified current reference setting was proposed. The field weakening region was determined by the angle between the constant torque direction and the descent direction of the voltage limit ellipse. The descent direction of the voltage limit ellipse was calculated using the gradient descent method. The current reference was modified by the field weakening direction and the magnitude of the voltage error according to the field weakening region.
A simulation model was also built in Matlab/Simulink, and the validity of the proposed strategy was verified by the simulation results.", "title": "" }, { "docid": "de9c4e83639f399fe8b11af450b86283", "text": "This paper deals with sentence-level sentiment classification. Though a variety of neural network models have been proposed recently, previous models either depend on expensive phrase-level annotation, most of which have remarkably degraded performance when trained with only sentence-level annotation; or do not fully employ linguistic resources (e.g., sentiment lexicons, negation words, intensity words). In this paper, we propose simple models trained with sentence-level annotation, but also attempt to model the linguistic role of sentiment lexicons, negation words, and intensity words. Results show that our models are able to capture the linguistic role of sentiment words, negation words, and intensity words in sentiment expression.", "title": "" }, { "docid": "ba696260b6b5ae71f4558e4c1addeebd", "text": "Over the last 100 years, many studies have been performed to determine the biochemical and histopathological phenomena that mark the origin of neoplasms. At the end of the last century, the leading paradigm, which is currently well rooted, considered the origin of neoplasms to be a set of genetic and/or epigenetic mutations, stochastic and independent in a single cell, or rather, a stochastic monoclonal pattern. However, in the last 20 years, two important areas of research have underlined numerous limitations and incongruities of this pattern, the hypothesis of the so-called cancer stem cell theory and a revaluation of several alterations in metabolic networks that are typical of the neoplastic cell, the so-called Warburg effect. Even if this specific \"metabolic sign\" has been known for more than 85 years, only in the last few years has it been given more attention; therefore, the so-called Warburg hypothesis has been used in multiple and independent surveys. Based on an accurate analysis of a series of considerations and of biophysical thermodynamic events in the literature, we will demonstrate a homogeneous pattern of the cancer stem cell theory, of the Warburg hypothesis and of the stochastic monoclonal pattern; this pattern could contribute considerably as the first basis of the development of a new uniform theory on the origin of neoplasms. Thus, a new possible epistemological paradigm is represented; this paradigm considers the Warburg effect as a specific \"metabolic sign\" reflecting the stem origin of the neoplastic cell, where, in this specific metabolic order, an essential reason for the genetic instability that is intrinsic to the neoplastic cell is defined.", "title": "" }, { "docid": "6a917d1c159c8445b82ac50f3f06f9d4", "text": "As renewable energy increasingly penetrates into power grid systems, new challenges arise for system operators to keep the systems reliable under uncertain circumstances, while ensuring high utilization of renewable energy. With the naturally intermittent renewable energy, such as wind energy, playing more important roles, system robustness becomes a must. In this paper, we propose a robust optimization approach to accommodate wind output uncertainty, with the objective of providing a robust unit commitment schedule for the thermal generators in the day-ahead market that minimizes the total cost under the worst wind power output scenario.
Robust optimization models the randomness using an uncertainty set which includes the worst-case scenario, and protects against this scenario with a minimal increase in cost. In our approach, the power system will be more reliable because the worst-case scenario has been considered. In addition, we introduce a variable to control the conservatism of our model, by which we can avoid over-protection. By considering pumped-storage units, the total cost is reduced significantly.", "title": "" }, { "docid": "25f0871346c370db4b26aecd08a9d75e", "text": "This review presents a comprehensive discussion of the key technical issues in woody biomass pretreatment: barriers to efficient cellulose saccharification, pretreatment energy consumption, in particular energy consumed for wood-size reduction, and criteria to evaluate the performance of a pretreatment. A post-chemical pretreatment size-reduction approach is proposed to significantly reduce mechanical energy consumption. Because the ultimate goal of biofuel production is net energy output, a concept of pretreatment energy efficiency (kg/MJ) based on the total sugar recovery (kg/kg wood) divided by the energy consumption in pretreatment (MJ/kg wood) is defined. It is then used to evaluate the performances of three of the most promising pretreatment technologies: steam explosion, organosolv, and sulfite pretreatment to overcome lignocelluloses recalcitrance (SPORL) for softwood pretreatment. The present study found that SPORL is the most efficient process and produced the highest sugar yield. Other important issues, such as the effects of lignin on substrate saccharification and the effects of pretreatment on high-value lignin utilization in woody biomass pretreatment, are also discussed.", "title": "" } ]
scidocsrr
55e447290affe7d23a649566df16c300
Predicting collective sentiment dynamics from time-series social media
[ { "docid": "c3525081c0f4eec01069dd4bd5ef12ab", "text": "More than twelve years have elapsed since the first public release of WEKA. In that time, the software has been rewritten entirely from scratch, evolved substantially and now accompanies a text on data mining [35]. These days, WEKA enjoys widespread acceptance in both academia and business, has an active community, and has been downloaded more than 1.4 million times since being placed on Source-Forge in April 2000. This paper provides an introduction to the WEKA workbench, reviews the history of the project, and, in light of the recent 3.6 stable release, briefly discusses what has been added since the last stable version (Weka 3.4) released in 2003.", "title": "" }, { "docid": "ddb2ba1118e28acf687208bff99ce53a", "text": "We show that information about social relationships can be used to improve user-level sentiment analysis. The main motivation behind our approach is that users that are somehow \"connected\" may be more likely to hold similar opinions; therefore, relationship information can complement what we can extract about a user's viewpoints from their utterances. Employing Twitter as a source for our experimental data, and working within a semi-supervised framework, we propose models that are induced either from the Twitter follower/followee network or from the network in Twitter formed by users referring to each other using \"@\" mentions. Our transductive learning results reveal that incorporating social-network information can indeed lead to statistically significant sentiment classification improvements over the performance of an approach based on Support Vector Machines having access only to textual features.", "title": "" }, { "docid": "8a2586b1059534c5a23bac9c1cc59906", "text": "The web contains a wealth of product reviews, but sifting through them is a daunting task. Ideally, an opinion mining tool would process a set of search results for a given item, generating a list of product attributes (quality, features, etc.) and aggregating opinions about each of them (poor, mixed, good). We begin by identifying the unique properties of this problem and develop a method for automatically distinguishing between positive and negative reviews. Our classifier draws on information retrieval techniques for feature extraction and scoring, and the results for various metrics and heuristics vary depending on the testing situation. The best methods work as well as or better than traditional machine learning. When operating on individual sentences collected from web searches, performance is limited due to noise and ambiguity. But in the context of a complete web-based tool and aided by a simple method for grouping sentences into attributes, the results are qualitatively quite useful.", "title": "" } ]
[ { "docid": "f79e5a2b19bb51e8dc0017342a153fee", "text": "Decentralized ledger-based cryptocurrencies like Bitcoin present a way to construct payment systems without trusted banks. However, the anonymity of Bitcoin is fragile. Many altcoins and protocols are designed to improve Bitcoin on this issue, among which Zerocash is the first fullfledged anonymous ledger-based currency, using zero-knowledge proof, specifically zk-SNARK, to protect privacy. However, Zerocash suffers two problems: poor scalability and low efficiency. In this paper, we address the above issues by constructing a micropayment system in Zerocash called Z-Channel. First, we improve Zerocash to support multisignature and time lock functionalities, and prove that the reconstructed scheme is secure. Then we construct Z-Channel based on the improved Zerocash scheme. Our experiments demonstrate that Z-Channel significantly improves the scalability and reduces the confirmation time for Zerocash payments.", "title": "" }, { "docid": "18f739a605222415afdea4f725201fba", "text": "I discuss open theoretical questions pertaining to the modified dynamics (MOND)–a proposed alternative to dark matter, which posits a breakdown of Newtonian dynamics in the limit of small accelerations. In particular, I point the reasons for thinking that MOND is an effective theory–perhaps, despite appearance, not even in conflict with GR. I then contrast the two interpretations of MOND as modified gravity and as modified inertia. I describe two mechanical models that are described by potential theories similar to (non-relativistic) MOND: a potential-flow model, and a membrane model. These might shed some light on a possible origin of MOND. The possible involvement of vacuum effects is also speculated on.", "title": "" }, { "docid": "f857000c14d894b7d487556436b19cb0", "text": "Since IO performance on HPC machines strongly depends on machine characteristics and configuration, it is important to carefully tune IO libraries and make good use of appropriate library APIs. For instance, on current petascale machines, independent IO tends to outperform collective IO, in part due to bottlenecks at the metadata server. The problem is exacerbated by scaling issues, since each IO library scales differently on each machine, and typically, operates efficiently to different levels of scaling on different machines. With scientific codes being run on a variety of HPC resources, efficient code execution requires us to address three important issues: (1) end users should be able to select the most efficient IO methods for their codes, with minimal effort in terms of code updates or alterations; (2) such performance-driven choices should not prevent data from being stored in the desired file formats, since those are crucial for later data analysis; and (3) it is important to have efficient ways of identifying and selecting certain data for analysis, to help end users cope with the flood of data produced by high end codes. This paper employs ADIOS, the ADaptable IO System, as an IO API to address (1)–(3) above. Concerning (1), ADIOS makes it possible to independently select the IO methods being used by each grouping of data in an application, so that end users can use those IO methods that exhibit best performance based on both IO patterns and the underlying hardware. In this paper, we also use this facility of ADIOS to experimentally evaluate on petascale machines alternative methods for high performance IO. 
Specific examples studied include methods that use strong file consistency vs. delayed parallel data consistency, such as that provided by MPI-IO or POSIX IO. Concerning (2), to avoid linking IO methods to specific file formats and attain high IO performance, ADIOS introduces an efficient intermediate file format, termed BP, which can be converted, at small cost, to the standard file formats used by analysis tools, such as NetCDF and HDF-5. Concerning (3), associated with BP are efficient methods for data characterization, which compute attributes that can be used to identify data sets without having to inspect or analyze the entire data contents of large files.", "title": "" }, { "docid": "da14a995b0a061a7045497be46eab411", "text": "Fully homomorphic encryption supports meaningful computations on encrypted data, and hence, is widely used in cloud computing and big data environments. Recently, Li et al. constructed an efficient symmetric fully homomorphic encryption scheme and utilized it to design a privacy-preserving outsourced association rule mining scheme. Their proposal allows multiple data owners to jointly mine some association rules without sacrificing data privacy. The security of the homomorphic encryption scheme against known-plaintext attacks was established by examining the hardness of solving nonlinear systems. However, in this paper, we illustrate that the security of Li et al.’s homomorphic encryption is overvalued. First, we show that we can recover the first part of the secret key from several known plaintext/ciphertext pairs with the continued fraction algorithm. Second, we find that we can retrieve the second part of the secret key through the Euclidean algorithm for the greatest common divisor problem. Experiments on the suggested parameters demonstrate that in the case of more than two homomorphic multiplications, all the secret keys of the randomly instantiated Li et al.’s encryption schemes can be very efficiently recovered, and the success probability is at least 98% for one homomorphic multiplication.", "title": "" }, { "docid": "02c1c424e4511219cc2e857a3c39de32", "text": "We propose a unified architecture for next generation cognitive, low cost, mobile internet. The end user platform is able to scale as per the application and network requirements. It takes computing out of the data center and into the end user platform. The internet enables open standards, accessible computing and applications programmability on a commodity platform. The architecture is a super-set of present day infrastructure web computing. The Java virtual machine (JVM) derives from the stack architecture. Applications can be developed and deployed on a multitude of host platforms. O(1)→ O(N). Computing and the internet today are more accessible and available to the larger community. Machine learning has made extensive advances with the availability of modern computing. It is used widely in NLP, Computer Vision, Deep learning and AI. A prototype device for mobile could contain N compute and N MB of memory. Keywords— mobile, server, internet", "title": "" }, { "docid": "c8eb002905d817848ad7dadf31fb6875", "text": "Cruising for on-street parking causes traffic congestion.
This paper first points out that the combination of low prices for on-street parking and high prices for off-street parking increases the incentive for drivers to search for on-street parking spaces. Through a survey of cruising for parking in Los Angeles, the author obtains average cruising time and distance, as well as the corresponding vehicle-miles traveled, waste of time, extra fuel consumption, and CO2 emissions. The author concludes that when the occupancy rate of on-street parking is about 85%, the demand and supply of parking are in balance and the price is just right. Taking the parking charges and revenue usage in Redwood City, California as an example, the paper discusses parking increment finance. Finally, the paper emphasizes that it is necessary to charge properly for on-street parking and spend the resulting revenue to improve local public services in order to reduce traffic congestion and greenhouse gas emissions, and improve neighborhoods. Keywords: parking management; on-street parking; vehicle cruising; parking pricing; revenue management", "title": "" }, { "docid": "06129167c187b96e3c064e05c2b475f8", "text": "Elderly patients with acute myeloid leukemia (AML) who are refractory to or relapse following frontline treatment constitute a poor-risk group with a poor long-term outcome. Host-related factors and unfavorable disease-related features contribute to early treatment failures following frontline therapy, thus making attainment of remission and long-term survival with salvage therapy particularly challenging for elderly patients. Currently, no optimal salvage strategy exists for responding patients, and allogeneic hematopoietic stem cell transplant is the only curative option in this setting; however, the vast majority of elderly patients are not candidates for this procedure due to poor functional status secondary to age and age-related comorbidities. Furthermore, the lack of effective salvage programs available for elderly patients with recurrent AML underscores the need for therapies that consistently yield durable remissions or durable control of their disease. The purpose of this review was to highlight the currently available strategies, as well as future strategies under development, for treating older patients with recurrent AML.", "title": "" }, { "docid": "63c842f58bdbbeecabaf6c61d8f891c4", "text": "iii Acknowledgements iv List of Tables viii List of Figures ix Chapters", "title": "" }, { "docid": "1e0a6e7dc26c4fa074044606e983b0dc", "text": "Automated and accurate classification of MR brain images is extremely important for medical analysis and interpretation. Over the last decade numerous methods have already been proposed. In this paper, we presented a novel method to classify a given MR brain image as normal or abnormal. The proposed method first employed wavelet transform to extract features from images, followed by applying principal component analysis (PCA) to reduce the dimensions of the features. The reduced features were submitted to a kernel support vector machine (KSVM). The strategy of K-fold stratified cross validation was used to enhance the generalization of the KSVM. We chose seven common brain diseases (glioma, meningioma, Alzheimer’s disease, Alzheimer’s disease plus visual agnosia, Pick’s disease, sarcoma, and Huntington’s disease) as abnormal brains, and collected 160 MR brain images (20 normal and 140 abnormal) from the Harvard Medical School website. We performed our proposed methods with four different kernels, and found that the GRB kernel achieves the highest classification accuracy of 99.38%.
The LIN, HPOL, and IPOL kernels achieve 95%, 96.88%, and 98.12%, respectively. We also compared our method to those from the literature of the last decade, and the results showed that our DWT+PCA+KSVM with the GRB kernel still achieved the most accurate classification results. The average processing time for a 256 × 256 image on an IBM laptop with a 3 GHz P4 processor and 2 GB RAM is 0.0448 s. From the experimental data, our method was effective and rapid. It could be applied to the field of MR brain image classification and can assist doctors in diagnosing whether a patient is normal or abnormal to a certain degree.", "title": "" }, { "docid": "f355ce69c36dc68fb6509528d92bf07c", "text": "The problem of position estimation in sensor networks using a combination of distance and angle information as well as pure angle information is discussed. For this purpose, a semidefinite programming relaxation based method that has been demonstrated on pure distance information is extended to solve the problem. Practical considerations such as the effect of noise and computational effort are also addressed. In particular, a random constraint selection method to minimize the number of constraints in the problem formulation is described. The performance evaluation of the technique with regard to estimation accuracy and computation time is also presented by means of extensive simulations.", "title": "" }, { "docid": "cc7aa8b5b581c3e1996189411ca09235", "text": "Owing to a number of reasons, the deployment of encryption solutions is beginning to be ubiquitous at both organizational and individual levels. The most emphasized reason is the necessity to ensure confidentiality of privileged information. Unfortunately, it is also popular as cyber-criminals' escape route from the grasp of digital forensic investigations. The direct encryption of data or indirect encryption of storage devices, more often than not, prevents access to such information contained therein. This consequently leaves the forensics investigation team, and subsequently the prosecution, little or no evidence to work with, in sixty percent of such cases. However, it is unthinkable to jeopardize the successes brought by encryption technology to information security, in favour of digital forensics technology. This paper examines what data encryption contributes to information security, and then highlights its contributions to digital forensics of disk drives. The paper also discusses the available ways and tools, in digital forensics, to get around the problems constituted by encryption. Particular attention is paid to the TrueCrypt encryption solution to illustrate the ideas being discussed. It then compares encryption's contributions in both realms, to justify the need for introduction of new technologies to forensically defeat data encryption as the only solution, whilst maintaining the privacy goal of users. Keywords—Encryption; Information Security; Digital Forensics; Anti-Forensics; Cryptography; TrueCrypt", "title": "" }, { "docid": "cc92afb28b6b19179a57eb2e17ce500d", "text": "Frequency-related parameters derived from the uterine electromyogram (EMG) signals are widely used in many pregnancy monitoring and preterm delivery prediction studies. Although they are classical parameters, they are well suited for quantifying uterine EMG signals and have many advantages over amplitude-related parameters.
The present work aims to compare various frequency-related parameters according to their classification performances (pregnancy vs. labor) using the receiver operating characteristic (ROC) curve analysis. The comparison between the parameters indicates that median frequency is the best frequency-related parameter that can be used for distinguishing between pregnancy and labor contractions. We conclude that median frequency can be the representative frequency-related parameter for classification problems of uterine EMG.", "title": "" }, { "docid": "b78d5e7047d340ebef8f4e80d28ab4d9", "text": "Light scattering and color change are two major sources of distortion for underwater photography. Light scattering is caused by light incident on objects reflected and deflected multiple times by particles present in the water before reaching the camera. This in turn lowers the visibility and contrast of the image captured. Color change corresponds to the varying degrees of attenuation encountered by light traveling in the water with different wavelengths, rendering ambient underwater environments dominated by a bluish tone. No existing underwater processing techniques can handle light scattering and color change distortions suffered by underwater images, and the possible presence of artificial lighting simultaneously. This paper proposes a novel systematic approach to enhance underwater images by a dehazing algorithm, to compensate the attenuation discrepancy along the propagation path, and to take the influence of the possible presence of an artifical light source into consideration. Once the depth map, i.e., distances between the objects and the camera, is estimated, the foreground and background within a scene are segmented. The light intensities of foreground and background are compared to determine whether an artificial light source is employed during the image capturing process. After compensating the effect of artifical light, the haze phenomenon and discrepancy in wavelength attenuation along the underwater propagation path to camera are corrected. Next, the water depth in the image scene is estimated according to the residual energy ratios of different color channels existing in the background light. Based on the amount of attenuation corresponding to each light wavelength, color change compensation is conducted to restore color balance. The performance of the proposed algorithm for wavelength compensation and image dehazing (WCID) is evaluated both objectively and subjectively by utilizing ground-truth color patches and video downloaded from the Youtube website. Both results demonstrate that images with significantly enhanced visibility and superior color fidelity are obtained by the WCID proposed.", "title": "" }, { "docid": "b0f5ccee91aa2c44f9050a85e7e514e6", "text": "Some of the physiological changes associated with the taper and their relationship with athletic performance are now known. Since the 1980s a number of studies have examined various physiological responses associated with the cardiorespiratory, metabolic, hormonal, neuromuscular and immunological systems during the pre-event taper across a number of sports. Changes in the cardiorespiratory system may include an increase in maximal oxygen uptake, but this is not a necessary prerequisite for taper-induced gains in performance. Oxygen uptake at a given submaximal exercise intensity can decrease during the taper, but this response is more likely to occur in less-skilled athletes. 
Resting, maximal and submaximal heart rates do not change, unless athletes show clear signs of overreaching before the taper. Blood pressure, cardiac dimensions and ventilatory function are generally stable, but submaximal ventilation may decrease. Possible haematological changes include increased blood and red cell volume, haemoglobin, haematocrit, reticulocytes and haptoglobin, and decreased red cell distribution width. These changes in the taper suggest a positive balance between haemolysis and erythropoiesis, likely to contribute to performance gains. Metabolic changes during the taper include: a reduced daily energy expenditure; slightly reduced or stable respiratory exchange ratio; increased peak blood lactate concentration; and decreased or unchanged blood lactate at submaximal intensities. Blood ammonia concentrations show inconsistent trends, muscle glycogen concentration increases progressively and calcium retention mechanisms seem to be triggered during the taper. Reduced blood creatine kinase concentrations suggest recovery from training stress and muscle damage, but other biochemical markers of training stress and performance capacity are largely unaffected by the taper. Hormonal markers such as testosterone, cortisol, testosterone : cortisol ratio, 24-hour urinary cortisol : cortisone ratio, plasma and urinary catecholamines, growth hormone and insulin-like growth factor-1 are sometimes affected and changes can correlate with changes in an athlete's performance capacity. From a neuromuscular perspective, the taper usually results in markedly increased muscular strength and power, often associated with performance gains at the muscular and whole body level. Oxidative enzyme activities can increase, along with positive changes in single muscle fibre size, metabolic properties and contractile properties. Limited research on the influence of the taper on athletes' immune status indicates that small changes in immune cells, immunoglobulins and cytokines are unlikely to compromise overall immunological protection. The pre-event taper may also be characterised by psychological changes in the athlete, including a reduction in total mood disturbance and somatic complaints, improved somatic relaxation and self-assessed physical conditioning scores, reduced perception of effort and improved quality of sleep. These changes are often associated with improved post-taper performances. Mathematical models indicate that the physiological changes associated with the taper are the result of a restoration of previously impaired physiological capacities (fatigue and adaptation model), and the capacity to tolerate training and respond effectively to training undertaken during the taper (variable dose-response model). Finally, it is important to note that some or all of the described physiological and psychological changes associated with the taper occur simultaneously, which underpins the integrative nature of relationships between these changes and performance enhancement.", "title": "" }, { "docid": "6bc611936d412dde15999b2eb179c9e2", "text": "Smith-Lemli-Opitz syndrome, a severe developmental disorder associated with multiple congenital anomalies, is caused by a defect of cholesterol biosynthesis. Low cholesterol and high concentrations of its direct precursor, 7-dehydrocholesterol, in plasma and tissues are the diagnostic biochemical hallmarks of the syndrome. The plasma sterol concentrations correlate with severity and disease outcome. 
Mutations in the DHCR7 gene lead to deficient activity of 7-dehydrocholesterol reductase (DHCR7), the final enzyme of the cholesterol biosynthetic pathway. The human DHCR7 gene is localised on chromosome 11q13 and its structure has been characterized. Ninetyone different mutations in the DHCR7 gene have been published to date. This paper is a review of the clinical, biochemical and molecular genetic aspects.", "title": "" }, { "docid": "09fc272a6d9ea954727d07075ecd5bfd", "text": "Deep generative models have recently shown great promise in imitation learning for motor control. Given enough data, even supervised approaches can do one-shot imitation learning; however, they are vulnerable to cascading failures when the agent trajectory diverges from the demonstrations. Compared to purely supervised methods, Generative Adversarial Imitation Learning (GAIL) can learn more robust controllers from fewer demonstrations, but is inherently mode-seeking and more difficult to train. In this paper, we show how to combine the favourable aspects of these two approaches. The base of our model is a new type of variational autoencoder on demonstration trajectories that learns semantic policy embeddings. We show that these embeddings can be learned on a 9 DoF Jaco robot arm in reaching tasks, and then smoothly interpolated with a resulting smooth interpolation of reaching behavior. Leveraging these policy representations, we develop a new version of GAIL that (1) is much more robust than the purely-supervised controller, especially with few demonstrations, and (2) avoids mode collapse, capturing many diverse behaviors when GAIL on its own does not. We demonstrate our approach on learning diverse gaits from demonstration on a 2D biped and a 62 DoF 3D humanoid in the MuJoCo physics environment.", "title": "" }, { "docid": "941df83e65700bc2e5ee7226b96e4f54", "text": "This paper presents design and analysis of a three phase induction motor drive using IGBT‟s at the inverter power stage with volts hertz control (V/F) in closed loop using dsPIC30F2010 as a controller. It is a 16 bit high-performance digital signal controller (DSC). DSC is a single chip embedded controller that integrates the controller attributes of a microcontroller with the computation and throughput capabilities of a DSP in a single core. A 1HP, 3-phase, 415V, 50Hz induction motor is used as load for the inverter. Digital Storage Oscilloscope Textronix TDS2024B is used to record and analyze the various waveforms. The experimental results for V/F control of 3Phase induction motor using dsPIC30F2010 chip clearly shows constant volts per hertz and stable inverter line to line output voltage. Keywords--DSC, constant volts per hertz, PWM inverter, ACIM.", "title": "" }, { "docid": "5201e874c8751cddba3a358a2f1df998", "text": "Ururau is a free and open-source software written in the Java programming language. The use of the software allows the construction of discrete simulation models on the three layers that constitute the structure of the software. It means that the models can be developed in the upper layer of the graphic interface faster and of simple programming in the core of the software (new commands for example) or in the lower layer of the software libraries. 
The use of the Ururau has made the accomplishment of research of new functions and algorithms in the field of discrete event systems simulation possible.", "title": "" }, { "docid": "703c3db31145f716ade2bb611f6a1352", "text": "With recent advances in mobile learning (m-learning), it is becoming possible for learning activities to occur everywhere. The learner model presented in our earlier work was partitioned into smaller elements in the form of learner profiles, which collectively represent the entire learning process. This paper presents an Adaptive Neuro-Fuzzy Inference System (ANFIS) for delivering adapted learning content to mobile learners. The ANFIS model was designed using trial and error based on various experiments. This study was conducted to illustrate that ANFIS is effective with hybrid learning, for the adaptation of learning content according to learners' needs. Study results show that ANFIS has been successfully implemented for learning content adaptation within different learning context scenarios. The performance of the ANFIS model was evaluated using standard error measurements which revealed the optimal setting necessary for better predictability. The MATLAB simulation results indicate that the performance of the ANFIS approach is valuable and easy to implement. The study results are based on analysis of different model settings; they confirm that the m-learning application is functional. However, it should be noted that an increase in the number of inputs being considered by the model will increase the system response time, and hence the delay for the mobile learner.", "title": "" }, { "docid": "93e43e11c10e39880c68d2fb0fccd634", "text": "In this paper, we present a novel probabilistic generative model for multi-object traffic scene understanding from movable platforms which reasons jointly about the 3D scene layout as well as the location and orientation of objects in the scene. In particular, the scene topology, geometry, and traffic activities are inferred from short video sequences. Inspired by the impressive driving capabilities of humans, our model does not rely on GPS, lidar, or map knowledge. Instead, it takes advantage of a diverse set of visual cues in the form of vehicle tracklets, vanishing points, semantic scene labels, scene flow, and occupancy grids. For each of these cues, we propose likelihood functions that are integrated into a probabilistic generative model. We learn all model parameters from training data using contrastive divergence. Experiments conducted on videos of 113 representative intersections show that our approach successfully infers the correct layout in a variety of very challenging scenarios. To evaluate the importance of each feature cue, experiments using different feature combinations are conducted. Furthermore, we show how by employing context derived from the proposed method we are able to improve over the state-of-the-art in terms of object detection and object orientation estimation in challenging and cluttered urban environments.", "title": "" } ]
scidocsrr
cd39b5d7eea9c74db401c30bbb4ed091
Human Effort and Machine Learnability in Computer Aided Translation
[ { "docid": "03d11f57ae5fbd09f10baaf4c9a29a55", "text": "The standard approach to computer-aided language translation is post-editing: a machine generates a single translation that a human translator corrects. Recent studies have shown this simple technique to be surprisingly effective, yet it underutilizes the complementary strengths of precision-oriented humans and recall-oriented machines. We present Predictive Translation Memory, an interactive, mixed-initiative system for human language translation. Translators build translations incrementally by considering machine suggestions that update according to the user's current partial translation. In a large-scale study, we find that professional translators are slightly slower in the interactive mode yet produce slightly higher quality translations despite significant prior experience with the baseline post-editing condition. Our analysis identifies significant predictors of time and quality, and also characterizes interactive aid usage. Subjects entered over 99% of characters via interactive aids, a significantly higher fraction than that shown in previous work.", "title": "" } ]
[ { "docid": "3bf9e696755c939308efbcca363d4f49", "text": "Robotic navigation requires that the robotic platform have an idea of its location and orientation within the environment. This localization is known as pose estimation, and has been a much researched topic. There are currently two main categories of pose estimation techniques: pose from hardware, and pose from video (PfV). Hardware pose estimation utilizes specialized hardware such as Global Positioning Systems (GPS) and Inertial Navigation Systems (INS) to estimate the position and orientation of the platform at the specified times. PfV systems use video cameras to estimate the pose of the system by calculating the inter-frame motion of the camera from features present in the images. These pose estimation systems are readily integrated, and can be used to augment and/or supplant each other according to the needs of the application. Both pose from video and hardware pose estimation have their uses, but each also has its degenerate cases in which they fail to provide reliable data. Hardware solutions can provide extremely accurate data, but are usually quite pricey and can be restrictive in their environments of operation. Pose from video solutions can be implemented with low-cost off-the-shelf components, but the accuracy of the PfV results can be degraded by noisy imagery, ambiguity in the feature matching process, and moving objects. This paper attempts to evaluate the cost/benefit comparison between pose from video and hardware pose estimation experimentally, and to provide a guide as to which systems should be used under certain scenarios.", "title": "" }, { "docid": "be989252cdad4886613f53c7831454cb", "text": "Stress and cortisol are known to impair memory retrieval of well-consolidated declarative material. The effects of cortisol on memory retrieval may in particular be due to glucocorticoid (GC) receptors in the hippocampus and prefrontal cortex (PFC). Therefore, effects of stress and cortisol should be observable on both hippocampal-dependent declarative memory retrieval and PFC-dependent working memory (WM). In the present study, it was tested whether psychosocial stress would impair both WM and memory retrieval in 20 young healthy men. In addition, the association between cortisol levels and cognitive performance was assessed. It was found that stress impaired WM at high loads, but not at low loads in a Sternberg paradigm. High cortisol levels at the time of testing were associated with slow WM performance at high loads, and with impaired recall of moderately emotional, but not of highly emotional paragraphs. Furthermore, performance at high WM loads was associated with memory retrieval. These data extend previous results of pharmacological studies in finding WM impairments after acute stress at high workloads and cortisol-related retrieval impairments.", "title": "" }, { "docid": "84a9af22a0fa5a755b750ddf914360f9", "text": "Pancreatic cancer has one of the worst survival rates amongst all forms of cancer because its symptoms manifest later into the progression of the disease. One of those symptoms is jaundice, the yellow discoloration of the skin and sclera due to the buildup of bilirubin in the blood. Jaundice is only recognizable to the naked eye in severe stages, but a ubiquitous test using computer vision and machine learning can detect milder forms of jaundice. 
We propose BiliScreen, a smartphone app that captures pictures of the eye and produces an estimate of a person's bilirubin level, even at levels normally undetectable by the human eye. We test two low-cost accessories that reduce the effects of external lighting: (1) a 3D-printed box that controls the eyes' exposure to light and (2) paper glasses with colored squares for calibration. In a 70-person clinical study, we found that BiliScreen with the box achieves a Pearson correlation coefficient of 0.89 and a mean error of -0.09 ± 2.76 mg/dl in predicting a person's bilirubin level. As a screening tool, BiliScreen identifies cases of concern with a sensitivity of 89.7% and a specificity of 96.8% with the box accessory.", "title": "" }, { "docid": "adfe05c7e0cebf76c3f6cf7f84c7523e", "text": "Mass detection from mammograms plays a crucial role as a pre- processing stage for mass segmentation and classification. The detection of masses from mammograms is considered to be a challenging problem due to their large variation in shape, size, boundary and texture and also because of their low signal to noise ratio compared to the surrounding breast tissue. In this paper, we present a novel approach for detecting masses in mammograms using a cascade of deep learning and random forest classifiers. The first stage classifier consists of a multi-scale deep belief network that selects suspicious regions to be further processed by a two-level cascade of deep convolutional neural networks. The regions that survive this deep learning analysis are then processed by a two-level cascade of random forest classifiers that use morphological and texture features extracted from regions selected along the cascade. Finally, regions that survive the cascade of random forest classifiers are combined using connected component analysis to produce state-of-the-art results. We also show that the proposed cascade of deep learning and random forest classifiers are effective in the reduction of false positive regions, while maintaining a high true positive detection rate. We tested our mass detection system on two publicly available datasets: DDSM-BCRP and INbreast. The final mass detection produced by our approach achieves the best results on these publicly available datasets with a true positive rate of 0.96 ± 0.03 at 1.2 false positive per image on INbreast and true positive rate of 0.75 at 4.8 false positive per image on DDSM-BCRP.", "title": "" }, { "docid": "776c532ef66ba1ed7b4709e6b4aaec4e", "text": "This paper presents a method of learning deep AND-OR Grammar (AOG) networks for visual recognition, which we term AOGNets. An AOGNet consists of a number of stages each of which is composed of a number of AOG building blocks. An AOG building block is designed based on a principled AND-OR grammar and represented by a hierarchical and compositional AND-OR graph [33, 46]. Each node applies some basic operation (e.g., Conv-BatchNormReLU) to its input. There are three types of nodes: an AND-node explores composition, whose input is computed by concatenating features of its child nodes; an OR-node represents alternative ways of composition in the spirit of exploitation, whose input is the element-wise sum of features of its child nodes; and a Terminal-node takes as input a channel-wise slice of the input feature map of the AOG building block. AOGNets aim to harness the best of two worlds (grammar models and deep neural networks) in representation learning with end-to-end training. 
In experiments, AOGNets are tested on three highly competitive image classification benchmarks: CIFAR-10, CIFAR-100 and ImageNet-1K. AOGNets obtain better performance than the widely used Residual Net [14] and most of its variants, and are comparable to the Dense Net [18]. AOGNets are also tested in object detection on the PASCAL VOC 2007 and 2012 [8] using the vanilla Faster RCNN [30] system and obtain better performance than the Residual Net.", "title": "" }, { "docid": "bae9ef12c49cf04f385103516656f7e7", "text": "Single image rain streaks removal has recently witnessed substantial progress due to the development of deep convolutional neural networks. However, existing deep learning based methods either focus on the entrance and exit of the network by decomposing the input image into high and low frequency information and employing residual learning to reduce the mapping range, or focus on the introduction of cascaded learning scheme to decompose the task of rain streaks removal into multi-stages. These methods treat the convolutional neural network as an encapsulated end-to-end mapping module without deepening into the rationality and superiority of neural network design. In this paper, we delve into an effective end-to-end neural network structure for stronger feature expression and spatial correlation learning. Specifically, we propose a non-locally enhanced encoder-decoder network framework, which consists of a pooling indices embedded encoder-decoder network to efficiently learn increasingly abstract feature representation for more accurate rain streaks modeling while perfectly preserving the image detail. The proposed encoder-decoder framework is composed of a series of non-locally enhanced dense blocks that are designed to not only fully exploit hierarchical features from all the convolutional layers but also well capture the long-distance dependencies and structural information. Extensive experiments on synthetic and real datasets demonstrate that the proposed method can effectively remove rain-streaks on rainy image of various densities while well preserving the image details, which achieves significant improvements over the recent state-of-the-art methods.", "title": "" }, { "docid": "777243cb514414dd225a9d5f41dc49b7", "text": "We have built and tested a decision tool which will help organisations properly select one business process maturity model (BPMM) over another. This prototype consists of a novel questionnaire with decision criteria for BPMM selection, linked to a unique data set of 69 BPMMs. Fourteen criteria (questions) were elicited from an international Delphi study, and weighed by the analytical hierarchy process. Case studies have shown (non-)profit and academic applications. Our purpose was to describe criteria that enable an informed BPMM choice (conform to decision-making theories, rather than ad hoc). Moreover, we propose a design process for building BPMM decision tools. 2013 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "79020f32ea93c9e9789bb3546cde1016", "text": "Within software engineering, requirements engineering starts from imprecise and vague user requirements descriptions and infers precise, formalized specifications. Techniques, such as interviewing by requirements engineers, are typically applied to identify the user's needs. We want to partially automate even this first step of requirements elicitation by methods of evolutionary computation. 
The idea is to enable users to specify their desired software by listing examples of behavioral descriptions. Users initially specify two lists of operation sequences, one with desired behaviors and one with forbidden behaviors. Then, we search for the appropriate formal software specification in the form of a deterministic finite automaton. We solve this problem known as grammatical inference with an active coevolutionary approach following Bongard and Lipson [2]. The coevolutionary process alternates between two phases: (A) additional training data is actively proposed by an evolutionary process and the user is interactively asked to label it; (B) appropriate automata are then evolved to solve this extended grammatical inference problem. Our approach leverages multi-objective evolution in both phases and outperforms the state-of-the-art technique [2] for input alphabet sizes of three and more, which are relevant to our problem domain of requirements specification.", "title": "" }, { "docid": "ee027c9ee2f66bc6cf6fb32a5697ee49", "text": "Patellofemoral pain (PFP) is a very common problem in athletes who participate in jumping, cutting and pivoting sports. Several risk factors may play a part in the pathogenesis of PFP. Overuse, trauma and intrinsic risk factors are particularly important among athletes. Physical examination has a key role in PFP diagnosis. Furthermore, common risk factors should be investigated, such as hip muscle dysfunction, poor core muscle endurance, muscular tightness, excessive foot pronation and patellar malalignment. Imaging is seldom needed, and only in special cases. Many possible interventions are recommended for PFP management. Due to the multifactorial nature of PFP, the clinical approach should be individualized, and the contribution of different factors should be considered and managed accordingly. In most cases, activity modification and rehabilitation should be tried before any surgical interventions.", "title": "" }, { "docid": "6559d77de48d153153ce77b0e2969793", "text": "1 This paper is an invited chapter to be published in the Handbooks in Operations Research and Management Science: Supply Chain Management, edited by Steve Graves and Ton de Kok and published by North-Holland. I would like to thank the many people that carefully read and commented on the first draft of this manuscript: Ravi Anupindi, Fangruo Chen, Charles Corbett, James Dana, Ananth Iyer, Ton de Kok, Yigal Gerchak, Mark Ferguson, Marty Lariviere, Serguei Netessine, Ediel Pinker, Nils Rudi, Sridhar Seshadri, Terry Taylor and Kevin Weng. I am, of course, responsible for all remaining errors. Comments, of course, are still quite welcomed.", "title": "" }, { "docid": "f6985bb539d71f569028bf31a87e4a90", "text": "Tinea capitis favosa, a chronic inflammatory dermatophyte infection of the scalp, affects over 90% of patients with anthropophilic Trichophyton schoenleinii. T. violaceum, T. verrucosum, zoophilic T. mentagrophytes (referred to as ‘var. quinckeanum’), Microsporum canis, and geophilic M. gypseum have also been recovered from favic lesions. Favus is typically a childhood disease, yet adult cases are not uncommon. Interestingly, favus is less contagious than other dermatophytoses, although intrafamilial infections are reported and have been widely discussed in the literature. Clinical presentation of T. schoenleinii infections is variable: this fungus can be isolated from tinea capitis lesions that appear as gray patches, but symptom-free colonization of the scalp also occurs.
Although in the past T. schoenleinii was the dominant fungus recovered from dermatophytic scalp lesions, worldwide the incidence has decreased except in China, Nigeria, and Iran. Favus of the glabrous skin and nails are reported less frequently than favus of the scalp. This review discusses the clinical features of favus, as well as the etiological agents, global epidemiology, laboratory diagnosis, and a short history of medical mycology.", "title": "" }, { "docid": "c0767c58b4a5e81ddc35d045ccaa137f", "text": "A reinforcement learning agent that needs to pursue different goals across episodes requires a goal-conditional policy. In addition to their potential to generalize desirable behavior to unseen goals, such policies may also enable higher-level planning based on subgoals. In sparse-reward environments, the capacity to exploit information about the degree to which an arbitrary goal has been achieved while another goal was intended appears crucial to enable sample efficient learning. However, reinforcement learning agents have only recently been endowed with such capacity for hindsight. In this paper, we demonstrate how hindsight can be introduced to policy gradient methods, generalizing this idea to a broad class of successful algorithms. Our experiments on a diverse selection of sparse-reward environments show that hindsight leads to a remarkable increase in sample efficiency.", "title": "" }, { "docid": "e1f6e042174d5ca41445711a25903506", "text": "We define and study the link prediction problem in bipartite networks, specializing general link prediction algorithms to the bipartite case. In a graph, a link prediction function of two vertices denotes the similarity or proximity of the vertices. Common link prediction functions for general graphs are defined using paths of length two between two nodes. Since in a bipartite graph adjacency vertices can only be connected by paths of odd lengths, these functions do not apply to bipartite graphs. Instead, a certain class of graph kernels (spectral transformation kernels) can be generalized to bipartite graphs when the positivesemidefinite kernel constraint is relaxed. This generalization is realized by the odd component of the underlying spectral transformation. This construction leads to several new link prediction pseudokernels such as the matrix hyperbolic sine, which we examine for rating graphs, authorship graphs, folksonomies, document–feature networks and other types of bipartite networks.", "title": "" }, { "docid": "b7ee47b961eeba5fa4dd28ce56ab47ee", "text": "Virtual view synthesis from an array of cameras has been an essential element of three-dimensional video broadcasting/conferencing. In this paper, we propose a scheme based on a hybrid camera array consisting of four regular video cameras and one time-of-flight depth camera. During rendering, we use the depth image from the depth camera as initialization, and compute a view-dependent scene geometry using constrained plane sweeping from the regular cameras. View-dependent texture mapping is then deployed to render the scene at the desired virtual viewpoint. Experimental results show that the addition of the time-of-flight depth camera greatly improves the rendering quality compared with an array of regular cameras with similar sparsity. 
In the application of 3D video broadcasting/conferencing, our hybrid camera system demonstrates great potential in reducing the amount of data for compression/streaming while maintaining high rendering quality.", "title": "" }, { "docid": "945ac1f93e8bc636880a8ce3b1d1e18e", "text": "This paper presents the development of a wideband power divider using a radial stub for a six-port interferometer. The performance of the designed power divider is evaluated over the ultra-wideband (UWB) frequency range from 3.1 GHz to 10.6 GHz. The design starts from the conventional Wilkinson power divider. The operating bandwidth of the power divider is improved by introducing a radial stub into the design. To observe the resulting bandwidth enhancement, the performance of the radial-stub power divider is compared with that of the conventional design. The comparison is carried out using the CST Microwave Studio 2010 simulation tool. The overall simulated percentage bandwidth of the radial-stub power divider is 37.5%, covering the 5 to 11 GHz frequency band. To validate the proposed design, the divider is fabricated and its S-parameter performance is measured using a vector network analyzer. The simulated and measured results of the proposed Wilkinson power divider are compared and analyzed.", "title": "" }, { "docid": "cd096d5e7c687facb8fa4edb0c1d3bbf", "text": "We introduce a novel variational method that allows us to approximately integrate out kernel hyperparameters, such as length-scales, in Gaussian process regression. This approach consists of a novel variant of the variational framework that has been recently developed for the Gaussian process latent variable model, which additionally makes use of a standardised representation of the Gaussian process. We consider this technique for learning Mahalanobis distance metrics in a Gaussian process regression setting and provide experimental evaluations and comparisons with existing methods by considering datasets with high-dimensional inputs.", "title": "" }, { "docid": "549d486d6ff362bc016c6ce449e29dc9", "text": "Aging is very often associated with magnesium (Mg) deficit. Total plasma magnesium concentrations are remarkably constant in healthy subjects throughout life, while total body Mg and Mg in the intracellular compartment tend to decrease with age. Dietary Mg deficiencies are common in the elderly population. Other frequent causes of Mg deficits in the elderly include reduced Mg intestinal absorption, reduced Mg bone stores, and excess urinary loss. Secondary Mg deficit in aging may result from different conditions and diseases often observed in the elderly (i.e. insulin resistance and/or type 2 diabetes mellitus) and drugs (i.e. use of hypermagnesuric diuretics). Chronic Mg deficits have been linked to an increased risk of numerous preclinical and clinical outcomes, mostly observed in the elderly population, including hypertension, stroke, atherosclerosis, ischemic heart disease, cardiac arrhythmias, glucose intolerance, insulin resistance, type 2 diabetes mellitus, endothelial dysfunction, vascular remodeling, alterations in lipid metabolism, platelet aggregation/thrombosis, inflammation, oxidative stress, cardiovascular mortality, asthma, chronic fatigue, as well as depression and other neuropsychiatric disorders. Both aging and Mg deficiency have been associated with excessive production of oxygen-derived free radicals and low-grade inflammation.
Chronic inflammation and oxidative stress are also present in several age-related diseases, such as many vascular and metabolic conditions, as well as frailty, muscle loss and sarcopenia, and altered immune responses, among others. Mg deficit associated to aging may be at least one of the pathophysiological links that may help to explain the interactions between inflammation and oxidative stress with the aging process and many age-related diseases.", "title": "" }, { "docid": "031e3f1ae2537b603b4b2119f3dad572", "text": "Efficient storage and querying of large repositories of RDF content is important due to the widespread growth of Semantic Web and Linked Open Data initiatives. Many novel database systems that store RDF in its native form or within traditional relational storage have demonstrated their ability to scale to large volumes of RDF content. However, it is increasingly becoming obvious that the simple dyadic relationship captured through traditional triples alone is not sufficient for modelling multi-entity relationships, provenance of facts, etc. Such richer models are supported in RDF through two techniques - first, called reification which retains the triple nature of RDF and the second, a non-standard extension called N-Quads. In this paper, we explore the challenges of supporting such richer semantic data by extending the state-of-the-art RDF-3X system. We describe our implementation of RQ-RDF-3X, a reification and quad enhanced RDF-3X, which involved a significant re-engineering ranging from the set of indexes and their compression schemes to the query processing pipeline for queries over reified content. Using large RDF repositories such as YAGO2S and DBpedia, and a set of SPARQL queries that utilize reification model, we demonstrate that RQ-RDF-3X is significantly faster than RDF-3X.", "title": "" }, { "docid": "2116414a3e7996d4701b9003a6ccfd15", "text": "Informal genres such as tweets provide large quantities of data in real time, which can be exploited to obtain, through ranking and classification, a succinct summary of the events that occurred. Previous work on tweet ranking and classification mainly focused on salience and social network features or rely on web documents such as online news articles. In this paper, we exploit language independent journalism and content based features to identify news from tweets. We propose a novel newsworthiness classifier trained through active learning and investigate human assessment and automatic methods to encode it on both the tweet and trending topic levels. Our findings show that content and journalism based features proved to be effective for ranking and classifying content on Twitter.", "title": "" } ]
scidocsrr
7b78f2e6fd03c1e50014cf91061db02b
Recurrent convolutional neural networks for object-class segmentation of RGB-D video
[ { "docid": "8a77882cfe06eaa88db529432ed31b0c", "text": "We present an approach to interpret the major surfaces, objects, and support relations of an indoor scene from an RGBD image. Most existing work ignores physical interactions or is applied only to tidy rooms and hallways. Our goal is to parse typical, often messy, indoor scenes into floor, walls, supporting surfaces, and object regions, and to recover support relationships. One of our main interests is to better understand how 3D cues can best inform a structured 3D interpretation. We also contribute a novel integer programming formulation to infer physical support relations. We offer a new dataset of 1449 RGBD images, capturing 464 diverse indoor scenes, with detailed annotations. Our experiments demonstrate our ability to infer support relations in complex scenes and verify that our 3D scene cues and inferred support lead to better object segmentation.", "title": "" } ]
[ { "docid": "60ea2144687d867bb4f6b21e792a8441", "text": "Stochastic gradient descent is a simple approach to find the local minima of a cost function whose evaluations are corrupted by noise. In this paper, we develop a procedure extending stochastic gradient descent algorithms to the case where the function is defined on a Riemannian manifold. We prove that, as in the Euclidean case, the gradient descent algorithm converges to a critical point of the cost function. The algorithm has numerous potential applications, and is illustrated here by four examples. In particular, a novel gossip algorithm on the set of covariance matrices is derived and tested numerically.", "title": "" }, { "docid": "f01970e430e1dce15efdd5a054bd9c2a", "text": "Hierarchical text categorization (HTC) refers to assigning a text document to one or more most suitable categories from a hierarchical category space. In this paper we present two HTC techniques based on kNN and SVM machine learning techniques for the categorization process and byte n-gram based document representation. They are fully language independent and do not require any text preprocessing steps, or any prior information about document content or language. The effectiveness of the presented techniques and their language independence are demonstrated in experiments performed on five tree-structured benchmark category hierarchies that differ in many aspects: Reuters-Hier1, Reuters-Hier2, 15NGHier and 20NGHier in English and TanCorpHier in Chinese. The results obtained are compared with the corresponding flat categorization techniques applied to leaf level categories of the considered hierarchies. While kNN-based flat text categorization produced slightly better results than kNN-based HTC on the largest TanCorpHier and 20NGHier datasets, SVM-based HTC results do not considerably differ from the corresponding flat techniques, due to shallow hierarchies; still, they outperform both kNN-based flat and hierarchical categorization on all corpora except the smallest Reuters-Hier1 and Reuters-Hier2 datasets. Formal evaluation confirmed that the proposed techniques obtained state-of-the-art results.", "title": "" }, { "docid": "cbac071c932c73813630fd7384e4f98c", "text": "In this paper we propose a method that, given a query submitted to a search engine, suggests a list of related queries. The related queries are based on previously issued queries, and can be issued by the user to the search engine to tune or redirect the search process. The method proposed is based on a query clustering process in which groups of semantically similar queries are identified. The clustering process uses the content of historical preferences of users registered in the query log of the search engine. The method not only discovers the related queries, but also ranks them according to a relevance criterion. Finally, we show with experiments over the query log of a search engine the effectiveness of the method.", "title": "" }, { "docid": "ae6a02ee18e3599c65fb9db22706de44", "text": "We use a hierarchical Bayesian approach to model user preferences in different contexts or settings. Unlike many previous recommenders, our approach is content-based. We assume that for each context, a user has a different set of preference weights which are linked by a common, “generic context” set of weights. The approach uses Expectation Maximization (EM) to estimate both the generic context weights and the context-specific weights.
This improves upon many current recommender systems that do not incorporate context into the recommendations they provide. In this paper, we show that by considering contextual information, we can improve our recommendations, demonstrating that it is useful to consider context in giving ratings. Because the approach does not rely on connecting users via collaborative filtering, users are able to interpret contexts in different ways and invent their own", "title": "" }, { "docid": "00ae7d925a12b1f35f33213af08c82c9", "text": "Graph-based approaches have been successful in unsupervised and semi-supervised learning. In this paper, we focus on the real-world applications where the same instance can be represented by multiple heterogeneous features. The key point of utilizing the graph-based knowledge to deal with this kind of data is to reasonably integrate the different representations and obtain the most consistent manifold with the real data distributions. In this paper, we propose a novel framework via the reformulation of the standard spectral learning model, which can be used for multiview clustering and semisupervised tasks. Unlike other methods in the literature, the proposed methods can learn an optimal weight for each graph automatically without introducing an additive parameter as previous methods do. Furthermore, our objective under semisupervised learning is convex and the global optimal result will be obtained. Extensive empirical results on different real-world data sets demonstrate that the proposed methods achieve comparable performance with the state-of-the-art approaches and can be used more practically.", "title": "" }, { "docid": "aa74bb5c6dbb758e0a68e10b1f35f3c9", "text": "College students differ in their approaches to challenging course assignments. While some prefer to begin their assignments early, others postpone their work until the last minute. The present study adds to the procrastination literature by examining the links among self-compassionate attitudes, motivation, and procrastination tendency. A sample of college undergraduates completed four online surveys. Individuals with low, moderate, and high levels of self-compassion were compared on measures of motivation anxiety, achievement goal orientation, and procrastination tendency. Data analyses revealed that individuals with high self-compassion reported dramatically less motivation anxiety and procrastination tendency than those with low or moderate self-compassion. The practical importance of studying self-views as potential triggers for procrastination behavior and directions for future research are discussed.", "title": "" }, { "docid": "30aeb5f14438b03f7cdaee9783273d97", "text": "The status of English grammar teaching in English teaching has weakened and even once disappeared in part English class; until the late 1980s, foreign English teachers had a consistent view of the importance of grammar teaching. In recent years, more and more domestic scholars begin to think about the situation of China and explore the grammar teaching method. 
This article will review the explicit grammar instruction and implicit grammar teaching research, collect and analyze the integration of explicit grammar instruction and implicit grammar teaching strategy and its advantages in the grammar teaching.", "title": "" }, { "docid": "8fd97add7e3b48bad9fd82dc01422e59", "text": "Anaerobic nitrate-dependent Fe(II) oxidation is widespread in various environments and is known to be performed by both heterotrophic and autotrophic microorganisms. Although Fe(II) oxidation is predominantly biological under acidic conditions, to date most of the studies on nitrate-dependent Fe(II) oxidation were from environments of circumneutral pH. The present study was conducted in Lake Grosse Fuchskuhle, a moderately acidic ecosystem receiving humic acids from an adjacent bog, with the objective of identifying, characterizing and enumerating the microorganisms responsible for this process. The incubations of sediment under chemolithotrophic nitrate-dependent Fe(II)-oxidizing conditions have shown the enrichment of TM3 group of uncultured Actinobacteria. A time-course experiment done on these Actinobacteria showed a consumption of Fe(II) and nitrate in accordance with the expected stoichiometry (1:0.2) required for nitrate-dependent Fe(II) oxidation. Quantifications done by most probable number showed the presence of 1 × 104 autotrophic and 1 × 107 heterotrophic nitrate-dependent Fe(II) oxidizers per gram fresh weight of sediment. The analysis of microbial community by 16S rRNA gene amplicon pyrosequencing showed that these actinobacterial sequences correspond to ∼0.6% of bacterial 16S rRNA gene sequences. Stable isotope probing using 13CO2 was performed with the lake sediment and showed labeling of these Actinobacteria. This indicated that they might be important autotrophs in this environment. Although these Actinobacteria are not dominant members of the sediment microbial community, they could be of functional significance due to their contribution to the regeneration of Fe(III), which has a critical role as an electron acceptor for anaerobic microorganisms mineralizing sediment organic matter. To the best of our knowledge this is the first study to show the autotrophic nitrate-dependent Fe(II)-oxidizing nature of TM3 group of uncultured Actinobacteria.", "title": "" }, { "docid": "cc6161fd350ac32537dc704cbfef2155", "text": "The contribution of cloud computing and mobile computing technologies lead to the newly emerging mobile cloud computing paradigm. Three major approaches have been proposed for mobile cloud applications: 1) extending the access to cloud services to mobile devices; 2) enabling mobile devices to work collaboratively as cloud resource providers; 3) augmenting the execution of mobile applications on portable devices using cloud resources. In this paper, we focus on the third approach in supporting mobile data stream applications. More specifically, we study how to optimize the computation partitioning of a data stream application between mobile and cloud to achieve maximum speed/throughput in processing the streaming data.\n To the best of our knowledge, it is the first work to study the partitioning problem for mobile data stream applications, where the optimization is placed on achieving high throughput of processing the streaming data rather than minimizing the makespan of executions as in other applications. We first propose a framework to provide runtime support for the dynamic computation partitioning and execution of the application. 
Different from existing works, the framework not only allows the dynamic partitioning for a single user but also supports the sharing of computation instances among multiple users in the cloud to achieve efficient utilization of the underlying cloud resources. Meanwhile, the framework has better scalability because it is designed on the elastic cloud fabrics. Based on the framework, we design a genetic algorithm for optimal computation partition. Both numerical evaluation and real world experiment have been performed, and the results show that the partitioned application can achieve at least two times better performance in terms of throughput than the application without partitioning.", "title": "" }, { "docid": "ed414502134a7423af6b54f17db72e8e", "text": "Chatbots have been used in different scenarios for getting people interested in CS for decades. However, their potential for teaching basic concepts and their engaging effect has not been measured. In this paper we present a software platform called Chatbot designed to foster engagement while teaching basic CS concepts such as variables, conditionals and finite state automata, among others. We carried out two experiences using Chatbot and the well known platform Alice: 1) an online nation-wide competition, and 2) an in-class 15-lesson pilot course in 2 high schools. Data shows that retention and girl interest are higher with Chatbot than with Alice, indicating student engagement.", "title": "" }, { "docid": "0b7eef6d7207fa6bacaab9f4b60a93ed", "text": "Male genital self-mutilation is extremely rare and may be associated with severe psychopathology. This study reports the case of a 26-year-old man who presented after incising his prepuce with a knife and placing a rubber band around his foreskin. A plastic ring was also found underneath the prepuce. Clinical examination revealed a lateral preputial laceration with gross preputial oedema and dark red discoloration with a clear demarcation where the elastic band had been placed. A pelvic X-ray revealed no other foreign bodies. Following 72 h of observation he developed signs of preputial necrosis, which prompted urgent circumcision; this revealed a healthy underlying glans. In conclusion, male genital self-harm requires urgent urological and psychiatric assessments to prevent surgical and psychiatric sequelae including necrotizing fasciitis and suicide.", "title": "" }, { "docid": "5739713d17ec5cc6952832644b2a1386", "text": "Group Support Systems (GSS) can improve the productivity of Group Work by offering a variety of tools to assist a virtual group across geographical distances. Experience shows that the value of a GSS depends on how purposefully and skillfully it is used. We present a framework for a universal GSS based on a thinkLet- and thinXel-based Group Process Modeling Language (GPML). Our framework approach uses the GPML to describe different kinds of group processes in an unambiguous and compact representation and to guide the participants automatically through these processes. We assume that a GSS based on this GPML can provide the following advantages: to support the user by designing and executing a collaboration process and to increase the applicability of GSSs for different kinds of group processes. 
We will present a prototype and use different kinds of group processes to illustrate the application of a GPML for a universal GSS.", "title": "" }, { "docid": "f9c37f460fc0a4e7af577ab2cbe7045b", "text": "Declines in various cognitive abilities, particularly executive control functions, are observed in older adults. An important goal of cognitive training is to slow or reverse these age-related declines. However, opinion is divided in the literature regarding whether cognitive training can engender transfer to a variety of cognitive skills in older adults. In the current study, the authors trained older adults in a real-time strategy video game for 23.5 hr in an effort to improve their executive functions. A battery of cognitive tasks, including tasks of executive control and visuospatial skills, were assessed before, during, and after video-game training. The trainees improved significantly in the measures of game performance. They also improved significantly more than the control participants in executive control functions, such as task switching, working memory, visual short-term memory, and reasoning. Individual differences in changes in game performance were correlated with improvements in task switching. The study has implications for the enhancement of executive control processes of older adults.", "title": "" }, { "docid": "8204e456dfb8d0f8dc39a20166df9798", "text": "A sketch combination system is introduced and tested: a crowd of 1047 participated in an iterative process of design, evaluation and combination. Specifically, participants in a crowdsourcing marketplace sketched chairs for children. One crowd created a first generation of chairs, and then successive crowds created new generations by combining the chairs made by previous crowds. Other participants evaluated the chairs. The crowd judged the chairs from the third generation more creative than those from the first generation. An analysis of the design evolution shows that participants inherited and modified presented features, and also added new features. These findings suggest that crowd based design processes may be effective, and point the way toward computer-human interactions that might further encourage crowd creativity.", "title": "" }, { "docid": "27464fdcd9a56975bf381773fd4da76d", "text": "Although evidence with respect to its prevalence is mixed, it is clear that fathers perpetrate a serious proportion of filicide. There also seems to be a consensus that paternal filicide has attracted less research attention than its maternal counterpart and is therefore less well understood. National registries are a very rich source of data, but they generally provide limited information about the perpetrator as psychiatric, psychological and behavioral data are often lacking. This paper presents a fully documented case of a paternal filicide. Noteworthy is that two motives were present: spousal revenge as well as altruism. The choice of the victim was in line with emerging evidence indicating that children with disabilities in general and with autism in particular are frequent victims of filicide-suicide. Finally, a schizoid personality disorder was diagnosed. Although research is quite scarce on that matter, some research outcomes have showed an association between schizoid personality disorder and homicide and violence.", "title": "" }, { "docid": "f935bdde9d4571f50e47e48f13bfc4b8", "text": "BACKGROUND\nThe incidence of microcephaly in Brazil in 2015 was 20 times higher than in previous years. 
Congenital microcephaly is associated with genetic factors and several causative agents. Epidemiological data suggest that microcephaly cases in Brazil might be associated with the introduction of Zika virus. We aimed to detect and sequence the Zika virus genome in amniotic fluid samples of two pregnant women in Brazil whose fetuses were diagnosed with microcephaly.\n\n\nMETHODS\nIn this case study, amniotic fluid samples from two pregnant women from the state of Paraíba in Brazil whose fetuses had been diagnosed with microcephaly were obtained, on the recommendation of the Brazilian health authorities, by ultrasound-guided transabdominal amniocentesis at 28 weeks' gestation. The women had presented at 18 weeks' and 10 weeks' gestation, respectively, with clinical manifestations that could have been symptoms of Zika virus infection, including fever, myalgia, and rash. After the amniotic fluid samples were centrifuged, DNA and RNA were extracted from the purified virus particles before the viral genome was identified by quantitative reverse transcription PCR and viral metagenomic next-generation sequencing. Phylogenetic reconstruction and investigation of recombination events were done by comparing the Brazilian Zika virus genome with sequences from other Zika strains and from flaviviruses that occur in similar regions in Brazil.\n\n\nFINDINGS\nWe detected the Zika virus genome in the amniotic fluid of both pregnant women. The virus was not detected in their urine or serum. Tests for dengue virus, chikungunya virus, Toxoplasma gondii, rubella virus, cytomegalovirus, herpes simplex virus, HIV, Treponema pallidum, and parvovirus B19 were all negative. After sequencing of the complete genome of the Brazilian Zika virus isolated from patient 1, phylogenetic analyses showed that the virus shares 97-100% of its genomic identity with lineages isolated during an outbreak in French Polynesia in 2013, and that in both envelope and NS5 genomic regions, it clustered with sequences from North and South America, southeast Asia, and the Pacific. After assessing the possibility of recombination events between the Zika virus and other flaviviruses, we ruled out the hypothesis that the Brazilian Zika virus genome is a recombinant strain with other mosquito-borne flaviviruses.\n\n\nINTERPRETATION\nThese findings strengthen the putative association between Zika virus and cases of microcephaly in neonates in Brazil. Moreover, our results suggest that the virus can cross the placental barrier. As a result, Zika virus should be considered as a potential infectious agent for human fetuses. Pathogenesis studies that confirm the tropism of Zika virus for neuronal cells are warranted.\n\n\nFUNDING\nConsellho Nacional de Desenvolvimento e Pesquisa (CNPq), Fundação de Amparo a Pesquisa do Estado do Rio de Janeiro (FAPERJ).", "title": "" }, { "docid": "0d9affda4d9f7089d76a492676ab3f9e", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR' s Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR' s Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission. 
The American Political Science Review is published by the American Political Science Association. Please contact the publisher for further permissions regarding the use of this work. Publisher contact information may be obtained at http://www.jstor.org/journals/apsa.html.", "title": "" }, { "docid": "193cb03ebb59935ea33d23daaebbfb74", "text": "We study semi-supervised learning when the data consists of multiple intersecting manifolds. We give a finite sample analysis to quantify the potential gain of using unlabeled data in this multi-manifold setting. We then propose a semi-supervised learning algorithm that separates different manifolds into decision sets, and performs supervised learning within each set. Our algorithm involves a novel application of Hellinger distance and size-constrained spectral clustering. Experiments demonstrate the benefit of our multi-manifold semi-supervised learning approach.", "title": "" }, { "docid": "3c8e85a977df74c2fd345db9934d4699", "text": "The abstract paragraph should be indented 1/2 inch (3 picas) on both left and right-hand margins. Use 10 point type, with a vertical spacing of 11 points. The word Abstract must be centered, bold, and in point size 12. Two line spaces precede the abstract. The abstract must be limited to one paragraph.", "title": "" }, { "docid": "4c995fce3bc3a0f9c8a06ebaec446ee7", "text": "We introduce a facial animation system that produces real-time animation sequences including speech synchronization and non-verbal speech-related facial expressions from plain text input. A state-of-the-art text-to-speech synthesis component performs linguistic analysis of the text input and creates a speech signal from phonetic and intonation information. The phonetic transcription is additionally used to drive a speech synchronization method for the physically based facial animation. Further high-level information from the linguistic analysis such as different types of accents or pauses as well as the type of the sentence is used to generate non-verbal speech-related facial expressions such as movement of head, eyes, and eyebrows or voluntary eye blinks. Moreover, emoticons are translated into XML markup that triggers emotional facial expressions.", "title": "" } ]
scidocsrr
9e71b693bc47ad63ec0cc765206f56b0
Accurate pedestrian localization in overhead depth images via Height-Augmented HOG
[ { "docid": "295c6a54db24bf28f5970e60e6bf5971", "text": "This thesis presents a learning based approach for detecting classes of objects and patterns with variable image appearance but highly predictable image boundaries. It consists of two parts. In part one, we introduce our object and pattern detection approach using a concrete human face detection example. The approach first builds a distribution-based model of the target pattern class in an appropriate feature space to describe the target's variable image appearance. It then learns from examples a similarity measure for matching new patterns against the distribution-based target model. The approach makes few assumptions about the target pattern class and should therefore be fairly general, as long as the target class has predictable image boundaries. Because our object and pattern detection approach is very much learning-based, how well a system eventually performs depends heavily on the quality of training examples it receives. The second part of this thesis looks at how one can select high quality examples for function approximation learning tasks. We propose an active learning formulation for function approximation, and show for three specific approximation function classes, that the active example selection strategy learns its target with fewer data samples than random sampling. We then simplify the original active learning formulation, and show how it leads to a tractable example selection paradigm, suitable for use in many object and pattern detection problems. Copyright (c) Massachusetts Institute of Technology, 1995. This report describes research done at the Artificial Intelligence Laboratory and within the Center for Biological and Computational Learning. This research is sponsored by grants from the Office of Naval Research under contracts N00014-91-J-1270 and N00014-92-J-1879; by a grant from the National Science Foundation under contract ASC-9217041. Support for the A.I. Laboratory's artificial intelligence research is provided by ONR contract N00014-91-J-4038. Learning and Example Selection for Object and Pattern Detection", "title": "" } ]
[ { "docid": "d02f628b11798261d6a6f4a04a0252a9", "text": "This paper presents investigations of voltage-sharing stabilization with the use of passive RLC circuit in switch-mode flying capacitor DC-DC converters. Practical and simulation results and also a mathematical analysis of the balancing process in boost and buck-boost converters are presented. Analyzed converters use additional capacitors (flying capacitors), charged to proper value, for decreasing the voltage on switches and increasing the inductor-current frequency. Such advantages are achieved under proper voltage sharing on the flying capacitors. The voltages are stabilized in a natural way by the load current and with the use of external RLC circuit to force the current that flows through the converters' capacitors under unbalance state. This paper focuses on the analysis of the balancing phenomenon with the use of the external RLC circuit in these topologies. The balancing booster improves the balancing process in these converters, making it independent of the load. It can also reduce oscillations that arise in the converters in transient states.", "title": "" }, { "docid": "443625fc92f8a2cf29000ce8a13b3182", "text": "A visual display of stripes was used to examine cardiovagal response to motion sickness. Heart rate variability (HRV) was investigated using dynamic methods to discern instantaneous fluctuations in reaction to stimulus and perception-based events. A novel point process adaptive recursive algorithm was applied to the R-R series to compute instantaneous heart rate, HRV, and high frequency (HF) power as a marker of vagal activity. Results show interesting dynamic trends in each of the considered subjects. HF power averaged across ten subjects indicates a significant decrease 20s to 60s following the transition from “no nausea” to “mild.” Conversely, right before “strong” nausea, the group average shows a transient trending increase in HF power. Findings confirm gradual sympathetic activation with increasing nausea, and further evidence transitory increases in vagal tone before flushes of strong nausea.", "title": "" }, { "docid": "843968fe4adff16e160c75105505db66", "text": "As user-generated Web content increases, the amount of inappropriate and/or objectionable content also grows. Several scholarly communities are addressing how to detect and manage such content: research in computer vision focuses on detection of inappropriate images, natural language processing technology has advanced to recognize insults. However, profanity detection systems remain flawed. Current list-based profanity detection systems have two limitations. First, they are easy to circumvent and easily become stale - that is, they cannot adapt to misspellings, abbreviations, and the fast pace of profane slang evolution. Secondly, they offer a one-size fits all solution; they typically do not accommodate domain, community and context specific needs. However, social settings have their own normative behaviors - what is deemed acceptable in one community may not be in another. In this paper, through analysis of comments from a social news site, we provide evidence that current systems are performing poorly and evaluate the cases on which they fail. 
We then address community differences regarding creation/tolerance of profanity and suggest a shift to more contextually nuanced profanity detection systems.", "title": "" }, { "docid": "89d05b1f40431af3cc6e2a8e71880e6f", "text": "Many test series have been developed to assess dog temperament and aggressive behavior, but most of them have been criticized for their relatively low predictive validity or being too long, stressful, and/or problematic to carry out. We aimed to develop a short and effective series of tests that corresponds with (a) the dog's bite history, and (b) owner evaluation of the dog's aggressive tendencies. Seventy-three pet dogs were divided into three groups by their biting history; non-biter, bit once, and multiple biter. All dogs were exposed to a short test series modeling five real-life situations: friendly greeting, take away bone, threatening approach, tug-of-war, and roll over. We found strong correlations between the in-test behavior and owner reports of dogs' aggressive tendencies towards strangers; however, the test results did not mirror the reported owner-directed aggressive tendencies. Three test situations (friendly greeting, take-away bone, threatening approach) proved to be effective in evoking specific behavioral differences according to dog biting history. Non-biters differed from biters, and there were also specific differences related to aggression and fear between the two biter groups. When a subsample of dogs was retested, the test revealed consistent results over time. We suggest that our test is adequate for a quick, general assessment of human-directed aggression in dogs, particularly to evaluate their tendency for aggressive behaviors towards strangers. Identifying important behavioral indicators of aggressive tendencies, this test can serve as a useful tool to study the genetic or neural correlates of human-directed aggression in dogs.", "title": "" }, { "docid": "db7919b45d36456b612683ee4ff2585d", "text": "A plain well-trained deep learning model often does not have the ability to learn new knowledge without forgetting the previously learned knowledge, which is known as catastrophic forgetting. Here we propose a novel method, SupportNet, to efficiently and effectively solve the catastrophic forgetting problem in the class incremental learning scenario. SupportNet combines the strength of deep learning and support vector machine (SVM), where SVM is used to identify the support data from the old data, which are fed to the deep learning model together with the new data for further training so that the model can review the essential information of the old data when learning the new information. Two powerful consolidation regularizers are applied to stabilize the learned representation and ensure the robustness of the learned model. We validate our method with comprehensive experiments on various tasks, which show that SupportNet drastically outperforms the state-of-theart incremental learning methods and even reaches similar performance as the deep learning model trained from scratch on both old and new data. Our program is accessible at: https://github.com/lykaust15/SupportNet", "title": "" }, { "docid": "73ddacb2ed1eaa8670c777959cda0260", "text": "The current study examined normative beliefs about aggression as a mediator between narcissistic exploitativeness and cyberbullying using two Asian adolescent samples from Singapore and Malaysia. 
Narcissistic exploitativeness was significantly and positively associated with cyberbullying and normative beliefs about aggression and normative beliefs about aggression were significantly and positively associated with cyberbullying. Normative beliefs about aggression were a significant partial mediator in both samples; these beliefs about aggression served as one possible mechanism of action by which narcissistic exploitativeness could exert its influence on cyberbullying. Findings extended previous empirical research by showing that such beliefs can be the mechanism of action not only in offline but also in online contexts and across cultures. Cyberbullying prevention and intervention efforts should include modification of norms and beliefs supportive of the legitimacy and acceptability of cyberbullying.", "title": "" }, { "docid": "fda40e94b771e6ac4d0390236fd4eb56", "text": "How does users’ freedom of choice, or the lack thereof, affect interface preferences? The research reported in this article approaches this question from two theoretical perspectives. The first of these argues that an interface with a dominant market share benefits from the absence of competition because users acquire skills that are specific to that particular interface, which in turn reduces the probability that they will switch to a new competitor interface in the future. By contrast, the second perspective proposes that the advantage that a market leader has in being able to install a set of non-transferable skills in its user base is offset by a psychological force that causes humans to react against perceived constraints on their freedom of choice. We test a research model that incorporates the key predictions of these two theoretical perspectives in an experiment involving consequential interface choices. We find strong support for the second perspective, which builds upon the theory of psychological reactance.", "title": "" }, { "docid": "86846cd0bc21747e651191a170ad6af7", "text": "Recent advances in deep learning have enabled researchers across many disciplines to uncover new insights about large datasets. Deep neural networks have shown applicability to image, time-series, textual, and other data, all of which are available in a plethora of research fields. However, their computational complexity and large memory overhead requires advanced software and hardware technologies to train neural networks in a reasonable amount of time. To make this possible, there has been an influx in development of deep learning software that aim to leverage advanced hardware resources. In order to better understand the performance implications of deep learning frameworks over these different resources, we analyze the performance of three different frameworks, Caffe, TensorFlow, and Apache SINGA, over several hardware environments. This includes scaling up and out with single-and multi-node setups using different CPU and GPU technologies. Notably, we investigate the performance characteristics of NVIDIA's state-of-the-art hardware technology, NVLink, and also Intel's Knights Landing, the most advanced Intel product for deep learning, with respect to training time and utilization. To our best knowledge, this is the first work concerning deep learning bench-marking with NVLink and Knights Landing. Through these experiments, we provide analysis of the frameworks' performance over different hardware environments in terms of speed and scaling. 
As a result of this work, better insight is given towards both using and developing deep learning tools that cater to current and upcoming hardware technologies.", "title": "" }, { "docid": "f927b88e140c710f77f45d3f5e35904f", "text": "Prosthetic components and control interfaces for upper limb amputees have barely changed in the past 40 years. Many transradial prostheses have been developed in the past, nonetheless most of them would be inappropriate if/when a large-bandwidth human-machine interface for control and perception became available, due to either their limited (or nonexistent) sensorization or limited dexterity. SmartHand tackles this issue, as it is meant to be clinically tested in amputees employing different neuro-interfaces, in order to investigate their effectiveness. This paper presents the design and on-bench evaluation of the SmartHand. The SmartHand design was bio-inspired in terms of its physical appearance, kinematics, sensorization, and its multilevel control system. Underactuated fingers and differential mechanisms were designed and exploited in order to fit all mechatronic components in the size and weight of a natural human hand. Its sensory system was designed with the aim of delivering significant afferent information to the user through adequate interfaces. SmartHand is a five-fingered self-contained robotic hand, with 16 degrees of freedom, actuated by 4 motors. It integrates a bio-inspired sensory system composed of 40 proprioceptive and exteroceptive sensors and a customized embedded controller, both employed for implementing automatic grasp control and for potentially delivering sensory feedback to the amputee. It is able to perform everyday grasps, count, and independently point the index finger. The weight (530 g) and speed (closing time: 1.5 seconds) are comparable to actual commercial prostheses. It is able to lift a 10 kg suitcase; slippage tests showed that within particular friction and geometric conditions the hand is able to stably grasp up to 3.6 kg cylindrical objects. Due to its unique embedded features and human size, the SmartHand holds the promise of being experimentally fitted on transradial amputees and employed as a bi-directional instrument for investigating, during realistic experiments, different interfaces, control and feedback strategies in neuro-engineering studies.", "title": "" }, { "docid": "2fc645ec4f9fe757be65f3f02b803b50", "text": "Multicast communication plays a crucial role in Mobile Adhoc Networks (MANETs). MANETs provide low-cost, self-configuring devices for multimedia data communication in military battlefield scenarios, disaster and public safety networks (PSN). Multicast communication improves the network performance in terms of bandwidth consumption, battery power and routing overhead as compared to unicast for the same volume of data communication. In the recent past, a number of multicast routing protocols (MRPs) have been proposed that tried to resolve issues and challenges in MRP. Multicast-based group communication demands dynamic construction of efficient and reliable routes for multimedia data communication under high node mobility, contention, routing and channel overhead. This paper gives an insight into the merits and demerits of the currently known research techniques and provides a better environment to make reliable MRPs. It presents an ample study of various Quality of Service (QoS) techniques and existing enhancements in mesh-based MRPs.
Mesh topology based MRPs are classified according to their enhancement in routing mechanism and QoS modification on On-Demand Multicast Routing Protocol (ODMRP) protocol to improve performance metrics. This paper covers the most recent, robust and reliable QoS and Mesh based MRPs, classified based on their operational features, with their advantages and limitations, and provides comparison of their performance parameters.", "title": "" }, { "docid": "dcbec6eea7b3157285298f303eb78840", "text": "Osteochondral tissue engineering has shown an increasing development to provide suitable strategies for the regeneration of damaged cartilage and underlying subchondral bone tissue. For reasons of the limitation in the capacity of articular cartilage to self-repair, it is essential to develop approaches based on suitable scaffolds made of appropriate engineered biomaterials. The combination of biodegradable polymers and bioactive ceramics in a variety of composite structures is promising in this area, whereby the fabrication methods, associated cells and signalling factors determine the success of the strategies. The objective of this review is to present and discuss approaches being proposed in osteochondral tissue engineering, which are focused on the application of various materials forming bilayered composite scaffolds, including polymers and ceramics, discussing the variety of scaffold designs and fabrication methods being developed. Additionally, cell sources and biological protein incorporation methods are discussed, addressing their interaction with scaffolds and highlighting the potential for creating a new generation of bilayered composite scaffolds that can mimic the native interfacial tissue properties, and are able to adapt to the biological environment.", "title": "" }, { "docid": "432149654abdfdabb9147a830f50196d", "text": "In this paper, an advanced High Voltage (HV) IGBT technology, which is focused on low loss and is the ultimate device concept for HV IGBT, is presented. CSTBTTM technology utilizing “ULSI technology” and “Light Punch-Through (LPT) II technology” (i.e. narrow Wide Cell Pitch LPT(II)-CSTBT(III)) for the first time demonstrates breaking through the limitation of HV IGBT's characteristics with voltage ratings ranging from 2500 V up to 6500 V. The improved significant trade-off characteristic between on-state voltage (VCE(sat)) and turn-off loss (EOFF) is achieved by means of a “narrow Wide Cell Pitch CSTBT(III) cell”. In addition, this device achieves a wide operating junction temperature (@218 ∼ 448K) and excellent short circuit behavior with the new cell and vertical designs. The LPT(II) concept is utilized for ensuring controllable IGBT characteristics and achieving a thin N− drift layer. Our results cover design of the Wide Cell Pitch LPT(II)-CSTBT(III) technology and demonstrate high total performance with a great improvement potential.", "title": "" }, { "docid": "dfbb168e1d98b3348ac3e0750d824fc3", "text": "The goal of Multiple Kernel Learning (MKL) is to combine kernels derived from multiple sources in a data-driven way with the aim to enhance the accuracy of a target kernel machine. State-of-the-art methods of MKL have the drawback that the time required to solve the associated optimization problem grows (typically more than linearly) with the number of kernels to combine. Moreover, it has been empirically observed that even sophisticated methods often do not significantly outperform the simple average of kernels. 
In this paper, we propose a time and space efficient MKL algorithm that can easily cope with hundreds of thousands of kernels and more. The proposed method has been compared with other baselines (random, average, etc.) and three state-of-the-art MKL methods showing that our approach is often superior. We show empirically that the advantage of using the method proposed in this paper is even clearer when noise features are added. Finally, we have analyzed how our algorithm changes its performance with respect to the number of examples in the training set and the number of kernels combined. & 2015 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "15570847902d1d6c1799b8e52f080492", "text": "Virtualization is a rapidly evolving technology that can be used to provide a range of benefits to computing systems, including improved resource utilization, software portability, and reliability. Virtualization also has the potential to enhance security by providing isolated execution environments for different applications that require different levels of security. For security-critical applications, it is highly desirable to have a small trusted computing base (TCB), since it minimizes the surface of attacks that could jeopardize the security of the entire system. In traditional virtualization architectures, the TCB for an application includes not only the hardware and the virtual machine monitor (VMM), but also the whole management operating system (OS) that contains the device drivers and virtual machine (VM) management functionality. For many applications, it is not acceptable to trust this management OS, due to its large code base and abundance of vulnerabilities. For example, consider the \"computing-as-a-service” scenario where remote users execute a guest OS and applications inside a VM on a remote computing platform. It would be preferable for many users to utilize such a computing service without being forced to trust the management OS on the remote platform. In this paper, we address the problem of providing a secure execution environment on a virtualized computing platform under the assumption of an untrusted management OS. We propose a secure virtualization architecture that provides a secure runtime environment, network interface, and secondary storage for a guest VM. The proposed architecture significantly reduces the TCB of security-critical guest VMs, leading to improved security in an untrusted management environment. We have implemented a prototype of the proposed approach using the Xen virtualization system, and demonstrated how it can be used to facilitate secure remote computing services. We evaluate the performance penalties incurred by the proposed architecture, and demonstrate that the penalties are minimal.", "title": "" }, { "docid": "3e974f6838a652cf19e4dac68b119286", "text": "Interrupted time series (ITS) analysis is a valuable study design for evaluating the effectiveness of population-level health interventions that have been implemented at a clearly defined point in time. It is increasingly being used to evaluate the effectiveness of interventions ranging from clinical therapy to national public health legislation. Whereas the design shares many properties of regression-based approaches in other epidemiological studies, there are a range of unique features of time series data that require additional methodological considerations. In this tutorial we use a worked example to demonstrate a robust approach to ITS analysis using segmented regression. 
We begin by describing the design and considering when ITS is an appropriate design choice. We then discuss the essential, yet often omitted, step of proposing the impact model a priori. Subsequently, we demonstrate the approach to statistical analysis including the main segmented regression model. Finally we describe the main methodological issues associated with ITS analysis: over-dispersion of time series data, autocorrelation, adjusting for seasonal trends and controlling for time-varying confounders, and we also outline some of the more complex design adaptations that can be used to strengthen the basic ITS design.", "title": "" }, { "docid": "637d700bcb162dff3e6342cab1bc0f85", "text": "This paper introduces a factorization-based approach that efficiently segments textured images. We use local spectral histograms as features, and construct an M × N feature matrix using M-dimensional feature vectors in an N-pixel image. Based on the observation that each feature can be approximated by a linear combination of several representative features, we factor the feature matrix into two matrices-one consisting of the representative features and the other containing the weights of representative features at each pixel used for linear combination. The factorization method is based on singular value decomposition and nonnegative matrix factorization. The method uses local spectral histograms to discriminate region appearances in a computationally efficient way and at the same time accurately localizes region boundaries. The experiments conducted on public segmentation data sets show the promise of this simple yet powerful approach.", "title": "" }, { "docid": "457c0f8f7cde71eb65e83f4d859e6c94", "text": "In cloud computing environment the new data management model is in use now a days that enables data integration and access on a large scale cloud computing as a service termed as Database-as-a-service (DAAS). Through which service provider offers customer management functionalities as well as the expensive hardware. Data privacy is the major security determinant in DAAS because data will be shared with a third party; an un-trusted server is dangerous and unsafe for the user. This paper shows a concern on the security element in cloud environment. It suggests a technique to enhance the security of cloud database. This technique provides the flexible multilevel and hybrid security. It uses RSA, Triple DES and Random Number generator algorithms as an encrypting tool.", "title": "" }, { "docid": "a5f926bc15c7b3dd75b3e67c8537c3fb", "text": "Practical and theoretical issues are presented concerning the design, implementation, and use of a good, minimal standard random number generator that will port to virtually all systems.", "title": "" }, { "docid": "8b0870c8e975eeff8597eb342cd4f3f9", "text": "We propose a novel recursive partitioning method for identifying subgroups of subjects with enhanced treatment effects based on a differential effect search algorithm. The idea is to build a collection of subgroups by recursively partitioning a database into two subgroups at each parent group, such that the treatment effect within one of the two subgroups is maximized compared with the other subgroup. The process of data splitting continues until a predefined stopping condition has been satisfied. The method is similar to 'interaction tree' approaches that allow incorporation of a treatment-by-split interaction in the splitting criterion. 
However, unlike other tree-based methods, this method searches only within specific regions of the covariate space and generates multiple subgroups of potential interest. We develop this method and provide guidance on key topics of interest that include generating multiple promising subgroups using different splitting criteria, choosing optimal values of complexity parameters via cross-validation, and addressing Type I error rate inflation inherent in data mining applications using a resampling-based method. We evaluate the operating characteristics of the procedure using a simulation study and illustrate the method with a clinical trial example.", "title": "" }, { "docid": "127bbcb3df6c43c2d791929426e5e087", "text": "Uplift modeling is a classification method that determines the incremental impact of an action on a given population. Uplift modeling aims at maximizing the area under the uplift curve, which is the difference between the subject and control sets’ area under the lift curve. Lift and uplift curves are seldom used outside of the marketing domain, whereas the related ROC curve is frequently used in multiple areas. Achieving a good uplift using an ROC-based model instead of lift may be more intuitive in several areas, and may help uplift modeling reach a wider audience. We alter SAYL, an uplift-modeling statistical relational learner, to use ROC instead of lift. We test our approach on a screening mammography dataset. SAYL-ROC outperforms SAYL on our data, though not significantly, suggesting that ROC can be used for uplift modeling. On the other hand, SAYL-ROC returns larger models, reducing interpretability.", "title": "" } ]
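[Editorial aside, not part of the dataset record above.] The final passage in the preceding negative_passages list defines the area under the uplift curve as the difference between the treated ("subject") and control groups' areas under their lift curves. The sketch below is one minimal way to compute that quantity; it assumes NumPy, a cumulative-gains style lift curve, and binary outcomes — the passage itself does not fix an exact formula, so treat this as an illustration rather than the paper's implementation.

```python
import numpy as np

def area_under_lift(scores, outcomes):
    """Area under a cumulative-gains style lift curve.

    scores   : model scores used to rank subjects (higher = targeted first)
    outcomes : binary response indicator per subject
    """
    scores = np.asarray(scores, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    order = np.argsort(-scores)                           # rank by descending score
    gains = np.cumsum(outcomes[order]) / max(outcomes.sum(), 1.0)
    frac_targeted = np.arange(1, len(scores) + 1) / len(scores)
    return np.trapz(gains, frac_targeted)                 # integrate gains vs. fraction targeted

def area_under_uplift(scores_treated, y_treated, scores_control, y_control):
    # Uplift area as described in the passage: treated lift area minus control lift area.
    return (area_under_lift(scores_treated, y_treated)
            - area_under_lift(scores_control, y_control))
```

Ranking both groups with the same uplift model and subtracting the two areas reproduces the evaluation criterion the passage optimizes; variants that reweight by group size exist but are not shown here.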
scidocsrr
be505da8cc855003c3a8554edd612e9d
Secure set intersection cardinality with application to association rule mining
[ { "docid": "60c1963a6d8f4f84d2bdc09d9f6f8e23", "text": "This paper studies how to build a decision tree classifier under the following scenario: a database is vertically partitioned into two pieces, with one piece owned by Alice and the other piece owned by Bob. Alice and Bob want to build a decision tree classifier based on such a database, but due to the privacy constraints, neither of them wants to disclose their private pieces to the other party or to any third party. We present a protocol that allows Alice and Bob to conduct such a classifier building without having to compromise their privacy. Our protocol uses an untrusted third-party server, and is built upon a useful building block, the scalar product protocol. Our solution to the scalar product protocol is more efficient than any existing solutions.", "title": "" } ]
[ { "docid": "e11a1e3ef5093aa77797463b7b8994ea", "text": "Video understanding of robot-assisted surgery (RAS) videos is an active research area. Modeling the gestures and skill level of surgeons presents an interesting problem. The insights drawn may be applied in effective skill acquisition, objective skill assessment, real-time feedback, and human–robot collaborative surgeries. We propose a solution to the tool detection and localization open problem in RAS video understanding, using a strictly computer vision approach and the recent advances of deep learning. We propose an architecture using multimodal convolutional neural networks for fast detection and localization of tools in RAS videos. To the best of our knowledge, this approach will be the first to incorporate deep neural networks for tool detection and localization in RAS videos. Our architecture applies a region proposal network (RPN) and a multimodal two stream convolutional network for object detection to jointly predict objectness and localization on a fusion of image and temporal motion cues. Our results with an average precision of 91% and a mean computation time of 0.1 s per test frame detection indicate that our study is superior to conventionally used methods for medical imaging while also emphasizing the benefits of using RPN for precision and efficiency. We also introduce a new data set, ATLAS Dione, for RAS video understanding. Our data set provides video data of ten surgeons from Roswell Park Cancer Institute, Buffalo, NY, USA, performing six different surgical tasks on the daVinci Surgical System (dVSS) with annotations of robotic tools per frame.", "title": "" }, { "docid": "89609e5b7f1d78851d0bc068e6d57de9", "text": "BACKGROUND\nDiffusion weighted imaging (DWI) is a non-invasive method for investigating the brain white matter structure and can be used to evaluate fiber bundles. However, due to practical constraints, DWI data acquired in clinics are low resolution.\n\n\nNEW METHOD\nThis paper proposes a method for interpolation of orientation distribution functions (ODFs). To this end, fuzzy clustering is applied to segment ODFs based on the principal diffusion directions (PDDs). Next, a cluster is modeled by a tensor so that an ODF is represented by a mixture of tensors. For interpolation, each tensor is rotated separately.\n\n\nRESULTS\nThe method is applied on the synthetic and real DWI data of control and epileptic subjects. Both experiments illustrate capability of the method in increasing spatial resolution of the data in the ODF field properly. The real dataset show that the method is capable of reliable identification of differences between temporal lobe epilepsy (TLE) patients and normal subjects.\n\n\nCOMPARISON WITH EXISTING METHODS\nThe method is compared to existing methods. Comparison studies show that the proposed method generates smaller angular errors relative to the existing methods. Another advantage of the method is that it does not require an iterative algorithm to find the tensors.\n\n\nCONCLUSIONS\nThe proposed method is appropriate for increasing resolution in the ODF field and can be applied to clinical data to improve evaluation of white matter fibers in the brain.", "title": "" }, { "docid": "70becc434885af8f59ad39a3cedc8b6d", "text": "The trajectory of the heel and toe during the swing phase of human gait were analyzed on young adults. The magnitude and variability of minimum toe clearance and heel-contact velocity were documented on 10 repeat walking trials on 11 subjects. 
The energetics that controlled step length resulted from a separate study of 55 walking trials conducted on subjects walking at slow, natural, and fast cadences. A sensitivity analysis of the toe clearance and heel-contact velocity measures revealed the individual changes at each joint in the link-segment chain that could be responsible for changes in those measures. Toe clearance was very small (1.29 cm) and had low variability (about 4 mm). Heel-contact velocity was negligible vertically and small (0.87 m/s) horizontally. Six joints in the link-segment chain could, with very small changes (+/- 0.86 degrees - +/- 3.3 degrees), independently account for toe clearance variability. Only one muscle group in the chain (swing-phase hamstring muscles) could be responsible for altering the heel-contact velocity prior to heel contact. Four mechanical power phases in gait (ankle push-off, hip pull-off, knee extensor eccentric power at push-off, and knee flexor eccentric power prior to heel contact) could alter step length and cadence. These analyses demonstrate that the safe trajectory of the foot during swing is a precise endpoint control task that is under the multisegment motor control of both the stance and swing limbs.", "title": "" }, { "docid": "6d8156b2952cc83701b06c24c2e7b162", "text": "Even when working on a well-modularized software system, programmers tend to spend more time navigating the code than working with it. This phenomenon arises because it is impossible to modularize the code for all tasks that occur over the lifetime of a system. We describe the use of a degree-of-interest (DOI) model to capture the task context of program elements scattered across a code base. The Mylar tool that we built encodes the DOI of program elements by monitoring the programmer's activity, and displays the encoded DOI model in views of Java and AspectJ programs. We also present the results of a preliminary diary study in which professional programmers used Mylar for their daily work on enterprise-scale Java systems.", "title": "" }, { "docid": "d118a5d9904a88ffd84a7f7c08970343", "text": "We present FingOrbits, a wearable interaction technique using synchronized thumb movements. A thumb-mounted ring with an inertial measurement unit and a contact microphone are used to capture thumb movements when rubbing against the other fingers. Spectral information of the movements are extracted and fed into a classification backend that facilitates gesture discrimination. FingOrbits enables up to 12 different gestures through detecting three rates of movement against each of the four fingers. Through a user study with 10 participants (7 novices, 3 experts), we demonstrate that FingOrbits can distinguish up to 12 thumb gestures with an accuracy of 89% to 99% rendering the approach applicable for practical applications.", "title": "" }, { "docid": "54cef03846f090678efd5b67d3cb5b17", "text": "This paper based on the speed control of induction motor (IM) using proportional integral controller (PI controller) and proportional integral derivative controller (PID controller) with the use of vector control technique. The conventional PID controller is compared with the conventional PI controller for full load condition. 
MATLAB simulation is carried out and results are investigated for speed control of Induction Motor without any controller, with PI controller and with PID controller on full load condition.", "title": "" }, { "docid": "a39834162b2072c69b03745cfdbe2f1a", "text": "AI has seen great advances of many kinds recently, but there is one critical area where progress has been extremely slow: ordinary commonsense.", "title": "" }, { "docid": "28899946726bc1e665298f09ea9e654d", "text": "This paper presents a simple and robust mechanism, called change-point monitoring (CPM), to detect denial of service (DoS) attacks. The core of CPM is based on the inherent network protocol behavior and is an instance of the sequential change point detection. To make the detection mechanism insensitive to sites and traffic patterns, a nonparametric cumulative sum (CUSUM) method is applied, thus making the detection mechanism robust, more generally applicable, and its deployment much easier. CPM does not require per-flow state information and only introduces a few variables to record the protocol behaviors. The statelessness and low computation overhead of CPM make itself immune to any flooding attacks. As a case study, the efficacy of CPM is evaluated by detecting a SYN flooding attack - the most common DoS attack. The evaluation results show that CPM has short detection latency and high detection accuracy", "title": "" }, { "docid": "9018c146d532071e7953cdc79d8ba2c0", "text": "The development of intrusion detection systems (IDS) that are adapted to allow routers and network defence systems to detect malicious network traffic disguised as network protocols or normal access is a critical challenge. This paper proposes a novel approach called SCDNN, which combines spectral clustering (SC) and deep neural network (DNN) algorithms. First, the dataset is divided into k subsets based on sample similarity using cluster centres, as in SC. Next, the distance between data points in a testing set and the training set is measured based on similarity features and is fed into the deep neural network algorithm for intrusion detection. Six KDD-Cup99 and NSL-KDD datasets and a sensor network dataset were employed to test the performance of the model. These experimental results indicate that the SCDNN classifier not only performs better than backpropagation neural network (BPNN), support vector machine (SVM), random forest (RF) and Bayes tree models in detection accuracy and the types of abnormal attacks found. It also provides an effective tool of study and analysis of intrusion detection in large networks.", "title": "" }, { "docid": "b8819099c285b531de22ddc03971f130", "text": "About 14% of the global burden of disease has been attributed to neuropsychiatric disorders, mostly due to the chronically disabling nature of depression and other common mental disorders, alcohol-use and substance-use disorders, and psychoses. Such estimates have drawn attention to the importance of mental disorders for public health. However, because they stress the separate contributions of mental and physical disorders to disability and mortality, they might have entrenched the alienation of mental health from mainstream efforts to improve health and reduce poverty. The burden of mental disorders is likely to have been underestimated because of inadequate appreciation of the connectedness between mental illness and other health conditions. Because these interactions are protean, there can be no health without mental health. 
Mental disorders increase risk for communicable and non-communicable diseases, and contribute to unintentional and intentional injury. Conversely, many health conditions increase the risk for mental disorder, and comorbidity complicates help-seeking, diagnosis, and treatment, and influences prognosis. Health services are not provided equitably to people with mental disorders, and the quality of care for both mental and physical health conditions for these people could be improved. We need to develop and evaluate psychosocial interventions that can be integrated into management of communicable and non-communicable diseases. Health-care systems should be strengthened to improve delivery of mental health care, by focusing on existing programmes and activities, such as those which address the prevention and treatment of HIV, tuberculosis, and malaria; gender-based violence; antenatal care; integrated management of childhood illnesses and child nutrition; and innovative management of chronic disease. An explicit mental health budget might need to be allocated for such activities. Mental health affects progress towards the achievement of several Millennium Development Goals, such as promotion of gender equality and empowerment of women, reduction of child mortality, improvement of maternal health, and reversal of the spread of HIV/AIDS. Mental health awareness needs to be integrated into all aspects of health and social policy, health-system planning, and delivery of primary and secondary general health care.", "title": "" }, { "docid": "26e81c8256df36fb02bffe2b17140d3a", "text": "BACKGROUND\nBruton's tyrosine kinase (BTK) is a mediator of the B-cell-receptor signaling pathway implicated in the pathogenesis of B-cell cancers. In a phase 1 study, ibrutinib, a BTK inhibitor, showed antitumor activity in several types of non-Hodgkin's lymphoma, including mantle-cell lymphoma.\n\n\nMETHODS\nIn this phase 2 study, we investigated oral ibrutinib, at a daily dose of 560 mg, in 111 patients with relapsed or refractory mantle-cell lymphoma. Patients were enrolled into two groups: those who had previously received at least 2 cycles of bortezomib therapy and those who had received less than 2 complete cycles of bortezomib or had received no prior bortezomib therapy. The primary end point was the overall response rate. Secondary end points were duration of response, progression-free survival, overall survival, and safety.\n\n\nRESULTS\nThe median age was 68 years, and 86% of patients had intermediate-risk or high-risk mantle-cell lymphoma according to clinical prognostic factors. Patients had received a median of three prior therapies. The most common treatment-related adverse events were mild or moderate diarrhea, fatigue, and nausea. Grade 3 or higher hematologic events were infrequent and included neutropenia (in 16% of patients), thrombocytopenia (in 11%), and anemia (in 10%). A response rate of 68% (75 patients) was observed, with a complete response rate of 21% and a partial response rate of 47%; prior treatment with bortezomib had no effect on the response rate. With an estimated median follow-up of 15.3 months, the estimated median response duration was 17.5 months (95% confidence interval [CI], 15.8 to not reached), the estimated median progression-free survival was 13.9 months (95% CI, 7.0 to not reached), and the median overall survival was not reached. 
The estimated rate of overall survival was 58% at 18 months.\n\n\nCONCLUSIONS\nIbrutinib shows durable single-agent efficacy in relapsed or refractory mantle-cell lymphoma. (Funded by Pharmacyclics and others; ClinicalTrials.gov number, NCT01236391.)", "title": "" }, { "docid": "cf867b46e3a2b1f5938e4ceb4d6c6f74", "text": "Over the past two decades a relatively large number of studies have investigated the functional neuroanatomy of posttraumatic stress disorder (PTSD). However, findings are often inconsistent, thus challenging traditional neurocircuitry models of PTSD. As evidence mounts that cognition and behavior is an emergent property of interacting brain networks, the question arises whether PTSD can be understood by examining dysfunction in large-scale, spatially distributed neural networks. We used the activation likelihood estimation quantitative meta-analytic technique to synthesize findings across functional neuroimaging studies of PTSD that either used a non-trauma (N=20) or trauma-exposed (N=19) comparison control group. In line with neurocircuitry models, our findings support hyperactive amygdala and hypoactive medial prefrontal regions, but suggest hyperactive hippocampi. Characterization of additional regions under a triple network model showed functional alterations that largely overlapped with the salience network, central executive network, and default network. However, heterogeneity was observed within and across the neurocircuitry and triple network models, and between results based on comparisons to non-trauma and trauma-exposed control groups. Nonetheless, these results warrant further exploration of the neurocircuitry and large-scale network models in PTSD using connectivity analyses.", "title": "" }, { "docid": "7a2c19e94d07afbfe81c7875aed1ff23", "text": "We combine linear discriminant analysis (LDA) and K-means clustering into a coherent framework to adaptively select the most discriminative subspace. We use K-means clustering to generate class labels and use LDA to do subspace selection. The clustering process is thus integrated with the subspace selection process and the data are then simultaneously clustered while the feature subspaces are selected. We show the rich structure of the general LDA-Km framework by examining its variants and their relationships to earlier approaches. Relations among PCA, LDA, K-means are clarified. Extensive experimental results on real-world datasets show the effectiveness of our approach.", "title": "" }, { "docid": "01567456e5990d328fdd0ccd05cec2f1", "text": "Recently, a new syndrome, namely the \"Autoimmune/inflammatory syndrome induced by adjuvants\" (ASIA) has been defined. In this syndrome different conditions characterized by common signs and symptoms and induced by the presence of an adjuvant are included. The adjuvant is a substance capable of boosting the immune response and of acting as a trigger in the development of autoimmune diseases. Post-vaccination autoimmune phenomena represent a major issue of ASIA. Indeed, despite vaccines represent a mainstay in the improvement of human health, several of these have been implicated as a potential trigger for autoimmune diseases. Sjogren's Syndrome (SjS) is a systemic chronic autoimmune inflammatory disease characterized by the presence of an inflammatory involvement of exocrine glands accompanied by systemic manifestations. 
Own to the straight association between infectious agents exposure (mainly viruses) and sicca syndrome development, the possible link between vaccine and SjS is not surprising. Indeed, a few cases of SjS following vaccine delivery have been reported. At the same extent, the induction of SjS following silicone exposure has been described too. Thus, the aim of this review was to focus on SjS and its possible development following vaccine or silicone exposure in order to define another possible facet of the ASIA syndrome.", "title": "" }, { "docid": "323eec69e6cd558ade788070cff58452", "text": "OBJECTIVE\nTo report clinical signs, diagnostic and surgical or necropsy findings, and outcome in 2 calves with spinal epidural abscess (SEA).\n\n\nSTUDY DESIGN\nClinical report.\n\n\nANIMALS\nCalves (n=2).\n\n\nMETHODS\nCalves had neurologic examination, analysis and antimicrobial culture of cerebrospinal fluid (CSF), vertebral column radiographs, myelography, and in 1 calf, magnetic resonance imaging (MRI). A definitive diagnosis of SEA was confirmed by necropsy in 1 calf and during surgery and histologic examination of vertebral canal tissue in 1 calf.\n\n\nRESULTS\nClinical signs were difficulty in rising, ataxia, fever, apparent spinal pain, hypoesthesia, and paresis/plegia which appeared 15 days before admission. Calf 1 had pelvic limb weakness and difficulty standing and calf 2 had severe ataxia involving both thoracic and pelvic limbs. Extradural spinal cord compression was identified by myelography. SEA suspected in calf 1 with discospondylitis was confirmed at necropsy whereas calf 2 had MRI identification of the lesion and was successfully decompressed by laminectomy and SEA excision. Both calves had peripheral neutrophilia and calf 2 had neutrophilic pleocytosis in CSF. Bacteria were not isolated from CSF, from the surgical site or during necropsy. Calf 2 improved neurologically and had a good long-term outcome.\n\n\nCONCLUSION\nGood outcome in a calf with SEA was obtained after adequate surgical decompression and antibiotic administration.\n\n\nCLINICAL RELEVANCE\nSEA should be included in the list of possible causes of fever, apparent spinal pain, and signs of myelopathy in calves.", "title": "" }, { "docid": "6c00347ffa60b09692bbae45a0c01fc1", "text": "OBJECTIVES:Eosinophilic gastritis (EG), defined by histological criteria as marked eosinophilia in the stomach, is rare, and large studies in children are lacking. We sought to describe the clinical, endoscopic, and histopathological features of EG, assess for any concurrent eosinophilia at other sites of the gastrointestinal (GI) tract, and evaluate response to dietary and pharmacological therapies.METHODS:Pathology files at our medical center were searched for histological eosinophilic gastritis (HEG) with ≥70 gastric eosinophils per high-power field in children from 2005 to 2011. Pathology slides were evaluated for concurrent eosinophilia in the esophagus, duodenum, and colon. Medical records were reviewed for demographic characteristics, symptoms, endoscopic findings, comorbidities, and response to therapy.RESULTS:Thirty children with severe gastric eosinophilia were identified, median age 7.5 years, 14 of whom had both eosinophilia limited to the stomach and clinical symptoms, fulfilling the clinicopathological definition of EG. Symptoms and endoscopic features were highly variable. History of atopy and food allergies was common. A total of 22% had protein-losing enteropathy (PLE). 
Gastric eosinophilia was limited to the fundus in two patients. Many patients had associated eosinophilic esophagitis (EoE, 43%) and 21% had eosinophilic enteritis. Response to dietary restriction therapy was high (82% clinical response and 78% histological response). Six out of sixteen patients had persistent EoE despite resolution of their gastric eosinophilia; two children with persistent HEG post therapy developed de novo concurrent EoE.CONCLUSIONS:HEG in children can be present in the antrum and/or fundus. Symptoms and endoscopic findings vary, highlighting the importance of biopsies for diagnosis. HEG is associated with PLE, and with eosinophilia elsewhere in the GI tract including the esophagus. The disease is highly responsive to dietary restriction therapies in children, implicating an allergic etiology. Associated EoE is more resistant to therapy.", "title": "" }, { "docid": "a830d1d83361c3432cd02c4bd0d57956", "text": "Recent fMRI evidence has detected increased medial prefrontal activation during contemplation of personal moral dilemmas compared to impersonal ones, which suggests that this cortical region plays a role in personal moral judgment. However, functional imaging results cannot definitively establish that a brain area is necessary for a particular cognitive process. This requires evidence from lesion techniques, such as studies of human patients with focal brain damage. Here, we tested 7 patients with lesions in the ventromedial prefrontal cortex and 12 healthy individuals in personal moral dilemmas, impersonal moral dilemmas and non-moral dilemmas. Compared to normal controls, patients were more willing to judge personal moral violations as acceptable behaviors in personal moral dilemmas, and they did so more quickly. In contrast, their performance in impersonal and non-moral dilemmas was comparable to that of controls. These results indicate that the ventromedial prefrontal cortex is necessary to oppose personal moral violations, possibly by mediating anticipatory, self-focused, emotional reactions that may exert strong influence on moral choice and behavior.", "title": "" }, { "docid": "51743d233ec269cfa7e010d2109e10a6", "text": "Stress is a part of every life to varying degrees, but individuals differ in their stress vulnerability. Stress is usefully viewed from a biological perspective; accordingly, it involves activation of neurobiological systems that preserve viability through change or allostasis. Although they are necessary for survival, frequent neurobiological stress responses increase the risk of physical and mental health problems, perhaps particularly when experienced during periods of rapid brain development. Recently, advances in noninvasive measurement techniques have resulted in a burgeoning of human developmental stress research. Here we review the anatomy and physiology of stress responding, discuss the relevant animal literature, and briefly outline what is currently known about the psychobiology of stress in human development, the critical role of social regulation of stress neurobiology, and the importance of individual differences as a lens through which to approach questions about stress experiences during development and child outcomes.", "title": "" }, { "docid": "d047231a67ca02c525d174b315a0838d", "text": "The goal of this article is to review the progress of three-electron spin qubits from their inception to the state of the art. We direct the main focus towards the exchange-only qubit (Bacon et al 2000 Phys. Rev. Lett. 
85 1758-61, DiVincenzo et al 2000 Nature 408 339) and its derived versions, e.g. the resonant exchange (RX) qubit, but we also discuss other qubit implementations using three electron spins. For each three-spin qubit we describe the qubit model, the envisioned physical realization, the implementations of single-qubit operations, as well as the read-out and initialization schemes. Two-qubit gates and decoherence properties are discussed for the RX qubit and the exchange-only qubit, thereby completing the list of requirements for quantum computation for a viable candidate qubit implementation. We start by describing the full system of three electrons in a triple quantum dot, then discuss the charge-stability diagram, restricting ourselves to the relevant subsystem, introduce the qubit states, and discuss important transitions to other charge states (Russ et al 2016 Phys. Rev. B 94 165411). Introducing the various qubit implementations, we begin with the exchange-only qubit (DiVincenzo et al 2000 Nature 408 339, Laird et al 2010 Phys. Rev. B 82 075403), followed by the RX qubit (Medford et al 2013 Phys. Rev. Lett. 111 050501, Taylor et al 2013 Phys. Rev. Lett. 111 050502), the spin-charge qubit (Kyriakidis and Burkard 2007 Phys. Rev. B 75 115324), and the hybrid qubit (Shi et al 2012 Phys. Rev. Lett. 108 140503, Koh et al 2012 Phys. Rev. Lett. 109 250503, Cao et al 2016 Phys. Rev. Lett. 116 086801, Thorgrimsson et al 2016 arXiv:1611.04945). The main focus will be on the exchange-only qubit and its modification, the RX qubit, whose single-qubit operations are realized by driving the qubit at its resonant frequency in the microwave range similar to electron spin resonance. Two different types of two-qubit operations are presented for the exchange-only qubits which can be divided into short-ranged and long-ranged interactions. Both of these interaction types are expected to be necessary in a large-scale quantum computer. The short-ranged interactions use the exchange coupling by placing qubits next to each other and applying exchange-pulses (DiVincenzo et al 2000 Nature 408 339, Fong and Wandzura 2011 Quantum Inf. Comput. 11 1003, Setiawan et al 2014 Phys. Rev. B 89 085314, Zeuch et al 2014 Phys. Rev. B 90 045306, Doherty and Wardrop 2013 Phys. Rev. Lett. 111 050503, Shim and Tahan 2016 Phys. Rev. B 93 121410), while the long-ranged interactions use the photons of a superconducting microwave cavity as a mediator in order to couple two qubits over long distances (Russ and Burkard 2015 Phys. Rev. B 92 205412, Srinivasa et al 2016 Phys. Rev. B 94 205421). The nature of the three-electron qubit states each having the same total spin and total spin in z-direction (same Zeeman energy) provides a natural protection against several sources of noise (DiVincenzo et al 2000 Nature 408 339, Taylor et al 2013 Phys. Rev. Lett. 111 050502, Kempe et al 2001 Phys. Rev. A 63 042307, Russ and Burkard 2015 Phys. Rev. B 91 235411). The price to pay for this advantage is an increase in gate complexity. We also take into account the decoherence of the qubit through the influence of magnetic noise (Ladd 2012 Phys. Rev. B 86 125408, Mehl and DiVincenzo 2013 Phys. Rev. B 87 195309, Hung et al 2014 Phys. Rev. B 90 045308), in particular dephasing due to the presence of nuclear spins, as well as dephasing due to charge noise (Medford et al 2013 Phys. Rev. Lett. 111 050501, Taylor et al 2013 Phys. Rev. Lett. 111 050502, Shim and Tahan 2016 Phys. Rev. B 93 121410, Russ and Burkard 2015 Phys. Rev. 
B 91 235411, Fei et al 2015 Phys. Rev. B 91 205434), fluctuations of the energy levels on each dot due to noisy gate voltages or the environment. Several techniques are discussed which partly decouple the qubit from magnetic noise (Setiawan et al 2014 Phys. Rev. B 89 085314, West and Fong 2012 New J. Phys. 14 083002, Rohling and Burkard 2016 Phys. Rev. B 93 205434) while for charge noise it is shown that it is favorable to operate the qubit on the so-called '(double) sweet spots' (Taylor et al 2013 Phys. Rev. Lett. 111 050502, Shim and Tahan 2016 Phys. Rev. B 93 121410, Russ and Burkard 2015 Phys. Rev. B 91 235411, Fei et al 2015 Phys. Rev. B 91 205434, Malinowski et al 2017 arXiv: 1704.01298), which are least susceptible to noise, thus providing a longer lifetime of the qubit.", "title": "" } ]
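[Editorial aside, not part of the dataset record above.] One passage in the negative_passages list closed above describes change-point monitoring (CPM) for SYN-flood detection using a nonparametric CUSUM over a protocol-behavior statistic. A minimal sketch of such a one-sided CUSUM detector follows; the drift and threshold values are arbitrary placeholders rather than the paper's tuned parameters.

```python
def cusum_change_point(samples, drift, threshold):
    """One-sided nonparametric CUSUM over a per-interval traffic statistic.

    samples   : e.g. a normalized (SYN - FIN) difference per observation interval
    drift     : upper bound expected under normal traffic (placeholder value)
    threshold : alarm level (placeholder value, tuned per deployment)
    """
    s = 0.0
    for t, x in enumerate(samples):
        s = max(0.0, s + (x - drift))   # accumulate only positive deviations
        if s > threshold:
            return t                    # interval index at which the alarm fires
    return None                         # no change point detected

# Toy usage: the statistic jumps when flooding starts at interval 50.
normal, attack = [0.05] * 50, [0.9] * 20
print(cusum_change_point(normal + attack, drift=0.2, threshold=2.0))   # 52
```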
scidocsrr
12bf1a4d6b8a7734ebd8b8f66881e47e
Exploration of efficient symmetric algorithms
[ { "docid": "abdf1edfb2b93b3991d04d5f6d3d63d3", "text": "With the rapid growing of internet and networks applications, data security becomes more important than ever before. Encryption algorithms play a crucial role in information security systems. In this paper, we have a study of the two popular encryption algorithms: DES and Blowfish. We overviewed the base functions and analyzed the security for both algorithms. We also evaluated performance in execution speed based on different memory sizes and compared them. The experimental results show the relationship between function run speed and memory size.", "title": "" }, { "docid": "604362129b2ed5510750cc161cf54bbf", "text": "The principal goal guiding the design of any encryption algorithm must be security against unauthorized attacks. However, for all practical applications, performance and speed are also important concerns. These are the two main characteristics that differentiate one encryption algorithm from another. This paper provides the performance comparison between four of the most commonly used encryption algorithms: DES(Data Encryption Standard), 3DES(Triple DES), BLOWFISH and AES (Rijndael). The comparison has been conducted by running several setting to process different sizes of data blocks to evaluate the algorithms encryption and decryption speed. Based on the performance analysis of these algorithms under different hardware and software platform, it has been concluded that the Blowfish is the best performing algorithm among the algorithms under the security against unauthorized attack and the speed is taken into consideration.", "title": "" } ]
[ { "docid": "e18d85d20fb633ae0c3d641fdddfc6d6", "text": "We propose a model for the neuronal implementation of selective visual attention based on temporal correlation among groups of neurons. Neurons in primary visual cortex respond to visual stimuli with a Poisson distributed spike train with an appropriate, stimulus-dependent mean firing rate. The spike trains of neurons whose receptive fields donot overlap with the “focus of attention” are distributed according to homogeneous (time-independent) Poisson process with no correlation between action potentials of different neurons. In contrast, spike trains of neurons with receptive fields within the focus of attention are distributed according to non-homogeneous (time-dependent) Poisson processes. Since the short-term average spike rates of all neurons with receptive fields in the focus of attention covary, correlations between these spike trains are introduced which are detected by inhibitory interneurons in V4. These cells, modeled as modified integrate-and-fire neurons, function as coincidence detectors and suppress the response of V4 cells associated with non-attended visual stimuli. The model reproduces quantitatively experimental data obtained in cortical area V4 of monkey by Moran and Desimone (1985).", "title": "" }, { "docid": "226750535735e3a13363e98594851f71", "text": "Synthesizing high resolution photorealistic images has been a long-standing challenge in machine learning. In this paper we introduce new methods for the improved training of generative adversarial networks (GANs) for image synthesis. We construct a variant of GANs employing label conditioning that results in 128× 128 resolution image samples exhibiting global coherence. We expand on previous work for image quality assessment to provide two new analyses for assessing the discriminability and diversity of samples from class-conditional image synthesis models. These analyses demonstrate that high resolution samples provide class information not present in low resolution samples. Across 1000 ImageNet classes, 128 × 128 samples are more than twice as discriminable as artificially resized 32× 32 samples. In addition, 84.7% of the classes have samples exhibiting diversity comparable to real ImageNet data.", "title": "" }, { "docid": "020f05ae272da143bc6e0d3096925582", "text": "In this study we describe a new approach to relate simulator sickness ratings with the main frequency component of the simulator motion mismatch, that is, the computed difference between the time histories of simulator motion and vehicle motion, respectively. During two driving simulator experiments in the TNO moving-base driving simulatorthat were performed for other reasons than the purpose of this studywe collected simulator sickness questionnaires from in total 58 subjects. The main frequency component was computed by means of the power spectrum density of the computed mismatch signal. We hypothesized that simulator sickness incidence depends on this frequency component, in a similar way as the incidence of real motion sickness, such as sea sickness, depends on motion frequency. The results show that the simulator sickness ratings differed between both driving simulator experiments. The experiment with its main frequency component of the mismatch signal of 0.08 Hz had significantly higher simulator sickness incidence than the experiment with its main frequency at 0.46 Hz. 
Since the experimental design differed between both experiments, we cannot exclusively attribute the difference in sickness ratings to the frequency component, but the observation does suggest that quantitative analysis of the mismatch between the motion profiles of the simulator and the vehicle may greatly improve our understanding of the causal mechanism of simulator sickness.", "title": "" }, { "docid": "f00b9a311fb8b14100465c187c9e4659", "text": "We propose a framework for solving combinatorial optimization problems of which the output can be represented as a sequence of input elements. As an alternative to the Pointer Network, we parameterize a policy by a model based entirely on (graph) attention layers, and train it efficiently using REINFORCE with a simple and robust baseline based on a deterministic (greedy) rollout of the best policy found during training. We significantly improve over state-of-the-art results for learning algorithms for the 2D Euclidean TSP, reducing the optimality gap for a single tour construction by more than 75% (to 0.33%) and 50% (to 2.28%) for instances with 20 and 50 nodes respectively.", "title": "" }, { "docid": "e99c12645fd14528a150f915b3849c2b", "text": "Teaching in the cyberspace classroom requires moving beyond old models of. pedagogy into new practices that are more facilitative. It involves much more than simply taking old models of pedagogy and transferring them to a different medium. Unlike the face-to-face classroom, in online distance education, attention needs to be paid to the development of a sense of community within the group of participants in order for the learning process to be successful. The transition to the cyberspace classroom can be successfully achieved if attention is paid to several key areas. These include: ensuring access to and familiarity with the technology in use; establishing guidelines and procedures which are relatively loose and free-flowing, and generated with significant input from participants; striving to achieve maximum participation and \"buy-in\" from the participants; promoting collaborative learning; and creating a double or triple loop in the learning process to enable participants to reflect on their learning process. All of these practices significantly contribute to the development of an online learning community, a powerful tool for enhancing the learning experience. Each of these is reviewed in detail in the paper. (AEF) Reproductions supplied by EDRS are the best that can be made from the original document. Making the Transition: Helping Teachers to Teach Online Rena M. Palloff, Ph.D. Crossroads Consulting Group and The Fielding Institute Alameda, CA", "title": "" }, { "docid": "7931fa9541efa9a006a030655c59c5f4", "text": "Natural language generation (NLG) is a critical component in a spoken dialogue system. This paper presents a Recurrent Neural Network based Encoder-Decoder architecture, in which an LSTM-based decoder is introduced to select, aggregate semantic elements produced by an attention mechanism over the input elements, and to produce the required utterances. The proposed generator can be jointly trained both sentence planning and surface realization to produce natural language sentences. The proposed model was extensively evaluated on four different NLG datasets. 
The experimental results showed that the proposed generators not only consistently outperform the previous methods across all the NLG domains but also show an ability to generalize from a new, unseen domain and learn from multi-domain datasets.", "title": "" }, { "docid": "bad5040a740421b3079c3fa7bf598d71", "text": "Deep Convolutional Neural Networks (CNNs) are a special type of Neural Networks, which have shown state-of-the-art results on various competitive benchmarks. The powerful learning ability of deep CNN is largely achieved with the use of multiple non-linear feature extraction stages that can automatically learn hierarchical representation from the data. Availability of a large amount of data and improvements in the hardware processing units have accelerated the research in CNNs and recently very interesting deep CNN architectures are reported. The recent race in deep CNN architectures for achieving high performance on the challenging benchmarks has shown that the innovative architectural ideas, as well as parameter optimization, can improve the CNN performance on various vision-related tasks. In this regard, different ideas in the CNN design have been explored such as use of different activation and loss functions, parameter optimization, regularization, and restructuring of processing units. However, the major improvement in representational capacity is achieved by the restructuring of the processing units. Especially, the idea of using a block as a structural unit instead of a layer is gaining substantial appreciation. This survey thus focuses on the intrinsic taxonomy present in the recently reported CNN architectures and consequently, classifies the recent innovations in CNN architectures into seven different categories. These seven categories are based on spatial exploitation, depth, multipath, width, feature map exploitation, channel boosting and attention. Additionally, it covers the elementary understanding of the CNN components and sheds light on the current challenges and applications of CNNs.", "title": "" }, { "docid": "e72f8ad61a7927fee8b0a32152b0aa4b", "text": "Geolocation prediction is vital to geospatial applications like localised search and local event detection. Predominately, social media geolocation models are based on full text data, including common words with no geospatial dimension (e.g. today) and noisy strings (tmrw), potentially hampering prediction and leading to slower/more memory-intensive models. In this paper, we focus on finding location indicative words (LIWs) via feature selection, and establishing whether the reduced feature set boosts geolocation accuracy. Our results show that an information gain ratiobased approach surpasses other methods at LIW selection, outperforming state-of-the-art geolocation prediction methods by 10.6% in accuracy and reducing the mean and median of prediction error distance by 45km and 209km, respectively, on a public dataset. We further formulate notions of prediction confidence, and demonstrate that performance is even higher in cases where our model is more confident, striking a trade-off between accuracy and coverage. Finally, the identified LIWs reveal regional language differences, which could be potentially useful for lexicographers.", "title": "" }, { "docid": "f0dbe9ad4934ff4d5857cfc5a876bcb6", "text": "Although pricing fraud is an important issue for improving service quality of online shopping malls, research on automatic fraud detection has been limited. 
In this paper, we propose an unsupervised learning method based on a finite mixture model to identify pricing frauds. We consider two states, normal and fraud, for each item according to whether an item description is relevant to its price by utilizing the known number of item clusters. Two states of an observed item are modeled as hidden variables, and the proposed models estimate the state by using an expectation maximization (EM) algorithm. Subsequently, we suggest a special case of the proposed model, which is applicable when the number of item clusters is unknown. The experiment results show that the proposed models are more effective in identifying pricing frauds than the existing outlier detection methods. Furthermore, it is presented that utilizing the number of clusters is helpful in facilitating the improvement of pricing fraud detection", "title": "" }, { "docid": "3f21b07dcdf17a220440db3d0336f989", "text": "Estimation of phase noise (PN) has become economically feasible for integration in many of today's semiconductors. Several contributions propose concepts that obtain a good estimate of the PN's power spectrum without even requiring a reference oscillator. Differently, in this work we aim to estimate the time domain representation of the residual PN in the intermediate frequency domain of a frequency modulated continuous wave (FMCW) radar transceiver. For that, an artificial on-chip target is utilized, which is to be incorporated into an existing monolithic microwave integrated circuit (MMIC). The estimated decorrelated phase noise is required for cancelation of short-range leakage originating from an unwanted signal reflection superimposing the overall channel response of the radar. We determine the minimum required delay such that the residual PN of the on-chip target exceeds the intrinsic noise. Further, three different realizations of the delay line in the MMIC are compared. We verify our analytical derivations with a full FMCW radar system simulation.", "title": "" }, { "docid": "9547ec27942f9439d18dbfecdda83e1c", "text": "Inverted pendulum system is a complicated, unstable and multivariable nonlinear system. In order to control the angle and displacement of inverted pendulum system effectively, a novel double-loop digital PID control strategy is presented in this paper. Based on impulse transfer function, the model of the single linear inverted pendulum system is divided into two parts according to the controlled parameters. The inner control loop that is formed by the digital PID feedback control can control the angle of the pendulum, while in order to control the cart displacement, the digital PID series control is adopted to form the outer control loop. The simulation results show the digital control strategy is very effective to single inverted pendulum and when the sampling period is selected as 50 ms, the performance of the digital control system is similar to that of the analog control system. Copyright © 2013 IFSA.", "title": "" }, { "docid": "f8aeaf04486bdbc7254846d95e3cab24", "text": "In this paper, we present a novel wearable RGBD camera based navigation system for the visually impaired. The system is composed of a smartphone user interface, a glass-mounted RGBD camera device, a real-time navigation algorithm, and haptic feedback system. A smartphone interface provides an effective way to communicate to the system using audio and haptic feedback. 
In order to extract orientational information of the blind users, the navigation algorithm performs real-time 6-DOF feature based visual odometry using a glass-mounted RGBD camera as an input device. The navigation algorithm also builds a 3D voxel map of the environment and analyzes 3D traversability. A path planner of the navigation algorithm integrates information from the egomotion estimation and mapping and generates a safe and efficient path to a waypoint delivered to the haptic feedback system. The haptic feedback system consisting of four micro-vibration motors is designed to guide the visually impaired user along the computed path and to minimize cognitive loads. The proposed system achieves real-time performance faster than 30 Hz on average on a laptop, and helps the visually impaired extend the range of their activities and improve their mobility performance in cluttered environments. The experimental results show that navigation in indoor environments with the proposed system successfully avoids collisions and improves the mobility performance of the user compared to conventional and state-of-the-art mobility aid devices.", "title": "" }, { "docid": "330bbffaefd9f5d165b8eca16db1f991", "text": "Peripheral vascular diseases (PVDs) are characterized as a circulation problem in the veins, arteries, and lymphatic system. The main therapy consists of changes in lifestyle such as diet and physical activity. The pharmacological therapy includes the use of vasoactive drugs, which are used in arteriopathies and venolymphatic disorders. The goal of this study was to research the scientific literature on the use and pharmacology of vasoactive drugs, emphasizing the efficacy of their local actions and administration.", "title": "" }, { "docid": "883a22f7036514d87ce3af86b5853de3", "text": "A wideband integrated RF duplexer supports 3G/4G bands I, II, III, IV, and IX, and achieves a TX-to-RX isolation of more than 55dB in the transmit-band, and greater than 45dB in the corresponding receive-band across 200MHz of bandwidth. A 65nm CMOS duplexer/LNA achieves a transmit insertion loss of 2.5dB, and a cascaded receiver noise figure of 5dB with more than 27dB of gain, exceeding the commercial external duplexers performance at considerably lower cost and area.", "title": "" }, { "docid": "2c3768ce9b2801e8a2ef50ccdfbfa3d3", "text": "Kelly's Criterion is well known among gamblers and investors as a method for maximizing the returns one would expect to observe over long periods of betting or investing. These ideas are conspicuously absent from portfolio optimization problems in the financial and automation literature. This paper will show how Kelly's Criterion can be incorporated into standard portfolio optimization models. The model developed here combines risk and return into a single objective function by incorporating a risk parameter. 
This model is then solved for a portfolio of 10 stocks from a major stock exchange using a differential evolution algorithm. Monte Carlo calculations are used to verify the accuracy of the results obtained from differential evolution. The results show that evolutionary algorithms can be successfully applied to solve a portfolio optimization problem where returns are calculated by applying Kelly's Criterion to each of the assets in the portfolio.", "title": "" }, { "docid": "1601469a8a05ede558d9e39f26dc1c61", "text": "machine code", "title": "" }, { "docid": "76e01466b9d7d4cbea714ce29f13759a", "text": "In this survey we review the image processing literature on the various approaches and models investigators have used for texture. These include statistical approaches of autocorrelation function, optical transforms, digital transforms, textural edgeness, structural element, gray tone cooccurrence, run lengths, and autoregressive models. We discuss and generalize some structural approaches to texture based on more complex primitives than gray tone. We conclude with some structural-statistical generalizations which apply the statistical techniques to the structural primitives.", "title": "" }, { "docid": "0e459d7e3ffbf23c973d4843f701a727", "text": "The role of psychological flexibility in mental health stigma and psychological distress for the stigmatizer.", "title": "" }, { "docid": "81f71bf0f923ff07a770ae30321382f6", "text": "The growth rate of scientific publication has been studied from 1907 to 2007 using available data from a number of literature databases, including Science Citation Index (SCI) and Social Sciences Citation Index (SSCI). Traditional scientific publishing, that is publication in peer-reviewed journals, is still increasing although there are big differences between fields. There are no indications that the growth rate has decreased in the last 50 years. At the same time publication using new channels, for example conference proceedings, open archives and home pages, is growing fast. The growth rate for SCI up to 2007 is smaller than for comparable databases. This means that SCI was covering a decreasing part of the traditional scientific literature. There are also clear indications that the coverage by SCI is especially low in some of the scientific areas with the highest growth rate, including computer science and engineering sciences. The role of conference proceedings, open access archives and publications published on the net is increasing, especially in scientific fields with high growth rates, but this has only partially been reflected in the databases. The new publication channels challenge the use of the big databases in measurements of scientific productivity or output and of the growth rate of science. Because of the declining coverage and this challenge it is problematic that SCI has been used and is used as the dominant source for science indicators based on publication and citation numbers. The limited data available for social sciences show that the growth rate in SSCI was remarkably low and indicate that the coverage by SSCI was declining over time. National Science Indicators from Thomson Reuters is based solely on SCI, SSCI and Arts and Humanities Citation Index (AHCI). 
Therefore the declining coverage of the citation databases problematizes the use of this source.", "title": "" }, { "docid": "cadc9c709b6bd8675d11f23a87d1165b", "text": "A restricted Boltzmann machine (RBM) is often used as a building block for constructing deep neural networks and deep generative models which have gained popularity recently as one way to learn complex and large probabilistic models. In these deep models, it is generally known that the layer-wise pretraining of RBMs facilitates finding a more accurate model for the data. It is, hence, important to have an efficient learning method for RBM. The conventional learning is mostly performed using the stochastic gradients, often, with the approximate method such as contrastive divergence (CD) learning to overcome the computational difficulty. Unfortunately, training RBMs with this approach is known to be difficult, as learning easily diverges after initial convergence. This difficulty has been reported recently by many researchers. This thesis contributes important improvements that address the difficulty of training RBMs. Based on an advanced Markov-Chain Monte-Carlo sampling method called parallel tempering (PT), the thesis proposes a PT learning which can replace CD learning. In terms of both the learning performance and the computational overhead, PT learning is shown to be superior to CD learning through various experiments. The thesis also tackles the problem of choosing the right learning parameter by proposing a new algorithm, the adaptive learning rate, which is able to automatically choose the right learning rate during learning. A closer observation into the update rules suggested that learning by the traditional update rules is easily distracted depending on the representation of data sets. Based on this observation, the thesis proposes a new set of gradient update rules that are more robust to the representation of training data sets and the learning parameters. Extensive experiments on various data sets confirmed that the proposed rules indeed improve learning significantly. Additionally, a Gaussian-Bernoulli RBM (GBRBM) which is a variant of an RBM that can learn continuous real-valued data sets is reviewed, and the proposed improvements are tested upon it. The experiments showed that the improvements could also be made for GBRBMs.", "title": "" } ]
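A minimal sketch of the two-state (normal/fraud) mixture idea described in the pricing-fraud passage above (docid f0dbe9ad…). It assumes a single numeric feature per item and univariate Gaussian components fitted with EM; the feature choice, the Gaussian form, and all names are illustrative assumptions, not details taken from that paper.

```python
# Hedged sketch: two-state (normal / fraud) univariate Gaussian mixture fitted
# with EM. The single scalar feature per item is an assumption for illustration.
import numpy as np

def em_two_state(x, n_iter=100, tol=1e-6):
    """Fit a 2-component Gaussian mixture to 1-D data x with EM."""
    x = np.asarray(x, dtype=float)
    mu = np.array([x.mean(), x.mean()])
    sigma = np.array([x.std() * 0.5 + 1e-6, x.std() * 2.0 + 1e-6])
    w = np.array([0.9, 0.1])                      # mixing weights
    prev_ll = -np.inf
    for _ in range(n_iter):
        # E-step: responsibility of each state for each observation
        dens = np.vstack([
            w[k] * np.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2)
            / (np.sqrt(2.0 * np.pi) * sigma[k])
            for k in range(2)
        ])
        total = dens.sum(axis=0) + 1e-300
        resp = dens / total
        # M-step: update weights, means, standard deviations
        nk = resp.sum(axis=1)
        w = nk / x.size
        mu = (resp * x).sum(axis=1) / nk
        sigma = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk) + 1e-6
        ll = np.log(total).sum()
        if ll - prev_ll < tol:
            break
        prev_ll = ll
    return w, mu, sigma, resp

# Toy usage: flag observations assigned to the wider ("fraud-like") component.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 1, 950), rng.normal(0, 6, 50)])
w_hat, mu_hat, sigma_hat, resp = em_two_state(data)
fraud_state = int(np.argmax(sigma_hat))
flags = resp[fraud_state] > 0.5
```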
scidocsrr
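The Kelly's Criterion portfolio passage in the record above pairs an expected-log-growth objective with a differential evolution solver. The sketch below only illustrates that pairing under stated assumptions: synthetic per-period return scenarios, a long-only and fully-invested portfolio obtained by normalising the weights, and SciPy's general-purpose differential_evolution rather than the authors' own implementation.

```python
# Sketch under assumptions: maximise expected log wealth growth (Kelly-style)
# over 10 assets using differential evolution. Return scenarios are synthetic.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(1)
n_assets, n_scenarios = 10, 2000
scenarios = rng.normal(loc=0.001, scale=0.02, size=(n_scenarios, n_assets))

def neg_log_growth(weights):
    w = np.abs(weights)
    w = w / w.sum() if w.sum() > 0 else np.full_like(w, 1.0 / w.size)
    growth = 1.0 + scenarios @ w            # wealth multiplier in each scenario
    if np.any(growth <= 0):                 # rule out paths that imply ruin
        return 1e6
    return -np.mean(np.log(growth))         # negative expected log growth

result = differential_evolution(neg_log_growth,
                                bounds=[(0.0, 1.0)] * n_assets,
                                seed=1, maxiter=200, tol=1e-7)
kelly_weights = np.abs(result.x) / np.abs(result.x).sum()
print(np.round(kelly_weights, 3))
```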
3c0d9d6073ad225dab47f67211d8ba31
A framework for inverse tone mapping
[ { "docid": "09c7331d77c5a9a2812df90e6e9256ea", "text": "We present a technique for approximating a light probe image as a constellation of light sources based on a median cut algorithm. The algorithm is efficient, simple to implement, and can realistically represent a complex lighting environment with as few as 64 point light sources.", "title": "" } ]
[ { "docid": "f9eed4f99d70c51dc626a61724540d3c", "text": "A soft-start circuit with soft-recovery function for DC-DC converters is presented in this paper. The soft-start strategy is based on a linearly ramped-up reference and an error amplifier with minimum selector implemented with a three-limb differential pair skillfully. The soft-recovery strategy is based on a compact clamp circuit. The ramp voltage would be clamped once the feedback voltage is detected lower than a threshold, which could control the output to be recovered slowly and linearly. A monolithic DC-DC buck converter with proposed circuit has been fabricated with a 0.5μm CMOS process for validation. The measurement result shows that the ramp-based soft-start and soft-recovery circuit have good performance and agree well with the theoretical analysis.", "title": "" }, { "docid": "3bca53a2d1aabf99a9a3744f651f5be1", "text": "One of the most valuable assets of an organization is its organizational data. The analysis and mining of this potential hidden treasure can lead to much added-value for the organization. Process mining is an emerging area that can be useful in helping organizations understand the status quo, check for compliance and plan for improving their processes. The aim of process mining is to extract knowledge from event logs of today's organizational information systems. Process mining includes three main types: discovering process models from event logs, conformance checking and organizational mining. In this paper, we briefly introduce process mining and review some of its most important techniques. Also, we investigate some of the applications of process mining in industry and present some of the most important challenges that are faced in this area.", "title": "" }, { "docid": "27e60092f83e7572a5a7776113d8c97c", "text": "Although cuckoo hashing has significant applications in both theoretical and practical settings, a relevant downside is that it requires lookups to multiple locations. In many settings, where lookups are expensive, cuckoo hashing becomes a less compelling alternative. One such standard setting is when memory is arranged in large pages, and a major cost is the number of page accesses. We propose the study of cuckoo hashing with pages, advocating approaches where each key has several possible locations, or cells, on a single page, and additional choices on a second backup page. We show experimentally that with k cell choices on one page and a single backup cell choice, one can achieve nearly the same loads as when each key has k+1 random cells to choose from, with most lookups requiring just one page access, even when keys are placed online using a simple algorithm. While our results are currently experimental, they suggest several interesting new open theoretical questions for cuckoo hashing with pages.", "title": "" }, { "docid": "f0689103ff20ab25d6d13a6ef44fb8f1", "text": "Nowadays, the exponential advancement of social networks is creating new application areas for recommender systems (RSs). People-to-people RSs aim to exploit user’s interests for suggesting relevant people to follow. However, traditional recommenders do not consider that people may share similar interests but might have different feelings or opinions about them. In this paper we propose a novel recommendation engine which relies on the identification of semantic attitudes, that is, sentiment, volume, and objectivity extracted from user-generated content. 
In order to do this at large-scale on traditional social networks, we devise a three-dimensional matrix factorization, one for each attitude. Potential temporal alterations of users’ attitudes are also taken into consideration in the factorization model. Extensive offline experiments on different real world datasets reveal the benefits of the proposed approach compared with some state-of-the-art techniques.", "title": "" }, { "docid": "1394eaac58304e5d6f951ca193e0be40", "text": "We introduce low-cost hardware for performing non-invasive side-channel attacks on Radio Frequency Identification Devices (RFID) and develop techniques for facilitating a correlation power analysis (CPA) in the presence of the field of an RFID reader. We practically verify the effectiveness of the developed methods by analysing the security of commercial contactless smartcards employing strong cryptography, pinpointing weaknesses in the protocol and revealing a vulnerability towards side-channel attacks. Employing the developed hardware, we present the first successful key-recovery attack on commercially available contactless smartcards based on the Data Encryption Standard (DES) or TripleDES (3DES) cipher that are widely used for security-sensitive applications, e.g., payment purposes.", "title": "" }, { "docid": "4772fb61d2a967470bdd0e9b3f2ead07", "text": "This study examined the relationships of three levels of reading fluency, the individual word, the syntactic unit, and the whole passage, to reading comprehension among 278 fifth graders heterogeneous in reading ability. Hierarchical regression analyses revealed that reading fluency at each level related uniquely to performance on a standardized reading comprehension test in a model including inferencing skill and background knowledge. The study supported an automaticity effect for word recognition speed and an automaticity-like effect related to syntactic processing skill. Additionally, hierarchical regressions using longitudinal data suggested that fluency and reading comprehension had a bidirectional relationship. The discussion emphasizes the theoretical expansion of reading fluency to three levels of cognitive processes and the relations of these processes to reading comprehension.", "title": "" }, { "docid": "d46434bbbf73460bf422ebe4bd65b590", "text": "We present an efficient block-diagonal approximation to the Gauss-Newton matrix for feedforward neural networks. Our resulting algorithm is competitive against state-of-the-art first-order optimisation methods, with sometimes significant improvement in optimisation performance. Unlike first-order methods, for which hyperparameter tuning of the optimisation parameters is often a laborious process, our approach can provide good performance even when used with default settings. A side result of our work is that for piecewise linear transfer functions, the network objective function can have no differentiable local maxima, which may partially explain why such transfer functions facilitate effective optimisation.", "title": "" }, { "docid": "74e05090fa4b82202ca19db38f46e171", "text": "In this paper, we present an automatic seeded region growing algorithm for color image segmentation. First, the input RGB color image is transformed into YCbCr color space. Second, the initial seeds are automatically selected. Third, the color image is segmented into regions where each region corresponds to a seed. Finally, region-merging is used to merge similar or small regions. 
Experimental results show that our algorithm can produce good results that compare favorably with some existing algorithms. © 2005 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "e418ccca35d3480145b79e129537e43c", "text": "Smart eyewear computing is a relatively new subcategory in ubiquitous computing research, which has enormous potential. In this paper we present a first evaluation of soon commercially available Electrooculography (EOG) glasses (J!NS MEME) for use in activity recognition. We discuss the potential of EOG glasses and other smart eye-wear. Afterwards, we show a first signal level assessment of MEME, and present a classification task using the glasses. We are able to distinguish 4 activities for 2 users (typing, reading, eating and talking) using the sensor data (EOG and acceleration) from the glasses with an accuracy of 70 % for 6 sec. windows and up to 100 % for a 1 minute majority decision. The classification is done user-independently. The results encourage us to further explore the EOG glasses as a platform for more complex, real-life activity recognition systems.", "title": "" }, { "docid": "c3182fada2dc486fb338654b885cbbfe", "text": "Traditional syllogisms involve sentences of the following simple forms: All X are Y , Some X are Y , No X are Y ; similar sentences with proper names as subjects, and identities between names. These sentences come with the natural semantics using subsets of a given universe, and so it is natural to ask about complete proof systems. Logical systems are important in this area due to the prominence of syllogistic arguments in human reasoning, and also to the role they have played in logic from Aristotle onwards. We present complete systems for the entire syllogistic fragment and many sub-fragments. These begin with the fragment of All sentences, for which we obtain one of the easiest completeness theorems in logic. The last system extends syllogistic reasoning with the classical boolean operations and cardinality comparisons.", "title": "" }, { "docid": "e6ba843b871f6783fb486ab598fd1027", "text": "To prevent the further loss of species from landscapes used for productive enterprises such as agriculture, forestry, and grazing, it is necessary to determine the composition, quantity, and configuration of landscape elements required to meet the needs of the species present. I present a multi-species approach for defining the attributes required to meet the needs of the biota in a landscape and the management regimes that should be applied. The approach builds on the concept of umbrella species, whose requirements are believed to encapsulate the needs of other species. It identifies a suite of “focal species,” each of which is used to define different spatial and compositional attributes that must be present in a landscape and their appropriate management regimes. All species considered at risk are grouped according to the processes that threaten their persistence. These threats may include habitat loss, habitat fragmentation, weed invasion, and fire. Within each group, the species most sensitive to the threat is used to define the minimum acceptable level at which that threat can occur. 
For example, the area requirements of the species most limited by the availability of particular habitats will define the minimum suitable area of those habitat types; the requirements of the most dispersal-limited species will define the attributes of connecting vegetation; species reliant on critical resources will define essential compositional attributes; and species whose populations are limited by processes such as fire, predation, or weed invasion will define the levels at which these processes must be managed. For each relevant landscape parameter, the species with the most demanding requirements for that parameter is used to define its minimum acceptable value. Because the most demanding species are selected, a landscape designed and managed to meet their needs will encompass the requirements of all other species. Especies Focales: Una Sombrilla Multiespecífica para Conservar la Naturaleza Resumen: Para evitar mayores pérdidas de especies en paisajes utilizados para actividades productivas como la agricultura, la ganadería y el pastoreo, es necesario determinar la composición, cantidad y configuración de elementos del paisaje que se requieren para satisfacer las necesidades de las especies presentes. Propongo un enfoque multiespecífico para definir los atributos requeridos para satisfacer las necesidades de la biota en un paisaje y los regímenes de manejo que deben ser aplicados. El enfoque se basa en el concepto de las especies sombrilla, de las que se piensa que sus requerimientos engloban a las necesidades de otras especies. El concepto identifica una serie de “especies focales”, cada una de las cuales se utiliza para definir distintos atributos espaciales y de composición que deben estar presentes en un paisaje, así como sus requerimientos adecuados de manejo. Todas las especies consideradas en riesgo se agrupan de acuerdo con los procesos que amenazan su persistencia. Estas amenazas pueden incluir pérdida de hábitat, fragmentación de hábitat, invasión de hierbas y fuego. Dentro de cada grupo, se utiliza a la especie más sensible a la amenaza para definir el nivel mínimo aceptable en que la amenaza ocurre. Por ejemplo, los requerimientos espaciales de especies limitadas por la disponibilidad de hábitats particulares definirán el área mínima adecuada de esos tipos de hábitat; los requerimientos de la especie más limitada en su dispersión definirán los atributos de la vegetación conectante, las especies dependientes de recursos críticos definirán los atributos de composición esenciales; y especies cuyas poblaciones están limitadas por procesos como el fuego, la depredación o invasión de hierbas definirán los niveles en que deberán manejarse estos procesos. Para cada parámetro relevante del Paper submitted September 19, 1996; revised manuscript accepted February 24, 1997. 850 Focal Species for Nature Conservation Lambeck Conservation Biology Volume 11, No. 4, August 1997 Introduction Throughout the world, changing patterns of land use have resulted in the loss of natural habitat and the increasing fragmentation of that which remains. Not only have these changes altered habitat composition and configuration, but they have modified the rates and intensities of many ecological processes essential for ecosystems to retain their integrity. As a consequence, many landscapes that are being used for productive purposes such as agriculture, grazing, and forestry, are suffering species declines and losses (Saunders 1989; Saunders et al. 1991; Hobbs et al. 1993). 
Attempts to prevent further loss of biological diversity from such landscapes requires a capacity to define the spatial, compositional, and functional attributes that must be present if the needs of the plants and animals are to be met. There has been considerable debate in the ecological literature about whether the requirements of single species should serve as the basis for defining conservation requirements or whether the analysis of landscape pattern and process should underpin conservation planning (Franklin 1993; Hansen et al. 1993; Orians 1993; Franklin 1994; Hobbs 1994; Tracy & Brussard 1994). Speciesbased approaches have taken the form of either singlespecies studies, often targeted at rare or vulnerable species, or the study of groups of species considered to represent components of biodiversity (Soulé & Wilcox 1980; Simberloff 1988; Wilson & Peter 1988; Pimm & Gilpin 1989; Brussard 1991; Kohm 1991). Species-based approaches have been criticized on the grounds that they do not provide whole-landscape solutions to conservation problems, that they cannot be conducted at a rate sufficient to deal with the urgency of the threats, and that they consume a disproportionate amount of conservation funding (Franklin 1993; Hobbs 1994; Walker 1995). Consequently, critics of single-species studies are calling for approaches that consider higher levels of organization such as ecosystems and landscapes (Noss 1983; Noss & Harris 1986; Noss 1987; Gosselink et al . 1990; Dyer & Holland 1991; Salwasser 1991; Franklin 1993; Hobbs 1994). These alternative approaches place a greater emphasis on the relationship between landscape pattern and processes and community measures such as species diversity or species richness (Janzen 1983; Newmark 1985; Saunders et al . 1991; Anglestam 1992; Hobbs 1993, 1994). Although approaches that consider pattern and processes at a landscape scale help to identify the elements that need to be present in a landscape, they are unable to define the appropriate quantity and distribution of those elements. Such approaches have tended, by and large, to be descriptive. They can identify relationships between landscape patterns and measures such as species richness, but they are unable to define the composition, configuration, and quantity of landscape features required for a landscape to retain its biota. Ultimately, questions such as what type of pattern is required in a landscape, or at what rate a given process should proceed, cannot be answered without reference to the needs of the species in that landscape. Therefore, we cannot ignore the requirements of species if we wish to define the characteristics of a landscape that will ensure their retention. The challenge then is to find an efficient means of meeting the needs of all species without studying each one individually. In order to overcome this dilemma, proponents of single-species studies have developed the concept of umbrella species (Murphy & Wilcox 1986; Noss 1990; Cutler 1991; Ryti 1992; Hanley 1993; Launer & Murphy 1994; Williams & Gaston 1994). These are species whose requirements for persistence are believed to encapsulate those of an array of additional species. The attractiveness of umbrella species to land managers is obvious. If it is indeed possible to manage a whole community or ecosystem by focusing on the needs of one or a few species, then the seemingly intractable problem of considering the needs of all species is resolved. 
Species as diverse as Spotted Owls (Franklin 1994), desert tortoises (Tracy & Brussard 1994), black-tailed deer (Hanley 1993) and butterflies (Launer & Murphy 1994) have been proposed to serve an umbrella function for the ecosystems in which they occur. But given that the majority of species within an ecosystem have widely differing habitat requirements, it seems unlikely that any single species could serve as an umbrella for all others. As Franklin (1994) points out, landscapes designed and managed around the needs of single species may fail to capture other critical elements of the ecosystems in which they occur. It would therefore appear that if the concept of umbrella species is to be useful, it will be necessary to search for multi-species approaches that identify a set of species whose spatial, compositional, and functional requirements encompass those of all other species in the region. I present a method for selecting, from the total pool of species in a landscape, a subset of “focal species” whose paisaje, se utiliza a la especies con los mayores requerimientos para ese parámetro para definir su valor aceptable mínimo. Debido a que se seleccionan las especies más demandantes, un paisaje diseñado y manejado para satisfacer sus necesidades abarcará los requerimientos de todas las demás especies.", "title": "" }, { "docid": "00549502ab17ccdc5dad6c14a42c73e6", "text": "This paper examined the relationships between the experiences and perceptions of racism and the physical and mental health status of African Americans. The study was based upon thirteen year (1979 to 1992), four wave, national panel data (n = 623) from the National Survey of Black Americans. Personal experiences of racism were found to have both adverse and salubrious immediate and cumulative effects on the physical and mental well-being of African Americans. In 1979-80, reports of poor treatment due to race were inversely related to subjective well-being and positively associated with the number of reported physical health problems. Reports of negative racial encounters over the 13-year period were weakly predictive of poor subjective well-being in 1992. A more general measure of racial beliefs, perceiving that whites want to keep blacks down, was found to be related to poorer physical health in 1979-80, better physical health in 1992, and predicted increased psychological distress, as well as, lower levels of subjective well-being in 1992. In conclusion, the authors suggested future research on possible factors contributing to the relationship between racism and health status among African Americans.", "title": "" }, { "docid": "272ea1688bad5400ef61bc27480383f0", "text": "Syntax Description Remarks and examples References Also see Syntax Cluster analysis of data cluster subcommand. .. Cluster analysis of a dissimilarity matrix clustermat subcommand. .. Description Stata's cluster-analysis routines provide several hierarchical and partition clustering methods, postclustering summarization methods, and cluster-management tools. This entry presents an overview of cluster analysis, the cluster and clustermat commands (also see [MV] clustermat), as well as Stata's cluster-analysis management tools. The hierarchical clustering methods may be applied to the data by using the cluster command or to a user-supplied dissimilarity matrix by using the clustermat command. The cluster command has the following subcommands, which are detailed in their respective manual entries. 
Partition-clustering methods for observations kmeans [MV] cluster kmeans and kmedians Kmeans cluster analysis kmedians [MV] cluster kmeans and kmedians Kmedians cluster analysis Hierarchical clustering methods for observations singlelinkage [MV] cluster linkage Single-linkage cluster analysis averagelinkage [MV] cluster linkage Average-linkage cluster analysis completelinkage [MV] cluster linkage Complete-linkage cluster analysis waveragelinkage [MV] cluster linkage Weighted-average linkage cluster analysis medianlinkage [MV] cluster linkage Median-linkage cluster analysis centroidlinkage [MV] cluster linkage Centroid-linkage cluster analysis wardslinkage [MV] cluster linkage Ward's linkage cluster analysis Postclustering commands stop [MV] cluster stop Cluster-analysis stopping rules dendrogram [MV] cluster dendrogram Dendrograms for hierarchical cluster analysis generate [MV] cluster generate Generate summary or grouping variables from a cluster analysis 1 2 cluster — Introduction to cluster-analysis commands User utilities notes [MV] cluster notes Place notes in cluster analysis dir [MV] cluster utility Directory list of cluster analyses list [MV] cluster utility List cluster analyses drop [MV] cluster utility Drop cluster analyses use [MV] cluster utility Mark cluster analysis as most recent one rename [MV] cluster utility Rename cluster analyses renamevar [MV] cluster utility Rename cluster-analysis variables Programmer utilities [MV] cluster programming subroutines Add cluster-analysis routines query [MV] cluster programming utilities Obtain cluster-analysis attributes set [MV] cluster programming utilities Set cluster-analysis attributes delete [MV] cluster programming utilities Delete cluster-analysis attributes parsedistance [MV] cluster programming utilities Parse (dis)similarity measure names measures [MV] cluster programming utilities Compute (dis)similarity measures The clustermat command has the following subcommands, which are detailed along with the related cluster command manual entries. Also see [MV] clustermat. Hierarchical clustering methods for matrices singlelinkage [MV] cluster linkage Single-linkage cluster analysis averagelinkage [MV] cluster linkage Average-linkage cluster analysis completelinkage [MV] cluster linkage Complete-linkage cluster analysis waveragelinkage [MV] cluster linkage Weighted-average linkage cluster analysis medianlinkage [MV] cluster linkage Median-linkage cluster analysis centroidlinkage [MV] cluster linkage …", "title": "" }, { "docid": "b18af8eaea5dd26cdac5d13c7aa6ce4c", "text": "Botnet is one of the most serious threats to cyber security as it provides a distributed platform for several illegal activities. Regardless of the availability of numerous methods proposed to detect botnets, still it is a challenging issue as botmasters are continuously improving bots to make them stealthier and evade detection. Most of the existing detection techniques cannot detect modern botnets in an early stage, or they are specific to command and control protocol and structures. In this paper, we propose a novel approach to detect botnets irrespective of their structures, based on network traffic flow behavior analysis and machine learning techniques. The experimental evaluation of the proposed method with real-world benchmark datasets shows the efficiency of the method. Also, the system is able to identify the new botnets with high detection accuracy and low false positive rate. © 2016 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "c852c9db867670be2b1d0e85f3a80246", "text": "This study investigated how celebrities' self-disclosure on personal social media accounts, particularly Twitter, affects fans' perceptions. An online survey was utilized among a sample of 429 celebrity followers on Twitter. Results demonstrated that celebrities' professional self-disclosure (e.g., sharing their work-related life), personal self-disclosure (e.g., sharing their personal life such as friends and family), and fans' retweeting behavior, enhanced fans’ feeling of social presence, thereby positively affecting parasocial interaction with celebrities. Further, the study found that the effects of self-disclosure and retweeting on parasocial interaction were mediated by social presence. Implications and future research directions are provided. © 2016 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "745a89e24f439b6f31cdadea25386b17", "text": "Developmental imaging studies show that cortical grey matter decreases in volume during childhood and adolescence. However, considerably less research has addressed the development of subcortical regions (caudate, putamen, pallidum, accumbens, thalamus, amygdala, hippocampus and the cerebellar cortex), in particular not in longitudinal designs. We used the automatic labeling procedure in FreeSurfer to estimate the developmental trajectories of the volume of these subcortical structures in 147 participants (age 7.0-24.3years old, 94 males; 53 females) of whom 53 participants were scanned twice or more. A total of 223 magnetic resonance imaging (MRI) scans (acquired at 1.5-T) were analyzed. Substantial diversity in the developmental trajectories was observed between the different subcortical gray matter structures: the volume of caudate, putamen and nucleus accumbens decreased with age, whereas the volume of hippocampus, amygdala, pallidum and cerebellum showed an inverted U-shaped developmental trajectory. The thalamus showed an initial small increase in volume followed by a slight decrease. All structures had a larger volume in males than females over the whole age range, except for the cerebellum that had a sexually dimorphic developmental trajectory. Thus, subcortical structures appear to not yet be fully developed in childhood, similar to the cerebral cortex, and continue to show maturational changes into adolescence. In addition, there is substantial heterogeneity between the developmental trajectories of these structures.", "title": "" }, { "docid": "8f6add3adeb6b1b5a6aa4fb01e5de2a0", "text": "Growing evidence demonstrates that psychological risk variables can contribute to physical disease. In an effort to thoroughly investigate potential etiological origins and optimal interventions, this broad review is divided into five sections: the stress response, chronic diseases, mind-body theoretical models, psychophysiological interventions, and integrated health care solutions. The stress response and its correlation to chronic disorders such as cardiovascular, gastrointestinal, autoimmune, metabolic syndrome, and chronic pain are comprehensively explored. Current mind-body theoretical models, including peripheral nerve pathway, neurophysiological, and integrative theories, are reviewed to elucidate the biological mechanisms behind psychophysiological interventions. Specific interventions included are psychotherapy, mindfulness meditation, yoga, and psychopharmacology. 
Finally, the author advocates for an integrated care approach as a means by which to blur the sharp distinction between physical and psychological health. Integrated care approaches can utilize psychiatric nurse practitioners for behavioral assessment, intervention, research, advocacy, consultation, and education to optimize health outcomes.", "title": "" }, { "docid": "2777fdcc4442c3d63b51b92710f3914d", "text": "Non-invasive pressure simulators that regenerate oscillometric waveforms promise an alternative to expensive clinical trials for validating oscillometric noninvasive blood pressure devices. However, existing simulators only provide oscillometric pressure in cuff and thus have a limited accuracy. It is promising to build a physical simulator that contains a synthetic arm with a built-in brachial artery and an affiliated hydraulic model of cardiovascular system. To guide the construction of this kind of simulator, this paper presents a computer model of cardiovascular system with a relatively simple structure, where the distribution of pressures and flows in aorta root and brachial artery can be simulated, and the produced waves are accordant with the physical data. This model can be used to provide the parameters and structure that will be needed to build the new simulator.", "title": "" } ]
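Among the negative passages above is a seeded region growing method for colour segmentation (docid 74e05090…) that works in YCbCr. The sketch below only illustrates the grow-from-seeds step: the RGB-to-YCbCr conversion uses the standard BT.601 matrix, regions are grown one seed at a time with a fixed colour-distance threshold, and the automatic seed selection and final region-merging stages of that paper are not reproduced.

```python
# Simplified grow-from-seeds step; seed selection, simultaneous growth and
# region merging from the cited method are intentionally left out.
import numpy as np
from collections import deque

def rgb_to_ycbcr(img):
    img = img.astype(np.float64)
    y  =  0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]
    cb = 128 - 0.168736 * img[..., 0] - 0.331264 * img[..., 1] + 0.5 * img[..., 2]
    cr = 128 + 0.5 * img[..., 0] - 0.418688 * img[..., 1] - 0.081312 * img[..., 2]
    return np.stack([y, cb, cr], axis=-1)

def grow_regions(rgb, seeds, thresh=12.0):
    """seeds: list of (row, col) pixels. Returns a label map (0 = unassigned)."""
    ycc = rgb_to_ycbcr(rgb)
    h, w, _ = ycc.shape
    labels = np.zeros((h, w), dtype=int)
    means, counts, queues = {}, {}, []
    for idx, (r, c) in enumerate(seeds, start=1):
        labels[r, c] = idx
        means[idx] = ycc[r, c].copy()
        counts[idx] = 1
        queues.append(deque([(r, c)]))
    for label, q in enumerate(queues, start=1):
        while q:
            r, c = q.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w and labels[rr, cc] == 0:
                    if np.linalg.norm(ycc[rr, cc] - means[label]) < thresh:
                        labels[rr, cc] = label
                        counts[label] += 1
                        means[label] += (ycc[rr, cc] - means[label]) / counts[label]
                        q.append((rr, cc))
    return labels
```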
scidocsrr
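The record above also contains a passage on correlation power analysis (CPA) against contactless smartcards (docid 1394eaac…). The skeleton below shows only the generic CPA correlation step, with a Hamming-weight leakage model, a toy 4-bit S-box and simulated traces; the actual attack in that passage targets DES/3DES hardware and is considerably more involved, so none of the constants here come from it.

```python
# Toy CPA: correlate a Hamming-weight hypothesis with simulated power traces.
import numpy as np

SBOX = np.array([0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
                 0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7])   # toy 4-bit S-box

def hamming_weight(values):
    bits = np.unpackbits(np.asarray(values, dtype=np.uint8)[:, None], axis=1)
    return bits.sum(axis=1)

rng = np.random.default_rng(0)
true_key, n_traces, n_samples, leak_at = 0xB, 2000, 50, 23
plaintexts = rng.integers(0, 16, size=n_traces)
traces = rng.normal(0.0, 1.0, size=(n_traces, n_samples))
traces[:, leak_at] += hamming_weight(SBOX[plaintexts ^ true_key])   # injected leakage

best_key, best_corr = None, -1.0
for guess in range(16):
    model = hamming_weight(SBOX[plaintexts ^ guess]).astype(float)
    m = model - model.mean()
    t = traces - traces.mean(axis=0)
    # Pearson correlation of the hypothesis with every time sample
    corr = (m @ t) / (np.linalg.norm(m) * np.linalg.norm(t, axis=0) + 1e-12)
    peak = float(np.max(np.abs(corr)))
    if peak > best_corr:
        best_key, best_corr = guess, peak
print(best_key == true_key, round(best_corr, 3))
```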
8d8cb052d946217a5717475058bdf29f
Monocular precrash vehicle detection: features and classifiers
[ { "docid": "33e41cf93ec8bb99c215dbce4afc34f8", "text": "This paper presents a general, trainable system for object detection in unconstrained, cluttered scenes. The system derives much of its power from a representation that describes an object class in terms of an overcomplete dictionary of local, oriented, multiscale intensity differences between adjacent regions, efficiently computable as a Haar wavelet transform. This example-based learning approach implicitly derives a model of an object class by training a support vector machine classifier using a large set of positive and negative examples. We present results on face, people, and car detection tasks using the same architecture. In addition, we quantify how the representation affects detection performance by considering several alternate representations including pixels and principal components. We also describe a real-time application of our person detection system as part of a driver assistance system.", "title": "" } ]
[ { "docid": "25063f744836d2b245bfe3c658ff5285", "text": "Nowadays security has become an important aspect in information systems engineering. A mainstream method for information system security is Role-based Access Control (RBAC), which restricts system access to authorised users. While the benefits of RBAC are widely acknowledged, the implementation and administration of RBAC policies remains a human intensive activity, typically postponed until the implementation and maintenance phases of system development. This deferred security engineering approach makes it difficult for security requirements to be accurately captured and for the system’s implementation to be kept aligned with these requirements as the system evolves. In this paper we propose a model-driven approach to manage SQL database access under the RBAC paradigm. The starting point of the approach is an RBAC model captured in SecureUML. This model is automatically translated to Oracle Database views and instead-of triggers code, which implements the security constraints. The approach has been fully instrumented as a prototype and its effectiveness has been validated by means of a case study.", "title": "" }, { "docid": "f6deeee48e0c8f1ed1d922093080d702", "text": "Foreword: The ACM SIGCHI (Association for Computing Machinery Special Interest Group in Computer Human Interaction) community conducted a deliberative process involving a high-visibility committee, a day-long workshop at CHI99 (Pittsburgh, PA, May 15, 1999) and a collaborative authoring process. This interim report is offered to produce further discussion and input leading to endorsement by the SIGCHI Executive Committee and then other professional societies. The scope of this research agenda included advanced information and communications technology research that could yield benefits over the next two to five years.", "title": "" }, { "docid": "30b77626604d8d258ad77146e3ff7a2d", "text": "A compact single-feed circularly-polarized (CP) wide beam microstrip antenna is proposed for CNSS application. The antenna is designed with a double-layer structure, comprising a circular patch with two rectangular stubs along the diameter direction and a parasitic ring right above it. The resonance frequency and the CP characteristics are mainly controlled by the circular patch and the rectangular stubs, respectively. The vertical HPBW (half power beam width) could be widened by the parasitic ring. Experimental results show that the measured vertical HPBW is approximately 140° and the measured out-of-roundness for the horizontal radiation pattern is only 1.1 dB. Besides, it could maintain good low-profile characteristics.", "title": "" }, { "docid": "252f5488232f7437ff886b79e2e7014e", "text": "Typical video footage captured using an off-the-shelf camcorder suffers from limited dynamic range. This paper describes our approach to generate high dynamic range (HDR) video from an image sequence of a dynamic scene captured while rapidly varying the exposure of each frame. Our approach consists of three parts: automatic exposure control during capture, HDR stitching across neighboring frames, and tonemapping for viewing. HDR stitching requires accurately registering neighboring frames and choosing appropriate pixels for computing the radiance map. We show examples for a variety of dynamic scenes. 
We also show how we can compensate for scene and camera movement when creating an HDR still from a series of bracketed still photographs.", "title": "" }, { "docid": "597ce6d64e8a65f20e605533f4602eba", "text": "Detailed scanning of indoor scenes is tedious for humans. We propose autonomous scene scanning by a robot to relieve humans from such a laborious task. In an autonomous setting, detailed scene acquisition is inevitably coupled with scene analysis at the required level of detail. We develop a framework for object-level scene reconstruction coupled with object-centric scene analysis. As a result, the autoscanning and reconstruction will be object-aware, guided by the object analysis. The analysis is, in turn, gradually improved with progressively increased object-wise data fidelity. In realizing such a framework, we drive the robot to execute an iterative analyze-and-validate algorithm which interleaves between object analysis and guided validations.\n The object analysis incorporates online learning into a robust graph-cut based segmentation framework, achieving a global update of object-level segmentation based on the knowledge gained from robot-operated local validation. Based on the current analysis, the robot performs proactive validation over the scene with physical push and scan refinement, aiming at reducing the uncertainty of both object-level segmentation and object-wise reconstruction. We propose a joint entropy to measure such uncertainty based on segmentation confidence and reconstruction quality, and formulate the selection of validation actions as a maximum information gain problem. The output of our system is a reconstructed scene with both object extraction and object-wise geometry fidelity.", "title": "" }, { "docid": "35c5f8dcf914b7381041b2e5f6a17507", "text": "This paper proposes a new high efficiency single phase transformerless grid-tied photovoltaic (PV) inverter by using super-junction MOSFETs as main power switches. No reverse recovery issues are required for the main power switches and the blocking voltages across the switches are half of the DC input voltage in the proposed topology. Therefore, the super-junction MOSFETs have been used to improve the efficiency. Two additional switches with the conventional full H-bridge topology, make sure the disconnection of PV module from the grid at the freewheeling mode. As a result, the high frequency common mode (CM) voltage which leads leakage current is minimized. PWM dead time is not necessary for the proposed topology which reduces the distortion of the AC output current. The efficiency at light load is increased by using MOSFET as main power switches which increases the European Union (EU) efficiency of the proposed topology. The proposed inverter can also operate with high frequency by retaining high efficiency which enables reduced cooling system. The total semiconductor device losses for the proposed topology and several existing topologies are calculated and compared. Finally, the proposed new topology is simulated by MATLAB/Simulink software to validate the accuracy of the theoretical explanation. It is being manufactured to verify the experimental results.", "title": "" }, { "docid": "565efa7a51438990b3d8da6222dca407", "text": "The collection of huge amount of tracking data made possible by the widespread use of GPS devices, enabled the analysis of such data for several applications domains, ranging from traffic management to advertisement and social studies. 
However, the raw positioning data, as it is detected by GPS devices, lacks semantic information since this data does not natively provide any additional contextual information like the places that people visited or the activities performed. Traditionally, this information is collected by hand-filled questionnaires where a limited number of users are asked to annotate their tracks with the activities they have done. With the purpose of getting a large amount of semantically rich trajectories, we propose an algorithm for automatically annotating raw trajectories with the activities performed by the users. To do this, we analyse the stop points trying to infer the Point Of Interest (POI) the user has visited. Based on the category of the POI and a probability measure based on the gravity law, we infer the activity performed. We experimented and evaluated the method in a real case study of car trajectories, manually annotated by users with their activities. Experimental results are encouraging and will drive our future work.", "title": "" }, { "docid": "f08d5e22264bf287355308330f67d564", "text": "Group-by is a core database operation that is used extensively in OLTP, OLAP, and decision support systems. In many application scenarios, it is required to group similar but not necessarily equal values. In this paper we propose a new SQL construct that supports similarity-based Group-by (SGB). SGB is not a new clustering algorithm, but rather is a practical and fast similarity grouping query operator that is compatible with other SQL operators and can be combined with them to answer similarity-based queries efficiently. In contrast to expensive clustering algorithms, the proposed similarity group-by operator maintains low execution times while still generating meaningful groupings that address many application needs. The paper presents a general definition of the similarity group-by operation and gives three instances of this definition. The paper also discusses how optimization techniques for the regular group-by can be extended to the case of SGB. The proposed operators are implemented inside PostgreSQL. The performance study shows that the proposed similarity-based group-by operators have good scalability properties with at most only 25% increase in execution time over the regular group-by.", "title": "" }, { "docid": "27bf126da661051da506926f7d9de632", "text": "In this paper, we propose a novel implementation of a simultaneous localization and mapping (SLAM) system based on a monocular camera from an unmanned aerial vehicle (UAV) using depth prediction performed with Capsule Networks (CapsNet), which possess improvements over the drawbacks of the more widely-used Convolutional Neural Networks (CNN). An Extended Kalman Filter will assist in estimating the position of the UAV so that we are able to update the belief for the environment. Results will be evaluated on a benchmark dataset to portray the accuracy of our intended approach.", "title": "" }, { "docid": "d7102755d7934532e1de73815e282f27", "text": "We present an application of Monte Carlo tree search (MCTS) for the game of Ms Pac-Man. Contrary to most applications of MCTS to date, Ms Pac-Man requires almost real-time decision making and does not have a natural end state. We approached the problem by performing Monte Carlo tree searches on a five player maxn tree representation of the game with limited tree search depth. 
We performed a number of experiments using both the MCTS game agents (for pacman and ghosts) and agents used in previous work (for ghosts). Performance-wise, our approach gets excellent scores, outperforming previous non-MCTS opponent approaches to the game by up to two orders of magnitude.", "title": "" }, { "docid": "ac7b607cc261654939868a62822a58eb", "text": "Interdigitated capacitors (IDC) are extensively used for a variety of chemical and biological sensing applications. Printing and functionalizing these IDC sensors on bendable substrates will lead to new innovations in healthcare and medicine, food safety inspection, environmental monitoring, and public security. The synthesis of an electrically conductive aqueous graphene ink stabilized in deionized water using the polymer Carboxymethyl Cellulose (CMC) is introduced in this paper. CMC is a nontoxic hydrophilic cellulose derivative used in food industry. The water-based graphene ink is then used to fabricate IDC sensors on mechanically flexible polyimide substrates. The capacitance and frequency response of the sensors are analyzed, and the effect of mechanical stress on the electrical properties is examined. Experimental results confirm low thin film resistivity (~6;.6×10-3 Ω-cm) and high capacitance (>100 pF). The printed sensors are then used to measure water content of ethanol solutions to demonstrate the proposed conductive ink and fabrication methodology for creating chemical sensors on thin membranes.", "title": "" }, { "docid": "e12d800b09f2f8f19a138b25d8a8d363", "text": "This paper proposes a corpus-based approach for answering why-questions. Conventional systems use hand-crafted patterns to extract and evaluate answer candidates. However, such hand-crafted patterns are likely to have low coverage of causal expressions, and it is also difficult to assign suitable weights to the patterns by hand. In our approach, causal expressions are automatically collected from corpora tagged with semantic relations. From the collected expressions, features are created to train an answer candidate ranker that maximizes the QA performance with regards to the corpus of why-questions and answers. NAZEQA, a Japanese why-QA system based on our approach, clearly outperforms a baseline that uses hand-crafted patterns with a Mean Reciprocal Rank (top-5) of 0.305, making it presumably the best-performing fully implemented why-QA system.", "title": "" }, { "docid": "38c922ff8763d1a03b8beb37cc7bd4bb", "text": "As the number of devices connected to the Internet has been exponentially increasing, the degree of threats to those devices and networks has been also increasing. Various network scanning tools, which use fingerprinting techniques, have been developed to make the devices and networks secure by providing the information on its status. However, the tools may be used for malicious purposes. Using network scanning tools, attackers can not only obtain the information of devices such as the name of OS, version, and sessions but also find its vulnerabilities which can be used for further cyber-attacks. In this paper, we compare and analyze the performances of widely used network scanning tools such as Nmap and Nessus. The existing researches on the network scanning tools analyzed a specific scanning tools and they assumed there are only small number of network devices. In this paper, we compare and analyze the performances of several tools in practical network environments with the number of devices more than 40. 
The results of this paper provide direction for preventing possible attacks when such tools are utilized as attack tools, as well as a practical understanding of the threats posed by network scanning tools and fingerprinting techniques.", "title": "" }, { "docid": "c74290691708a5ef66209369c8a377af", "text": "Network traffic has traditionally exhibited temporal locality in the header field of packets. Such locality is intuitive and is a consequence of the semantics of network protocols. However, in contrast, the locality in the packet payload has not been studied in significant detail. In this work we study temporal locality in the packet payload. Temporal locality can also be viewed as redundancy, and we observe significant redundancy in the packet payload. We investigate mechanisms to exploit it in a networking application. We choose Intrusion Detection Systems (IDS) as a case study. An IDS like the popular Snort operates by scanning packet payload for known attack strings. It first builds a Finite State Machine (FSM) from a database of attack strings, and traverses this FSM using bytes from the packet payload. So temporal locality in network traffic provides us an opportunity to accelerate this FSM traversal. Our mechanism dynamically identifies redundant bytes in the packet and skips their redundant FSM traversal. We further parallelize our mechanism by performing the redundancy identification concurrently with stages of Snort packet processing. IDS are commonly deployed in commodity processors, and we evaluate our mechanism on an Intel Core i3. Our performance study indicates that the length of the redundant chunk is a key factor in performance. We also observe important performance benefits in deploying our redundancy-aware mechanism in the Snort IDS[32].", "title": "" }, { "docid": "01a86d89b47da1a0d49791e2ea5fa96d", "text": "Prediction of heart attack is an important task in medical science. There are several factors responsible for the heart attack problem. Prediction of the heart attack problem from the different responsible factors is a difficult task. Data mining classification algorithms play a vital role in several real life applications. In this research paper we present a study of various classification techniques including Decision Tree Induction, Bayesian Classification, Support Vector Machines, Rule-based classification, Neural Network Classifier and K-Nearest Neighbor Classifier. There are three important criteria which are always considered for classifiers: accuracy, speed and scalability. Index Terms – Prediction, Classification Diagnosis, Heart Attack, Symptoms.", "title": "" }, { "docid": "441f80a25e7a18760425be5af1ab981d", "text": "This paper proposes efficient algorithms for group sparse optimization with mixed ℓ2,1-regularization, which arises from the reconstruction of group sparse signals in compressive sensing, and the group Lasso problem in statistics and machine learning. It is known that encoding the group information in addition to sparsity will lead to better signal recovery/feature selection. The ℓ2,1-regularization promotes group sparsity, but the resulting problem, due to the mixed-norm structure and possible grouping irregularity, is considered more difficult to solve than the conventional ℓ1-regularized problem. Our approach is based on a variable splitting strategy and the classic alternating direction method (ADM). Two algorithms are presented, one derived from the primal and the other from the dual of the ℓ2,1-regularized problem. 
The convergence of the proposed algorithms is guaranteed by the existing ADM theory. General group configurations such as overlapping groups and incomplete covers can be easily handled by our approach. Computational results show that on random problems the proposed ADM algorithms exhibit good efficiency, and strong stability and robustness.", "title": "" }, { "docid": "c0ee7bd21a1a261a73f7b831c655ca00", "text": "NMDA receptors are preeminent neurotransmitter-gated channels in the CNS, which respond to glutamate in a manner that integrates multiple external and internal cues. They belong to the ionotropic glutamate receptor family and fulfil unique and crucial roles in neuronal development and function. These roles depend on characteristic response kinetics, which reflect the operation of the receptors. Here, we review biologically salient features of the NMDA receptor signal and its mechanistic origins. Knowledge of distinctive NMDA receptor biophysical properties, their structural determinants and physiological roles is necessary to understand the physiological and neurotoxic actions of glutamate and to design effective therapeutics.", "title": "" }, { "docid": "02fb56856d8151e9ad3f620b7a5d5ceb", "text": "BACKGROUND\nSurgical complications represent a significant cause of morbidity and mortality with the rate of major complications after inpatient surgery estimated at 3-17% in industrialised countries. The purpose of this review was to summarise experience with surgical checklist use and efficacy for improving patient safety.\n\n\nMETHODS\nA search of four databases (MEDLINE, CINAHL, EMBASE and the Cochrane Database of Controlled Trials) was conducted from 1 January 2000 to 26 October 2012. Articles describing actual use of the WHO checklist, the Surgical Patient Safety System (SURPASS) checklist, a wrong-site surgery checklist or an anaesthesia equipment checklist were eligible for inclusion (this manuscript summarises all but the anaesthesia equipment checklists, which are described in the Agency for Healthcare Research and Quality publication).\n\n\nRESULTS\nWe included a total of 33 studies. We report a variety of outcomes including avoidance of adverse events, facilitators and barriers to implementation. Checklists have been adopted in a wide variety of settings and represent a promising strategy for improving the culture of patient safety and perioperative care in a wide variety of settings. Surgical checklists were associated with increased detection of potential safety hazards, decreased surgical complications and improved communication among operating staff. Strategies for successful checklist implementation included enlisting institutional leaders as local champions, incorporating staff feedback for checklist adaptation and avoiding redundancies with existing systems for collecting information.\n\n\nCONCLUSIONS\nSurgical checklists represent a relatively simple and promising strategy for addressing surgical patient safety worldwide. Further studies are needed to evaluate to what degree checklists improve clinical outcomes and whether improvements may be more pronounced in particular settings.", "title": "" }, { "docid": "89e91d9c74421124c19ea573eef15b0c", "text": "A cavity-backed triangular-complimentary-split-ring-slot (TCSRS) antenna based on substrate integrated waveguide (SIW) is proposed in this communication. Proposed antenna element is designed and implemented at 28 and 45 GHz for the fifth generation (5G) of wireless communications. 
The principle of the proposed antenna element is investigated first, and then arrays with two and four elements are designed for high-gain operation. Antenna prototypes along with their arrays are fabricated using a standard printed circuit board (PCB) process at both frequencies, 28 and 45 GHz. Measured results show that a 16.67% impedance bandwidth at 28 GHz and a 22.2% impedance bandwidth at 45 GHz are achieved, while maintaining the same substrate height at both frequencies. The measured peak gains of the 2 × 2 antenna array at 30 and 50 GHz are 13.5 and 14.4 dBi, respectively.", "title": "" } ]
scidocsrr
a4e9a4b4d943c69ff6070f5ffd71b0b2
Cooperative navigation in robotic swarms
[ { "docid": "dce8b7c654a8f034f51d651ad3eabb28", "text": "We characterize and improve an existing infrared relative localization/communication module used to find range and bearing between robots in small-scale multi-robot systems. Modifications to the algorithms of the original system are suggested which offer better performance. A mathematical model which accurately describes the system is presented and allows us to predict the performance of modules with augmented sensorial capabilities. Finally, the usefulness of the module is demonstrated in a multi-robot self-localization task using both a realistic robotic simulator and real robots, and the performance is analyzed", "title": "" } ]
[ { "docid": "f7acaf8d00ad974954b1ed551699c0df", "text": "A major challenge in biometrics is performing the test at the client side, where hardware resources are often limited. Deep learning approaches pose a unique challenge: while such architectures dominate the field of face recognition with regard to accuracy, they require elaborate, multi-stage computations. Recently, there has been some work on compressing networks for the purpose of reducing run time and network size. However, it is not clear that these compression methods would work in deep face nets, which are, generally speaking, less redundant than the object recognition networks, i.e., they are already relatively lean. We propose two novel methods for compression: one based on eliminating lowly active channels and the other on coupling pruning with repeated use of already computed elements. Pruning of entire channels is an appealing idea, since it leads to direct saving in run time in almost every reasonable architecture.", "title": "" }, { "docid": "80bfe7bcea0a8db5667f3f5d2c85b16b", "text": "We present a non-photorealistic algorithm for automatically retargeting images for a variety of display devices, while preserving the images' important features and qualities. Image manipulation techniques such as linear resizing and cropping work well for images containing a single important object. However, problems such as degradation of image quality and important information loss occur when these techniques have been automatically applied to images with multiple objects. Our algorithm addresses the case of multiple important objects in an image. We first segment the image, and generate an importance map based on both saliency and face detection. Regions are then resized and repositioned to fit within a specified size based on the importance map. NSF 01416284/0415083", "title": "" }, { "docid": "8557c77501fbdc29a4cd0f161224ca8c", "text": "We present a preliminary analysis of the fundamental viability of meta-learning, revisiting the No Free Lunch (NFL) theorem. The analysis shows that given some simple and very basic assumptions, the NFL theorem is of little relevance to research in Machine Learning. We augment the basic NFL framework to illustrate that the notion of an Ultimate Learning Algorithm is well defined. We show that, although cross-validation still is not a viable way to construct general-purpose learning algorithms, meta-learning offers a natural alternative. We still have to pay for our lunch, but the cost is reasonable: the necessary fundamental assumptions are ones we all make anyway.", "title": "" }, { "docid": "c21e39d4cf8d3346671ae518357c8edb", "text": "The success of deep learning depends on finding an architecture to fit the task. As deep learning has scaled up to more challenging tasks, the architectures have become difficult to design by hand. This paper proposes an automated method, CoDeepNEAT, for optimizing deep learning architectures through evolution. By extending existing neuroevolution methods to topology, components, and hyperparameters, this method achieves results comparable to best human designs in standard benchmarks in object recognition and language modeling. It also supports building a real-world application of automated image captioning on a magazine website. 
Given the anticipated increases in available computing power, evolution of deep networks is promising approach to constructing deep learning applications in the future.", "title": "" }, { "docid": "4a44eb4cacf4427996461b0d5facf0dc", "text": "Cloud Radio Access Networks (Cloud RAN) is an emerging architectural paradigm that attempts to exploit operational efficiencies through centralization of baseband functions, pooling efficiencies for RAN baseband processing, and air-interface performance gains by fast-time-scale multi-cell coordination. In this paper, we aim to derive insights underlying key technologies and tradeoffs that drive Cloud RAN. We highlight principles that explain the impact of Cloud RAN architectures and functional splits on multi-cell coordination gains. We show that the ability to extract pooling gains can differ significantly between RAN functions, depending on whether they are per-user or per-cell functions. We propose a method for elastic scaling of RAN functions that takes into account both real-time and non-real-time needs. We conclude with a proposal for a methodology for operators to evaluate the overall tradeoffs they will face in deciding whether to adopt Cloud RAN.", "title": "" }, { "docid": "c002eab17c87343b5d138b34e3be73f3", "text": "Finding semantically rich and computer-understandable representations for textual dialogues, utterances and words is crucial for dialogue systems (or conversational agents), as their performance mostly depends on understanding the context of conversations. In recent research approaches, responses have been generated utilizing a decoder architecture, given the distributed vector representation (embedding) of the current conversation. In this paper, the utilization of embeddings for answer retrieval is explored by using Locality-Sensitive Hashing Forest (LSH Forest), an Approximate Nearest Neighbor (ANN) model, to find similar conversations in a corpus and rank possible candidates. Experimental results on the well-known Ubuntu Corpus (in English) and a customer service chat dataset (in Dutch) show that, in combination with a candidate selection method, retrieval-based approaches outperform generative ones and reveal promising future research directions towards the usability of such a system.", "title": "" }, { "docid": "5c04a1b9179b883e021daab86fd76763", "text": "Obsessive compulsive disorder (OCD) and attention deficit hyperactivity disorder (ADHD) are two of the most common neuropsychiatric diseases in paediatric populations. The high comorbidity of ADHD and OCD with each other, especially of ADHD in paediatric OCD, is well described. OCD and ADHD often follow a chronic course with persistent rates of at least 40-50 %. Family studies showed high heritability in ADHD and OCD, and some genetic findings showed similar variants for both disorders of the same pathogenetic mechanisms, whereas other genetic findings may differentiate between ADHD and OCD. Neuropsychological and neuroimaging studies suggest that partly similar executive functions are affected in both disorders. The deficits in the corresponding brain networks may be responsible for the perseverative, compulsive symptoms in OCD but also for the disinhibited and impulsive symptoms characterizing ADHD. 
This article reviews the current literature of neuroimaging, neurochemical circuitry, neuropsychological and genetic findings considering similarities as well as differences between OCD and ADHD.", "title": "" }, { "docid": "cbf5019b1363b20c15c284d6d76f3281", "text": "Matching articulated shapes represented by voxel-sets reduces to maximal sub-graph isomorphism when each set is described by a weighted graph. Spectral graph theory can be used to map these graphs onto lower dimensional spaces and match shapes by aligning their embeddings in virtue of their invariance to change of pose. Classical graph isomorphism schemes relying on the ordering of the eigenvalues to align the eigenspaces fail when handling large data-sets or noisy data. We derive a new formulation that finds the best alignment between two congruent K-dimensional sets of points by selecting the best subset of eigenfunctions of the Laplacian matrix. The selection is done by matching eigenfunction signatures built with histograms, and the retained set provides a smart initialization for the alignment problem with a considerable impact on the overall performance. Dense shape matching casted into graph matching reduces then, to point registration of embeddings under orthogonal transformations; the registration is solved using the framework of unsupervised clustering and the EM algorithm. Maximal subset matching of non identical shapes is handled by defining an appropriate outlier class. Experimental results on challenging examples show how the algorithm naturally treats changes of topology, shape variations and different sampling densities.", "title": "" }, { "docid": "841d18c8ea3633e39b8e561c252c2a10", "text": "The annual incidence of insider attacks continues to grow, and there are indications this trend will continue. While there are a number of existing tools that can accurately identify known attacks, these are reactive (as opposed to proactive) in their enforcement, and may be eluded by previously unseen, adversarial behaviors. This paper proposes an approach that combines Structural Anomaly Detection (SA) from social and information networks and Psychological Profiling (PP) of individuals. SA uses technologies including graph analysis, dynamic tracking, and machine learning to detect structural anomalies in large-scale information network data, while PP constructs dynamic psychological profiles from behavioral patterns. Threats are finally identified through a fusion and ranking of outcomes from SA and PP. The proposed approach is illustrated by applying it to a large data set from a massively multi-player online game, World of War craft (WoW). The data set contains behavior traces from over 350,000 characters observed over a period of 6 months. SA is used to predict if and when characters quit their guild (a player association with similarities to a club or workgroup in non-gaming contexts), possibly causing damage to these social groups. PP serves to estimate the five-factor personality model for all characters. Both threads show good results on the gaming data set and thus validate the proposed approach.", "title": "" }, { "docid": "ad49ca31e92eaeb44cbb24206e10c9ee", "text": "PESQ, Perceptual Evaluation of Speech Quality [5], and POLQA, Perceptual Objective Listening Quality Assessment [1] , are standards comprising a test methodology for automated assessment of voice quality of speech as experienced by human beings. 
The predictions of those objective measures should come as close as possible to the subjective quality scores obtained in subjective listening tests; usually, a Mean Opinion Score (MOS) is predicted. Wavenet [6] is a deep neural network originally developed as a deep generative model of raw audio waveforms. The Wavenet architecture is based on dilated causal convolutions, which exhibit very large receptive fields. In this short paper we suggest using the Wavenet architecture, in particular its large receptive field, in order to mimic the PESQ algorithm. By doing so we can use it as a differentiable loss function for speech enhancement. 1 Problem formulation and related work In statistics, the Mean Squared Error (MSE) or Peak Signal to Noise Ratio (PSNR) of an estimator are widely used objective measures and are good distortion indicators (loss functions) between the estimator's output and the quantity that we want to estimate. Those loss functions are used for many reconstruction tasks. However, PSNR and MSE do not have good correlation with reliable subjective methods such as the Mean Opinion Score (MOS) obtained from expert listeners. A more suitable speech quality assessment can be achieved by using tests that aim to achieve high correlation with MOS tests, such as PEAQ or POLQA. However, those algorithms are hard to represent as a differentiable function such as MSE; moreover, as opposed to MSE that measures the average", "title": "" }, { "docid": "1131fcb3035af23fda3f1c06d18c7d6f", "text": "Goals are central to current treatments of work motivation, and goal commitment is a critical construct in understanding the relationship between goals and task performance. Despite this importance, there is confusion about the role of goal commitment and only recently has this key construct received the empirical attention it warrants. This meta-analysis, based on 83 independent samples, updates the goal commitment literature by summarizing the accumulated evidence on the antecedents and consequences of goal commitment. Using this aggregate empirical evidence, the role of goal commitment in the goal-setting process is clarified and key areas for future research are identified.", "title": "" }, { "docid": "15f7718c561aa3add15e43f1319d4bda", "text": "While there have been significant advances in detecting emotions from speech and image recognition, emotion detection on text is still under-explored and remains an active research field. This paper introduces a corpus for text-based emotion detection on multiparty dialogue as well as deep neural models that outperform the existing approaches for document classification. We first present a new corpus that provides annotation of seven emotions on consecutive utterances in dialogues extracted from the show, Friends. We then suggest four types of sequence-based convolutional neural network models with attention that leverage the sequence information encapsulated in dialogue. Our best model shows the accuracies of 37.9% and 54% for fine- and coarse-grained emotions, respectively. Given the difficulty of this task, this is promising.", "title": "" }, { "docid": "e6548454f46962b5ce4c5d4298deb8e7", "text": "The use of SVM (Support Vector Machines) in detecting e-mail as spam or nonspam by incorporating feature selection using GA (Genetic Algorithm) is investigated. A GA approach is adopted to select features that are most favorable to the SVM classifier, which is named GA-SVM.
Scaling factor is exploited to measure the relevant coefficients of feature to the classification task and is estimated by GA. Heavy-bias operator is introduced in GA to promote sparse in the scaling factors of features. So, feature selection is performed by eliminating irrelevant features whose scaling factor is zero. The experiment results on UCI Spam database show that comparing with original SVM classifier, the number of support vector decreases while better classification results are achieved based on GA-SVM.", "title": "" }, { "docid": "e6f5d30da2203b57acc87d3207e451d0", "text": "Personalized recommendation systems can help people to find interesting things and they are widely used with the development of electronic commerce. Many recommendation systems employ the collaborative filtering technology, which has been proved to be one of the most successful techniques in recommender systems in recent years. With the gradual increase of customers and products in electronic commerce systems, the time consuming nearest neighbor collaborative filtering search of the target customer in the total customer space resulted in the failure of ensuring the real time requirement of recommender system. At the same time, it suffers from its poor quality when the number of the records in the user database increases. Sparsity of source data set is the major reason causing the poor quality. To solve the problems of scalability and sparsity in the collaborative filtering, this paper proposed a personalized recommendation approach joins the user clustering technology and item clustering technology. Users are clustered based on users’ ratings on items, and each users cluster has a cluster center. Based on the similarity between target user and cluster centers, the nearest neighbors of target user can be found and smooth the prediction where necessary. Then, the proposed approach utilizes the item clustering collaborative filtering to produce the recommendations. The recommendation joining user clustering and item clustering collaborative filtering is more scalable and more accurate than the traditional one.", "title": "" }, { "docid": "0074ae2f7753c0ba51657dfb8ee76ccb", "text": "Cooley and Tukey have disclosed a procedure for synthesizing and analyzing Fourier series for discrete periodic complex functions. For functions of period <i>N</i>, where <i>N</i> is a power of 2, computation times are proportional to <i>N</i> log<sub>2</sub> <i>N</i> as expressed in Eq. (0).", "title": "" }, { "docid": "c789d4e880f48735e5d19a41f694fe50", "text": "The no-fit polygon is a construct that can be used between pairs of shapes for fast and efficient handling of geometry within irregular two-dimensional stock cutting problems. Previously, the no-fit polygon (NFP) has not been widely applied because of the perception that it is difficult to implement and because of the lack of generic approaches that can cope with all problem cases without specific case-by-case handling. This paper introduces a robust orbital method for the creation of no-fit polygons which does not suffer from the typical problem cases found in the other approaches from the literature. Furthermore, the algorithm only involves two simple geometric stages so it is easily understood and implemented. 
We demonstrate how the approach handles known degenerate cases such as holes, interlocking concavities and jigsaw type pieces and we give generation times for 32 irregular packing benchmark problems from the literature, including real world datasets, to allow further comparison with existing and future approaches.", "title": "" }, { "docid": "524ed6f753bb059130a6076323e8aa63", "text": "Deep generative models provide a powerful and flexible means to learn complex distributions over data by incorporating neural networks into latent-variable models. Variational approaches to training such models introduce a probabilistic encoder that casts data, typically unsupervised, into an entangled representation space. While unsupervised learning is often desirable, sometimes even necessary, when we lack prior knowledge about what to represent, being able to incorporate domain knowledge in characterising certain aspects of variation in the data can often help learn better disentangled representations. Here, we introduce a new formulation of semi-supervised learning in variational autoencoders that allows precisely this. It permits flexible specification of probabilistic encoders as directed graphical models via a stochastic computation graph, containing both continuous and discrete latent variables, with conditional distributions parametrised by neural networks. We demonstrate how the provision of dependency structures, along with a few labelled examples indicating plausible values for some components of the latent space, can help quickly learn disentangled representations. We then evaluate its ability to do so, both qualitatively by exploring its generative capacity, and quantitatively by using the disentangled representation to perform classification, on a variety of models and datasets.", "title": "" }, { "docid": "c84d41e54b12cca847135dfc2e9e13f8", "text": "PURPOSE\nBaseline restraint prevalence for surgical step-down unit was 5.08%, and for surgical intensive care unit, it was 25.93%, greater than the National Database of Nursing Quality Indicators (NDNQI) mean. Project goal was sustained restraint reduction below the NDNQI mean and maintaining patient safety.\n\n\nBACKGROUND/RATIONALE\nSoft wrist restraints are utilized for falls reduction and preventing device removal but are not universally effective and may put patients at risk of injury. Decreasing use of restrictive devices enhances patient safety and decreases risk of injury.\n\n\nDESCRIPTION\nPhase 1 consisted of advanced practice nurse-facilitated restraint rounds on each restrained patient including multidisciplinary assessment and critical thinking with bedside clinicians including reevaluation for treatable causes of agitation and restraint indications. Phase 2 evaluated less restrictive mitts, padded belts, and elbow splint devices. Following a 4-month trial, phase 3 expanded the restraint initiative including critical care requiring education and collaboration among advanced practice nurses, physician team members, and nurse champions.\n\n\nEVALUATION AND OUTCOMES\nPhase 1 decreased surgical step-down unit restraint prevalence from 5.08% to 3.57%. Phase 2 decreased restraint prevalence from 3.57% to 1.67%, less than the NDNQI mean. 
Phase 3 expansion in surgical intensive care units resulted in wrist restraint prevalence from 18.19% to 7.12% within the first year, maintained less than the NDNQI benchmarks while preserving patient safety.\n\n\nINTERPRETATION/CONCLUSION\nThe initiative produced sustained reduction in acute/critical care well below the NDNQI mean without corresponding increase in patient medical device removal.\n\n\nIMPLICATIONS\nBy managing causes of agitation, need for restraints is decreased, protecting patients from injury and increasing patient satisfaction. Follow-up research may explore patient experiences with and without restrictive device use.", "title": "" }, { "docid": "e5bea734149b69a05455c5fec2d802e3", "text": "This article introduces a collection of essays on continuity and discontinuity in cognitive development. In his lead essay, J. Kagan (2008) argues that limitations in past research (e.g., on number concepts, physical solidarity, and object permanence) render conclusions about continuity premature. Commentaries respectively (1) argue that longitudinal contexts are essential for interpreting developmental data, (2) illustrate the value of converging measures, (3) identify qualitative change via dynamical systems theory, (4) redirect the focus from states to process, and (5) review epistemological premises of alternative research traditions. Following an overview of the essays, this introductory article discusses how the search for developmental structures, continuity, and process differs between mechanistic-contextualist and organismic-contextualist metatheoretical frameworks, and closes by highlighting continuities in Kagan's scholarship over the past half century.", "title": "" }, { "docid": "d6d0069b9903860bef39f812471d2946", "text": "Internet content has become one of the most important resources of information. Much of this information is in the form of natural language text and one of the important components of natural language text is named entities. So automatic recognition and classification of named entities has attracted researchers for many years. Named entities are mentioned in different textual forms in different documents. Also, the same textual mention may refer to different named entities. This problem is well known in NLP as a disambiguation problem. Named Entity Disambiguation (NED) refers to the task of mapping different named entity mentions in running text to their correct interpretations in a specific knowledge base (KB). NED is important for many applications like search engines and software agents that aim to aggregate information on real world entities from sources such as the Web. The main goal of this research is to develop new methods for named entity disambiguation, emphasising the importance of interdependency of named entity candidates of different textual mentions in the document. The thesis focuses on two connected problems related to disambiguation. The first is Candidates Generation, the process of finding a small set of named entity candidate entries in the knowledge base for a specific textual mention, where this set contains the correct entry in the knowledge base. The second problem is Collective Disambiguation, where all named entity textual mentions in the document are disambiguated jointly, using interdependence and semantic relations between the different NE candidates of different textual mentions. Wikipedia is used as a reference knowledge base in this research. 
An information retrieval framework is used to generate the named entity candidates for a textual mention. A novel document similarity function (NEBSim) based on NE co-occurrence", "title": "" } ]
scidocsrr
1e7c7fea5e738b8eba77a15950a400a4
A Data Mining and CIDF Based Approach for Detecting Novel and Distributed Intrusions
[ { "docid": "4bce6150e9bc23716a19a0d7c02640c0", "text": "A Data Mining Framework for Constructing Features and Models for Intrusion Detection Systems", "title": "" } ]
[ { "docid": "fca76468b4d72fd5ef7c85b5d56548b9", "text": "Cloud providers, like Amazon, offer their data centers' computational and storage capacities for lease to paying customers. High electricity consumption, associated with running a data center, not only reflects on its carbon footprint, but also increases the costs of running the data center itself. This paper addresses the problem of maximizing the revenues of Cloud providers by trimming down their electricity costs. As a solution allocation policies which are based on the dynamic powering servers on and off are introduced and evaluated. The policies aim at satisfying the conflicting goals of maximizing the users' experience while minimizing the amount of consumed electricity. The results of numerical experiments and simulations are described, showing that the proposed scheme performs well under different traffic conditions.", "title": "" }, { "docid": "0b6f3498022abdf0407221faba72dcf1", "text": "A broadband coplanar waveguide (CPW) to coplanar strip (CPS) transmission line transition directly integrated with an RF microelectromechanical systems reconfigurable multiband antenna is presented in this paper. This transition design exhibits very good performance up to 55 GHz, and uses a minimum number of dissimilar transmission line sections and wire bonds, achieving a low-loss and low-cost balancing solution to feed planar antenna designs. The transition design methodology that was followed is described and measurement results are presented.", "title": "" }, { "docid": "b206560e0c9f3e59c8b9a8bec6f12462", "text": "A symmetrical microstrip directional coupler design using the synthesis technique without prior knowledge of the physical geometry of the directional coupler is analytically given. The introduced design method requires only the information of the port impedances, the coupling level, and the operational frequency. The analytical results are first validated by using a planar electromagnetic simulation tool and then experimentally verified. The error between the experimental and analytical results is found to be within 3% for the worst case. The design charts that give all the physical dimensions, including the length of the directional coupler versus frequency and different coupling levels, are given for alumina, Teflon, RO4003, FR4, and RF-60, which are widely used in microwave applications. The complete design of symmetrical two-line microstrip directional couplers can be obtained for the first time using our results in this paper.", "title": "" }, { "docid": "7fe8026b233c41be2ad39da1c9ac2fca", "text": "This paper presents a new efficient technique for large-scale structure from motion from unordered data sets. We avoid costly computation of all pairwise matches and geometries by sampling pairs of images using the pairwise similarity scores based on the detected occurrences of visual words leading to a significant speedup. Furthermore, atomic 3D models reconstructed from camera triplets are used as the seeds which form the final large-scale 3D model when merged together. Using three views instead of two allows us to reveal most of the outliers of pairwise geometries at an early stage of the process hindering them from derogating the quality of the resulting 3D structure at later stages. The accuracy of the proposed technique is shown on a set of 64 images where the result of the exhaustive technique is known. 
Scalability is demonstrated on a landmark reconstruction from hundreds of images.", "title": "" }, { "docid": "720778ca4d6d8eb0fa78eecb1ebbb527", "text": "Address spoofing attacks like ARP spoofing and DDoS attacks are mostly launched in a networking environment to degrade the performance. These attacks sometimes break down the network services before the administrator comes to know about the attack condition. Software Defined Networking (SDN) has emerged as a novel network architecture in which date plane is isolated from the control plane. Control plane is implemented at a central device called controller. But, SDN paradigm is not commonly used due to some constraints like budget, limited skills to control SDN, the flexibility of traditional protocols. To get SDN benefits in a traditional network, a limited number of SDN devices can be deployed among legacy devices. This technique is called hybrid SDN. In this paper, we propose a new approach to automatically detect the attack condition and mitigate that attack in hybrid SDN. We represent the network topology in the form of a graph. A graph based traversal mechanism is adopted to indicate the location of the attacker. Simulation results show that our approach enhances the network efficiency and improves the network security Keywords—Communication system security; Network Security; ARP Spoofing Introduction", "title": "" }, { "docid": "fe640d50c7f18548c5103a9a7744a1a7", "text": "Polynomial filtering can provide a highly effective means of computing all eigenvalues of a real symmetric (or complex Hermitian) matrix that are located in a given interval, anywhere in the spectrum. This paper describes a technique for tackling this problem by combining a ThickRestart version of the Lanczos algorithm with deflation (‘locking’) and a new type of polynomial filters obtained from a least-squares technique. The resulting algorithm can be utilized in a ‘spectrumslicing’ approach whereby a very large number of eigenvalues and associated eigenvectors of the matrix are computed by extracting eigenpairs located in different sub-intervals independently from one another.", "title": "" }, { "docid": "b765a75438d9abd381038e1b84128004", "text": "Implementing a complex spelling program using a steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) remains a challenge due to difficulties in stimulus presentation and target identification. This study aims to explore the feasibility of mixed frequency and phase coding in building a high-speed SSVEP speller with a computer monitor. A frequency and phase approximation approach was developed to eliminate the limitation of the number of targets caused by the monitor refresh rate, resulting in a speller comprising 32 flickers specified by eight frequencies (8-15 Hz with a 1 Hz interval) and four phases (0°, 90°, 180°, and 270°). A multi-channel approach incorporating Canonical Correlation Analysis (CCA) and SSVEP training data was proposed for target identification. In a simulated online experiment, at a spelling rate of 40 characters per minute, the system obtained an averaged information transfer rate (ITR) of 166.91 bits/min across 13 subjects with a maximum individual ITR of 192.26 bits/min, the highest ITR ever reported in electroencephalogram (EEG)-based BCIs. 
The results of this study demonstrate great potential of a high-speed SSVEP-based BCI in real-life applications.", "title": "" }, { "docid": "9d4c35c960367e856212b9ce203e4c71", "text": "Recent progress in using long short-term memory (LSTM) for image captioning has motivated the exploration of their applications for video captioning. By taking a video as a sequence of features, an LSTM model is trained on video-sentence pairs and learns to associate a video to a sentence. However, most existing methods compress an entire video shot or frame into a static representation, without considering attention mechanism which allows for selecting salient features. Furthermore, existing approaches usually model the translating error, but ignore the correlations between sentence semantics and visual content. To tackle these issues, we propose a novel end-to-end framework named aLSTMs, an attention-based LSTM model with semantic consistency, to transfer videos to natural sentences. This framework integrates attention mechanism with LSTM to capture salient structures of video, and explores the correlation between multimodal representations (i.e., words and visual content) for generating sentences with rich semantic content. Specifically, we first propose an attention mechanism that uses the dynamic weighted sum of local two-dimensional convolutional neural network representations. Then, an LSTM decoder takes these visual features at time <inline-formula><tex-math notation=\"LaTeX\">$t$</tex-math></inline-formula> and the word-embedding feature at time <inline-formula><tex-math notation=\"LaTeX\">$t$</tex-math></inline-formula><inline-formula><tex-math notation=\"LaTeX\"> $-$</tex-math></inline-formula>1 to generate important words. Finally, we use multimodal embedding to map the visual and sentence features into a joint space to guarantee the semantic consistence of the sentence description and the video visual content. Experiments on the benchmark datasets demonstrate that our method using single feature can achieve competitive or even better results than the state-of-the-art baselines for video captioning in both BLEU and METEOR.", "title": "" }, { "docid": "7227e3aabc457c0949676f3b32e42683", "text": "Despite the geographically situated nature of most sharing economy tasks, little attention has been paid to the role that geography plays in the sharing economy. In this article, we help to address this gap in the literature by examining how four key principles from human geography—distance decay, structured variation in population density, mental maps, and “the Big Sort” (spatial homophily)—manifest in sharing economy platforms. We find that these principles interact with platform design decisions to create systemic biases in which the sharing economy is significantly more effective in dense, high socioeconomic status (SES) areas than in low-SES areas and the suburbs. We further show that these results are robust across two sharing economy platforms: UberX and TaskRabbit. In addition to highlighting systemic sharing economy biases, this article more fundamentally demonstrates the importance of considering well-known geographic principles when designing and studying sharing economy platforms.", "title": "" }, { "docid": "12b2cd11f2f99412ec59a96bdbe67a2a", "text": "We investigate opportunities for exploiting Artificial Intelligence (AI) techniques for enhancing capabilities of relational databases. 
In particular, we explore applications of Natural Language Processing (NLP) techniques to endow relational databases with capabilities that were very hard to realize in practice. We apply an unsupervised neural-network based NLP idea, Distributed Representation via Word Embedding, to extract latent information from a relational table. The word embedding model is based on meaningful textual view of a relational database and captures inter-/intra-attribute relationships between database tokens. For each database token, the model includes a vector that encodes these contextual semantic relationships. These vectors enable processing a new class of SQL-based business intelligence queries called cognitive intelligence (CI) queries that use the generated vectors to analyze contextual semantic relationships between database tokens. The cognitive capabilities enable complex queries such as semantic matching, reasoning queries such as analogies, predictive queries using entities not present in a database, and using knowledge from external sources.", "title": "" }, { "docid": "ce3a528ebd6a6cfed1230c466fd2bd60", "text": "This paper presents the design of a long-term evolution (LTE) antenna and its integration on the 3D surface of the mounting compartment of an automotive roof-top antenna, using molded interconnect device (MID) technology. In the first step the design of the planar LTE antenna is shown. This antenna provides an input matching better than 10 dB in the desired frequency band and exhibits an omnidirectional radiation characteristic in the horizontal plane. Subsequently, this antenna is bent on the surface of a roof antenna housing. The effects of geometrical mapping of the planar antenna structure to the 3D surface in terms of input matching and radiation characteristics are analyzed. Based on these findings, a conformal and optimized two antenna system is introduced and discussed. A prototype realized by MID laser direct structuring (LDS) is presented and the measured antenna performance is compared to simulation results. An excellent agreement between measured and simulated results is observed. Finally, the prototype meets the specification requirements.", "title": "" }, { "docid": "49cda71b86a3a6b374616a9013816b38", "text": "Discriminative localization is essential for fine-grained image classification task, which devotes to recognizing hundreds of subcategories in the same basic-level category. Reflecting on discriminative regions of objects, key differences among different subcategories are subtle and local. Existing methods generally adopt a two-stage learning framework: The first stage is to localize the discriminative regions of objects, and the second is to encode the discriminative features for training classifiers. However, these methods generally have two limitations: (1) Separation of the two-stage learning is time-consuming. (2) Dependence on object and parts annotations for discriminative localization learning leads to heavily labor-consuming labeling. It is highly challenging to address these two important limitations simultaneously. Existing methods only focus on one of them. Therefore, this paper proposes the discriminative localization approach via saliency-guided Faster R-CNN to address the above two limitations at the same time, and our main novelties and advantages are: (1) End-to-end network based on Faster R-CNN is designed to simultaneously localize discriminative regions and encode discriminative features, which accelerates classification speed. 
(2) Saliency-guided localization learning is proposed to localize the discriminative region automatically, avoiding labor-consuming labeling. Both are jointly employed to simultaneously accelerate classification speed and eliminate dependence on object and parts annotations. Comparing with the state-of-the-art methods on the widely-used CUB-200-2011 dataset, our approach achieves both the best classification accuracy and efficiency.", "title": "" }, { "docid": "0c4a9ee404cec4176e9d0f41c6d73b15", "text": "A novel envelope detector structure is proposed in this paper that overcomes the traditional trade-off required in these circuits, improving both the tracking and keeping of the signal. The method relies on holding the signal by two capacitors, discharging one when the other is in hold mode and employing the held signals to form the output. Simulation results show a saving greater than 60% of the capacitor area for the same ripple (0.3%) and a release time constant (0.4¿s) much smaller than that obtained by the conventional circuits.", "title": "" }, { "docid": "aaabe81401e33f7e2bb48dd6d5970f9b", "text": "Brain tumor is the most life undermining sickness and its recognition is the most challenging task for radio logistics by manual detection due to varieties in size, shape and location and sort of tumor. So, detection ought to be quick and precise and can be obtained by automated segmentation methods on MR images. In this paper, neutrosophic sets based segmentation is performed to detect the tumor. MRI is an intense apparatus over CT to analyze the interior segments of the body and the tumor. Tumor is detected and true, false and indeterminacy values of tumor are determined by this technique and the proposed method produce the beholden results.", "title": "" }, { "docid": "3a7d06192cf5e9921fe3f33839ee415b", "text": "Abstract Rationale. Modafinil, a novel wake-promoting agent, has been shown to have a similar clinical profile to that of conventional stimulants such as methylphenidate. We were therefore interested in assessing whether modafinil, with its unique pharmacological mode of action, might offer similar potential as a cognitive enhancer, without the side effects commonly experienced with amphetamine-like drugs. Objectives. The main aim of this study was to evaluate the cognitive enhancing potential of this novel agent using a comprehensive battery of neuropsychological tests. Methods. Sixty healthy young adult male volunteers received either a single oral dose of placebo, or 100 mg or 200 mg modafinil prior to performing a variety of tasks designed to test memory and attention. A randomised double-blind, between-subjects design was used. Results. Modafinil significantly enhanced performance on tests of digit span, visual pattern recognition memory, spatial planning and stop-signal reaction time. These performance improvements were complemented by a slowing in latency on three tests: delayed matching to sample, a decision-making task and the spatial planning task. Subjects reported feeling more alert, attentive and energetic on drug. The effects were not clearly dose dependent, except for those seen with the stop-signal paradigm. In contrast to previous findings with methylphenidate, there were no significant effects of drug on spatial memory span, spatial working memory, rapid visual information processing or attentional set-shifting. Additionally, no effects on paired associates learning were identified. Conclusions. 
These data indicate that modafinil selectively improves neuropsychological task performance. This improvement may be attributable to an enhanced ability to inhibit pre-potent responses. This effect appears to reduce impulsive responding, suggesting that modafinil may be of benefit in the treatment of attention deficit hyperactivity disorder.", "title": "" }, { "docid": "42900568c756b84e084323b258aa94b0", "text": "A lot of progress has been made to solve the depth estimation problem in stereo vision. Though, a very satisfactory performance is observed by utilizing the deep learning in supervised manner for depth estimation. This approach needs huge amount of ground truth training data as well as depth maps which is very laborious to prepare and many times it is not available in real scenario. Thus, the unsupervised depth estimation is the recent trend by utilizing the binocular stereo images to get rid of depth map ground truth. In unsupervised depth computation, the disparity images are generated by training the CNN with an image reconstruction loss based on the epipolar geometry constraints. The effective way of using CNN as well as investigating the better losses for the said problem needs to be addressed. In this paper, a dual CNN based model is presented for unsupervised depth estimation with 6 losses (DNM6) with individual CNN for each view to generate the corresponding disparity map. The proposed dual CNN model is also extended with 12 losses (DNM12) by utilizing the cross disparities. The presented DNM6 and DNM12 models are experimented over KITTI driving and Cityscapes urban database and compared with the recent state-of-the-art result of unsupervised depth estimation. 1", "title": "" }, { "docid": "70ba0f4938630e07d9b145216a01177a", "text": "For some decades radiation therapy has been proved successful in cancer treatment. It is the major task of clinical radiation treatment planning to realise on the one hand a high level dose of radiation in the cancer tissue in order to obtain maximum tumour control. On the other hand it is obvious that it is absolutely necessary to keep in the tissue outside the tumour, particularly in organs at risk, the unavoidable radiation as low as possible. No doubt, these two objectives of treatment planning – high level dose in the tumour, low radiation outside the tumour – have a basically contradictory nature. Therefore, it is no surprise that inverse mathematical models with dose distribution bounds tend to be infeasible in most cases. Thus, there is need for approximations compromising between overdosing the organs at risk and underdosing the target volume. Differing from the currently used time consuming iterative approach, which measures deviation from an ideal (non-achievable) treatment plan using recursively trial-and-error weights for the organs of interest, we go a new way trying to avoid a priori weight choices and consider the treatment planning problem as a multiple objective linear programming problem: with each organ of interest, target tissue as well as organs at risk, we associate an objective function measuring the maximal deviation from the prescribed doses. We build up a data base of relatively few efficient solutions representing and approximating the variety of Pareto solutions of the multiple objective linear programming problem. This data base can be easily scanned by physicians looking for an adequate treatment plan with the aid of an appropriate online tool. 
1 The inverse radiation treatment problem – an introduction. Every year, in Germany about 450.000 individuals are diagnosed with life-threatening forms of cancer. About 60% of these patients are treated with radiation; half of them are considered curable because their tumours are localised and susceptible to radiation. Nevertheless, despite the use of the best radiation therapy methods available, one third of these “curable” patients – nearly 40.000 people each year – die with primary tumours still active at the original site. Why does this occur? Experts in the field have looked at the reasons for these failures and have concluded that radiation therapy planning – in particular in complicated anatomical situations – is often inadequate, providing either too little radiation to the tumour or too much radiation to nearby healthy tissue. Effective radiation therapy planning for treating malignant tumours is always a tightrope walk between an ineffective underdose of tumour tissue – the target volume – and a dangerous overdose of organs at risk that are relevant for maintaining the quality of life of the cured patient. Therefore, it is the challenging task of a radiation therapy planner to realise a certain high dose level conformal to the shape of the target volume in order to have a good prognosis for tumour control and to avoid overdose in relevant healthy tissue nearby. Part of this challenge is the computer-aided representation of the relevant parts of the body. Modern scanning methods like computer tomography (CT), magnetic resonance tomography", "title": "" }, { "docid": "1ef8c0e8fe403028ee77d788d4b5e8f2", "text": "This document provides a first \"how-to\" guide for modelling the thermo-mechanical behaviour of an HDI board submitted to a reflow process using finite element modelling (FEM). The parametric model constructed here includes microvias, buried vias and plated through holes (PTH). The HDI board is composed of a central core and two microvia layers on each external side. Firstly, equivalent material properties of the complete stack-up were determined according to three pre-preg materials (mat. A as ref., mat. B and mat. C) and core thickness (0.9, 1.1 and 1.3 mm as ref.) below and above their glass transition temperature, Tg. Several simulations were then done on the reference board in order to evaluate the impact of the temperature shock (above Tg) on the ageing of the HDI board. Finally, we focused on the behaviour of the buried via, which is a highly critical part of the HDI board. The objective was to optimize, for the different assembly processes (leaded and lead free), the choice of the FR4 material types as well as the design rules (core thickness and buried via diameter, for example). Discussions on results in link with experimental data and calculation approximations are done in this study in order to better understand the failure roots and to determine ways of improving the simulation methodology.", "title": "" }, { "docid": "ab3d4c0562847c6a4ebfe4ab398d8e74", "text": "Self-compassion refers to a kind and nurturing attitude toward oneself during situations that threaten one’s adequacy, while recognizing that being imperfect is part of being human. Although growing evidence indicates that self-compassion is related to a wide range of desirable psychological outcomes, little research has explored self-compassion in older adults.
The present study investigated the relationships between self-compassion and theoretically based indicators of psychological adjustment, as well as the moderating effect of self-compassion on self-rated health. A sample of 121 older adults recruited from a community library and a senior day center completed self-report measures of self-compassion, self-esteem, psychological well-being, anxiety, and depression. Results indicated that self-compassion is positively correlated with age, self-compassion is positively and uniquely related to psychological well-being, and self-compassion moderates the association between self-rated health and depression. These results suggest that interventions designed to increase self-compassion in older adults may be a fruitful direction for future applied research.", "title": "" } ]
scidocsrr
9ee73d67a1c335a91d5ca95489a1fa6d
Context Representation for Named Entity Linking
[ { "docid": "40ec8caea52ba75a6ad1e100fb08e89a", "text": "Disambiguating concepts and entities in a context sensitive way is a fundamental problem in natural language processing. The comprehensiveness of Wikipedia has made the online encyclopedia an increasingly popular target for disambiguation. Disambiguation to Wikipedia is similar to a traditional Word Sense Disambiguation task, but distinct in that the Wikipedia link structure provides additional information about which disambiguations are compatible. In this work we analyze approaches that utilize this information to arrive at coherent sets of disambiguations for a given document (which we call “global” approaches), and compare them to more traditional (local) approaches. We show that previous approaches for global disambiguation can be improved, but even then the local disambiguation provides a baseline which is very hard to beat.", "title": "" }, { "docid": "44582f087f9bb39d6e542ff7b600d1c7", "text": "We propose a new deterministic approach to coreference resolution that combines the global information and precise features of modern machine-learning models with the transparency and modularity of deterministic, rule-based systems. Our sieve architecture applies a battery of deterministic coreference models one at a time from highest to lowest precision, where each model builds on the previous model's cluster output. The two stages of our sieve-based architecture, a mention detection stage that heavily favors recall, followed by coreference sieves that are precision-oriented, offer a powerful way to achieve both high precision and high recall. Further, our approach makes use of global information through an entity-centric model that encourages the sharing of features across all mentions that point to the same real-world entity. Despite its simplicity, our approach gives state-of-the-art performance on several corpora and genres, and has also been incorporated into hybrid state-of-the-art coreference systems for Chinese and Arabic. Our system thus offers a new paradigm for combining knowledge in rule-based systems that has implications throughout computational linguistics.", "title": "" }, { "docid": "85b77b88c2a06603267b770dbad8ec73", "text": "Many errors in coreference resolution come from semantic mismatches due to inadequate world knowledge. Errors in named-entity linking (NEL), on the other hand, are often caused by superficial modeling of entity context. This paper demonstrates that these two tasks are complementary. We introduce NECO, a new model for named entity linking and coreference resolution, which solves both problems jointly, reducing the errors made on each. NECO extends the Stanford deterministic coreference system by automatically linking mentions to Wikipedia and introducing new NEL-informed mention-merging sieves. Linking improves mention-detection and enables new semantic attributes to be incorporated from Freebase, while coreference provides better context modeling by propagating named-entity links within mention clusters. Experiments show consistent improvements across a number of datasets and experimental conditions, including over 11% reduction in MUC coreference error and nearly 21% reduction in F1 NEL error on ACE 2004 newswire data.", "title": "" }, { "docid": "904db9e8b0deb5027d67bffbd345b05f", "text": "Entity Recognition (ER) is a key component of relation extraction systems and many other natural-language processing applications. 
Unfortunately, most ER systems are restricted to produce labels from a small set of entity classes, e.g., person, organization, location or miscellaneous. In order to intelligently understand text and extract a wide range of information, it is useful to more precisely determine the semantic classes of entities mentioned in unstructured text. This paper defines a fine-grained set of 112 tags, formulates the tagging problem as multi-class, multi-label classification, describes an unsupervised method for collecting training data, and presents the FIGER implementation. Experiments show that the system accurately predicts the tags for entities. Moreover, it provides useful information for a relation extraction system, increasing the F1 score by 93%. We make FIGER and its data available as a resource for future work.", "title": "" } ]
[ { "docid": "3c22c94c9ab99727840c2ca00c66c0f3", "text": "The impact of numerous distributed generators (DGs) coupled with the implementation of virtual inertia on the transient stability of power systems has been studied extensively. Time-domain simulation is the most accurate and reliable approach to evaluate the dynamic behavior of power systems. However, the computational efficiency is restricted by their multi-time-scale property due to the combination of various DGs and synchronous generators. This paper presents a novel projective integration method (PIM) for the efficient transient stability simulation of power systems with high DG penetration. One procedure of the proposed PIM is decomposed into two stages, which adopt mixed explicit-implicit integration methods to achieve both efficiency and numerical stability. Moreover, the stability of the PIM is not affected by its parameter, which is related to the step size. Based on this property, an adaptive parameter scheme is developed based on error estimation to fit the time constants of the system dynamics and further increase the simulation speed. The presented approach is several times faster than the conventional integration methods with a similar level of accuracy. The proposed method is demonstrated using test systems with DGs and virtual synchronous generators, and the performance is verified against MATLAB/Simulink and DIgSILENT PowerFactory.", "title": "" }, { "docid": "39430478909e5818b242e0b28db419f0", "text": "BACKGROUND\nA modified version of the Berg Balance Scale (mBBS) was developed for individuals with intellectual and visual disabilities (IVD). However, the concurrent and predictive validity has not yet been determined.\n\n\nAIM\nThe purpose of the current study was to evaluate the concurrent and predictive validity of the mBBS for individuals with IVD.\n\n\nMETHOD\nFifty-four individuals with IVD and Gross Motor Functioning Classification System (GMFCS) Levels I and II participated in this study. The mBBS, the Centre of Gravity (COG), the Comfortable Walking Speed (CWS), and the Barthel Index (BI) were assessed during one session in order to determine the concurrent validity. The percentage of explained variance was determined by analyzing the squared multiple correlation between the mBBS and the BI, COG, CWS, GMFCS, and age, gender, level of intellectual disability, presence of epilepsy, level of visual impairment, and presence of hearing impairment. Furthermore, an overview of the degree of dependence between the mBBS, BI, CWS, and COG was obtained by graphic modelling. Predictive validity of mBBS was determined with respect to the number of falling incidents during 26 weeks and evaluated with Zero-inflated regression models using the explanatory variables of mBBS, BI, COG, CWS, and GMFCS.\n\n\nRESULTS\nThe results demonstrated that two significant explanatory variables, the GMFCS Level and the BI, and one non-significant variable, the CWS, explained approximately 60% of the mBBS variance. Graphical modelling revealed that BI was the most important explanatory variable for mBBS moreso than COG and CWS. 
Zero-inflated regression on the frequency of falling incidents demonstrated that the mBBS was not predictive, however, COG and CWS were.\n\n\nCONCLUSIONS\nThe results indicated that the concurrent validity as well as the predictive validity of mBBS were low for persons with IVD.", "title": "" }, { "docid": "c9a00df1eea1def318c92450b8d8f3f3", "text": "Removing pixel-wise heterogeneous motion blur is challenging due to the ill-posed nature of the problem. The predominant solution is to estimate the blur kernel by adding a prior, but extensive literature on the subject indicates the difficulty in identifying a prior which is suitably informative, and general. Rather than imposing a prior based on theory, we propose instead to learn one from the data. Learning a prior over the latent image would require modeling all possible image content. The critical observation underpinning our approach, however, is that learning the motion flow instead allows the model to focus on the cause of the blur, irrespective of the image content. This is a much easier learning task, but it also avoids the iterative process through which latent image priors are typically applied. Our approach directly estimates the motion flow from the blurred image through a fully-convolutional deep neural network (FCN) and recovers the unblurred image from the estimated motion flow. Our FCN is the first universal end-to-end mapping from the blurred image to the dense motion flow. To train the FCN, we simulate motion flows to generate synthetic blurred-image-motion-flow pairs thus avoiding the need for human labeling. Extensive experiments on challenging realistic blurred images demonstrate that the proposed method outperforms the state-of-the-art.", "title": "" }, { "docid": "accad42ca98cd758fd1132e51942cba8", "text": "The accuracy of face alignment affects the performance of a face recognition system. Since face alignment is usually conducted using eye positions, an accurate eye localization algorithm is therefore essential for accurate face recognition. In this paper, we first study the impact of eye locations on face recognition accuracy, and then introduce an automatic technique for eye detection. The performance of our automatic eye detection technique is subsequently validated using FRGC 1.0 database. The validation shows that our eye detector has an overall 94.5% eye detection rate, with the detected eyes very close to the manually provided eye positions. In addition, the face recognition performance based on the automatic eye detection is shown to be comparable to that of using manually given eye positions.", "title": "" }, { "docid": "385ae4c2278c2f4b876bf50941e98998", "text": "Deep neural networks (DNN) have been successfully employed for the problem of monaural sound source separation achieving state-of-the-art results. In this paper, we propose using convolutional recurrent neural network (CRNN) architecture for tackling this problem. We focus on a scenario where low algorithmic delay (< 10 ms) is paramount, and relatively little training data is available. We show that the proposed architecture can achieve slightly better performance as compared to feedforward DNNs and long short-term memory (LSTM) networks. 
In addition to reporting separation performance metrics (i.e., source to distortion ratios), we also report extended short term objective intelligibility (ESTOI) scores which better predict intelligibility performance in presence of non-stationary interferers.", "title": "" }, { "docid": "300bff5036b5b4e83a4bc605020b49e3", "text": "Many theories of human cognition postulate that people are equipped with a repertoire of strategies to solve the tasks they face. This theoretical framework of a cognitive toolbox provides a plausible account of intra- and interindividual differences in human behavior. Unfortunately, it is often unclear how to rigorously test the toolbox framework. How can a toolbox model be quantitatively specified? How can the number of toolbox strategies be limited to prevent uncontrolled strategy sprawl? How can a toolbox model be formally tested against alternative theories? The authors show how these challenges can be met by using Bayesian inference techniques. By means of parameter recovery simulations and the analysis of empirical data across a variety of domains (i.e., judgment and decision making, children's cognitive development, function learning, and perceptual categorization), the authors illustrate how Bayesian inference techniques allow toolbox models to be quantitatively specified, strategy sprawl to be contained, and toolbox models to be rigorously tested against competing theories. The authors demonstrate that their approach applies at the individual level but can also be generalized to the group level with hierarchical Bayesian procedures. The suggested Bayesian inference techniques represent a theoretical and methodological advancement for toolbox theories of cognition and behavior.", "title": "" }, { "docid": "5eea86c2482246473848e20818f79b6c", "text": "The purpose of this study is to evaluate claims that emotional intelligence is significantly related to transformational and other leadership behaviors. Results (based on 62 independent samples) indicated a validity estimate of .59 when ratings of both emotional intelligence and leadership behaviors were provided by the same source (self, subordinates, peers, or superiors). However, when ratings of the constructs were derived from different sources, the validity estimate was .12. Lower validity estimates were found for transactional and laissez-faire leadership behaviors. Separate analyses were performed for each measure of emotional intelligence. Trait measures of emotional intelligence tended to show higher validities than ability-based measures of emotional intelligence. Agreement across ratings sources for the same construct was low for both transformational leadership (.14) and emotional intelligence (.16).", "title": "" }, { "docid": "ba10bfce4c5deabb663b5ca490c320c9", "text": "OBJECTIVE\nAlthough the relationship between religious practice and health is well established, the relationship between spirituality and health is not as well studied. 
The objective of this study was to ascertain whether participation in the mindfulness-based stress reduction (MBSR) program was associated with increases in mindfulness and spirituality, and to examine the associations between mindfulness, spirituality, and medical and psychological symptoms.\n\n\nMETHODS\nForty-four participants in the University of Massachusetts Medical School's MBSR program were assessed preprogram and postprogram on trait (Mindful Attention and Awareness Scale) and state (Toronto Mindfulness Scale) mindfulness, spirituality (Functional Assessment of Chronic Illness Therapy--Spiritual Well-Being Scale), psychological distress, and reported medical symptoms. Participants also kept a log of daily home mindfulness practice. Mean changes in scores were computed, and relationships between changes in variables were examined using mixed-model linear regression.\n\n\nRESULTS\nThere were significant improvements in spirituality, state and trait mindfulness, psychological distress, and reported medical symptoms. Increases in both state and trait mindfulness were associated with increases in spirituality. Increases in trait mindfulness and spirituality were associated with decreases in psychological distress and reported medical symptoms. Changes in both trait and state mindfulness were independently associated with changes in spirituality, but only changes in trait mindfulness and spirituality were associated with reductions in psychological distress and reported medical symptoms. No association was found between outcomes and home mindfulness practice.\n\n\nCONCLUSIONS\nParticipation in the MBSR program appears to be associated with improvements in trait and state mindfulness, psychological distress, and medical symptoms. Improvements in trait mindfulness and spirituality appear, in turn, to be associated with improvements in psychological and medical symptoms.", "title": "" }, { "docid": "f783fa4cfa6eb85fdf4943ae9916d5cf", "text": "There are difficulties in presenting nontextual or dynamic information to blind or visually impaired users through computers. This article examines the potential of haptic and auditory trajectory playback as a method of teaching shapes and gestures to visually impaired people. Two studies are described which test the success of teaching simple shapes. The first study examines haptic trajectory playback alone, played through a force-feedback device, and compares performance of visually impaired users with sighted users. It demonstrates that the task is significantly harder for visually impaired users. The second study builds on these results, combining force-feedback with audio to teach visually impaired users to recreate shapes. The results suggest that users performed significantly better when presented with multimodal haptic and audio playback of the shape, rather than haptic only. Finally, an initial test of these ideas in an application context is described, with sighted participants describing drawings to visually impaired participants through touch and sound. This study demonstrates in what situations trajectory playback can prove a useful role in a collaborative setting.", "title": "" }, { "docid": "6b31a4293bd27e4a6013c63a19c2db70", "text": "This paper discusses the impact of network architecture on control performance in a class of distributed control systems called networked control systems (NCSs) and provides design considerations related to control quality of performance as well as network quality of service. 
The integrated network-control system changes the characteristics of time delays between application devices. This study first identifies several key components of the time delay through an analysis of network protocols and control dynamics. The analysis of network and control parameters is used to determine an acceptable working range of sampling periods in an NCS. A network-control simulator and an experimental networked machine tool have been developed to help validate and demonstrate the performance analysis results and identify the special performance characteristics in an NCS. These performance characteristics are useful guidelines for choosing the network and control parameters when designing an NCS.", "title": "" }, { "docid": "de024671f84d853ac3bb7735a4497f1f", "text": "Neural networks for natural language reasoning have largely focused on extractive, fact-based question-answering (QA) and common-sense inference. However, it is also crucial to understand the extent to which neural networks can perform relational reasoning and combinatorial generalization from natural language—abilities that are often obscured by annotation artifacts and the dominance of language modeling in standard QA benchmarks. In this work, we present a novel benchmark dataset for language understanding that isolates performance on relational reasoning. We also present a neural message-passing baseline and show that this model, which incorporates a relational inductive bias, is superior at combinatorial generalization compared to a traditional recurrent neural network approach.", "title": "" }, { "docid": "28a481f51a7d673d1acb396d8b9c25fb", "text": "This study investigated the combination of mothers' and fathers' parenting styles (affection, behavioral control, and psychological control) that would be most influential in predicting their children's internal and external problem behaviors. A total of 196 children (aged 5-6 years) were followed up six times from kindergarten to the second grade to measure their problem behaviors. Mothers and fathers filled in a questionnaire measuring their parenting styles once every year. The results showed that a high level of psychological control exercised by mothers combined with high affection predicted increases in the levels of both internal and external problem behaviors among children. Behavioral control exercised by mothers decreased children's external problem behavior but only when combined with a low level of psychological control.", "title": "" }, { "docid": "c69bc25454ba459cac60b59a7f293012", "text": "The morphology of the retinal blood vessels can be an important indicator for diseases like diabetes, hypertension and retinopathy of prematurity (ROP). Thus, the measurement of changes in morphology of arterioles and venules can be of diagnostic value. Here we present a method to automatically segment retinal blood vessels based upon multiscale feature extraction. This method overcomes the problem of variations in contrast inherent in these images by using the first and second spatial derivatives of the intensity image that gives information about vessel topology. This approach also enables the detection of blood vessels of different widths, lengths and orientations. The local maxima over scales of the magnitude of the gradient and the maximum principal curvature of the Hessian tensor are used in a multiple pass region growing procedure. The growth progressively segments the blood vessels using feature information together with spatial information. 
The algorithm is tested on red-free and fluorescein retinal images, taken from two local and two public databases. Comparison with first public database yields values of 75.05% true positive rate (TPR) and 4.38% false positive rate (FPR). Second database values are of 72.46% TPR and 3.45% FPR. Our results on both public databases were comparable in performance with other authors. However, we conclude that these values are not sensitive enough so as to evaluate the performance of vessel geometry detection. Therefore we propose a new approach that uses measurements of vessel diameters and branching angles as a validation criterion to compare our segmented images with those hand segmented from public databases. Comparisons made between both hand segmented images from public databases showed a large inter-subject variability on geometric values. A last evaluation was made comparing vessel geometric values obtained from our segmented images between red-free and fluorescein paired images with the latter as the \"ground truth\". Our results demonstrated that borders found by our method are less biased and follow more consistently the border of the vessel and therefore they yield more confident geometric values.", "title": "" }, { "docid": "c340cbb5f6b062caeed570dc2329e482", "text": "We present a mixed-mode analog/digital VLSI device comprising an array of leaky integrate-and-fire (I&F) neurons, adaptive synapses with spike-timing dependent plasticity, and an asynchronous event based communication infrastructure that allows the user to (re)configure networks of spiking neurons with arbitrary topologies. The asynchronous communication protocol used by the silicon neurons to transmit spikes (events) off-chip and the silicon synapses to receive spikes from the outside is based on the \"address-event representation\" (AER). We describe the analog circuits designed to implement the silicon neurons and synapses and present experimental data showing the neuron's response properties and the synapses characteristics, in response to AER input spike trains. Our results indicate that these circuits can be used in massively parallel VLSI networks of I&F neurons to simulate real-time complex spike-based learning algorithms.", "title": "" }, { "docid": "c20b774b1e2422cadaf41e60652f7363", "text": "In some situations, utilities may try to “save” the fuse of a circuit following temporary faults by de-energizing the line with the fast operation of an upstream recloser before the fuse is damaged. This fuse-saving practice is accomplished through proper time coordination between a recloser and a fuse. However, the installation of distributed generation (DG) into distribution networks may affect this coordination due to additional fault current contributions from the distributed resources. This phenomenon of recloser-fuse miscoordination is investigated in this paper with the help of a typical network that employs fuse saving. The limitations of a recloser equipped with time and instantaneous overcurrent elements with respect to fuse savings, in the presence of DG, are discussed. An adaptive relaying strategy is proposed to ensure fuse savings in the new scenario even in the worst fault conditions. 
The simulation results obtained by adaptively changing relay settings in response to changing DG configurations confirm that the settings selected theoretically in accordance with the proposed strategy hold well in operation.", "title": "" }, { "docid": "e44fefc20ed303064dabff3da1004749", "text": "Printflatables is a design and fabrication system for human-scale, functional and dynamic inflatable objects. We use inextensible thermoplastic fabric as the raw material with the key principle of introducing folds and thermal sealing. Upon inflation, the sealed object takes the expected three dimensional shape. The workflow begins with the user specifying an intended 3D model which is decomposed to two dimensional fabrication geometry. This forms the input for a numerically controlled thermal contact iron that seals layers of thermoplastic fabric. In this paper, we discuss the system design in detail, the pneumatic primitives that this technique enables and merits of being able to make large, functional and dynamic pneumatic artifacts. We demonstrate the design output through multiple objects which could motivate fabrication of inflatable media and pressure-based interfaces.", "title": "" }, { "docid": "44ff196ffef950215571a93a1c290169", "text": "Mary-Ellen Lynall,1,2 Danielle S. Bassett,1,2,3,4 Robert Kerwin,5 Peter J. McKenna,6 Manfred Kitzbichler,1,2 Ulrich Muller,1,2 and Ed Bullmore1,2,7 1Behavioural and Clinical Neuroscience Institute and 2Department of Psychiatry, University of Cambridge, Cambridge CB2 0SZ, United Kingdom, 3Cognition, and Psychosis Program, Clinical Brain Disorders Branch, Genes, National Institute of Mental Health, National Institutes of Health, Bethesda, Maryland 20892, 4Biological Soft Systems Sector, Department of Physics, University of Cambridge, Cambridge CB3 0HE, United Kingdom, 5Institute of Psychiatry, King’s College London, London SE5 8AF, United Kingdom, 6Benito Menni Complex Assistencial en Salut Mental, Centro de Investigacion en Red de Salut Mental, Sant Boi de Llobregat, Barcelona 08840, Spain, and 7Clinical Unit Cambridge, GlaxoSmithKline, Addenbrooke’s Hospital, Cambridge CB2 0QQ, United Kingdom", "title": "" }, { "docid": "33069cfad58493e2f2fdd3effcdf0279", "text": "Recent findings [HOT06] have made possible the learning of deep layered hierarchical representations of data mimicking the brains working. It is hoped that this paradigm will unlock some of the power of the brain and lead to advances towards true AI. In this thesis I implement and evaluate state-of-the-art deep learning models and using these as building blocks I investigate the hypothesis that predicting the time-to-time sensory input is a good learning objective. I introduce the Predictive Encoder (PE) and show that a simple non-regularized learning rule, minimizing prediction error on natural video patches leads to receptive fields similar to those found in Macaque monkey visual area V1. I scale this model to video of natural scenes by introducing the Convolutional Predictive Encoder (CPE) and show similar results. Both models can be used in deep architectures as a deep learning module.", "title": "" }, { "docid": "096b2ffac795053e046c25f1e8697fcf", "text": "Background\nThe benefit of computer-assisted planning in orthognathic surgery (OGS) has been extensively documented over the last decade. 
This study aimed to evaluate the accuracy of three-dimensional (3D) virtual planning in surgery-first OGS.\n\n\nMethods\nFifteen patients with skeletal class III malocclusion who underwent bimaxillary OGS with surgery-first approach were included. A composite skull model was reconstructed using data from cone-beam computed tomography and stereolithography from a scanned dental cast. Surgical procedures were simulated using Simplant O&O software, and the virtual plan was transferred to the operation room using 3D-printed splints. Differences of the 3D measurements between the virtual plan and postoperative results were evaluated, and the accuracy was reported using root mean square deviation (RMSD) and the Bland-Altman method.\n\n\nResults\nThe virtual planning was successfully transferred to surgery. The overall mean linear difference was 0.88 mm (0.79 mm for the maxilla and 1 mm for the mandible), and the overall mean angular difference was 1.16°. The RMSD ranged from 0.86 to 1.46 mm and 1.27° to 1.45°, within the acceptable clinical criteria.\n\n\nConclusion\nIn this study, virtual surgical planning and 3D-printed surgical splints facilitated the diagnosis and treatment planning, and offered an accurate outcome in surgery-first OGS.", "title": "" }, { "docid": "05457fe0f541e313c01b5d4b4015fa7b", "text": "This paper presents the case for and the evidence in favour of passive investment strategies and examines the major criticisms of the technique. I conclude that the evidence strongly supports passive investment management in all markets—smallcapitalisation stocks as well as large-capitalisation equities, US markets as well as international markets, and bonds as well as stocks. Recent attacks on the efficient market hypothesis do not weaken the case for indexing.", "title": "" } ]
scidocsrr
28b53ffa071baa14b70ae0ab72762700
Schema Theory for Genetic Programming with One-Point Crossover and Point Mutation
[ { "docid": "8e1a65dd8bf9d8a4b67c46a0067ca42d", "text": "Reading Genetic Programming IE Automatic Discovery ofReusable Programs (GPII) in its entirety is not a task for the weak-willed because the book without appendices is about 650 pages. An entire previous book by the same author [1] is devoted to describing Genetic Programming (GP), while this book is a sequel extolling an extension called Automatically Defined Functions (ADFs). The author, John R. Koza, argues that ADFs can be used in conjunction with GP to improve its efficacy on large problems. \"An automatically defined function (ADF) is a function (i.e., subroutine, procedure, module) that is dynamically evolved during a run of genetic programming and which may be called by a calling program (e.g., a main program) that is simultaneously being evolved\" (p. 1). Dr. Koza recommends adding the ADF technique to the \"GP toolkit.\" The book presents evidence that it is possible to interpret GP with ADFs as performing either a top-down process of problem decomposition or a bottom-up process of representational change to exploit identified regularities. This is stated as Main Point 1. Main Point 2 states that ADFs work by exploiting inherent regularities, symmetries, patterns, modularities, and homogeneities within a problem, though perhaps in ways that are very different from the style of programmers. Main Points 3 to 7 are appropriately qualified statements to the effect that, with a variety of problems, ADFs pay off be-", "title": "" } ]
[ { "docid": "be369006f6853e01ef0e107db77ac53f", "text": "Objective: to elaborate the conceptual and theoretical-empirical structure, based on the application of Roy’s Adaptation model, to guide the development of a controlled clinical trial aimed at assessing the effectiveness of a nursing intervention program to promote the adaptation of family caregivers with caregiver role strain. Method: theoretical study. The conceptual structure was developed in three phases: development of a comprehensive understanding of the conceptual model, literature review and construction of the conceptual and theoretical-empirical structure itself. Results: the application process demonstrated its consistency in the design of an intervention program for family caregivers of chronic patients, to be tested in a controlled clinical trial. The indicators of adaptation were the reduced score on the caregiver tension scale and the increased perception of wellbeing and quality of life. Conclusion: Roy’s model serves as an important guide for nursing research intended to test nursing interventions that favor the wellbeing of family caregivers. DESCRIPTORS: Caregivers. Nursing. Nursing theory. Chronic disease. MODELO DE ADAPTAÇÃO EM UM ENSAIO CLÍNICO CONTROLADO COM CUIDADORES FAMILIARES DE PESSOAS COM DOENÇAS CRÔNICAS", "title": "" }, { "docid": "a23aa9d2a0a100e805e3c25399f4f361", "text": "Cases of poisoning by oleander (Nerium oleander) were observed in several species, except in goats. This study aimed to evaluate the pathological effects of oleander in goats. The experimental design used three goats per group: the control group, which did not receive oleander and the experimental group, which received leaves of oleander (50 mg/kg/day) for six consecutive days. On the seventh day, goats received 110 mg/kg of oleander leaves four times at one-hourly interval. A last dose of 330 mg/kg of oleander leaves was given subsequently. After the last dose was administered, clinical signs such as apathy, colic, vocalizations, hyperpnea, polyuria, and moderate rumen distention were observed. Electrocardiogram revealed second-degree atrioventricular block. Death occurred on an average at 92 min after the last dosing. Microscopic evaluation revealed renal necrosis at convoluted and collector tubules and slight myocardial degeneration was observed by unequal staining of cardiomyocytes. Data suggest that goats appear to respond to oleander poisoning in a manner similar to other species.", "title": "" }, { "docid": "6f7332494ffc384eaae308b2116cab6a", "text": "Investigations of the relationship between pain conditions and psychopathology have largely focused on depression and have been limited by the use of non-representative samples (e.g. clinical samples). The present study utilized data from the Midlife Development in the United States Survey (MIDUS) to investigate associations between three pain conditions and three common psychiatric disorders in a large sample (N = 3,032) representative of adults aged 25-74 in the United States population. MIDUS participants provided reports regarding medical conditions experienced over the past year including arthritis, migraine, and back pain. Participants also completed several diagnostic-specific measures from the Composite International Diagnostic Interview-Short Form [Int. J. Methods Psychiatr. Res. 7 (1998) 171], which was based on the revised third edition of the Diagnostic and Statistical Manual of Mental Disorders [American Psychiatric Association 1987]. 
The diagnoses included were depression, panic attacks, and generalized anxiety disorder. Logistic regression analyses revealed significant positive associations between each pain condition and the psychiatric disorders (Odds Ratios ranged from 1.48 to 3.86). The majority of these associations remained statistically significant after adjusting for demographic variables, the other pain conditions, and other medical conditions. Given the emphasis on depression in the pain literature, it was noteworthy that the associations between the pain conditions and the anxiety disorders were generally larger than those between the pain conditions and depression. These findings add to a growing body of evidence indicating that anxiety disorders warrant further attention in relation to pain. The clinical and research implications of these findings are discussed.", "title": "" }, { "docid": "2e8a644c6412f9b490bad0e13e11794d", "text": "The traditional wisdom for building disk-based relational database management systems (DBMS) is to organize data in heavily-encoded blocks stored on disk, with a main memory block cache. In order to improve performance given high disk latency, these systems use a multi-threaded architecture with dynamic record-level locking that allows multiple transactions to access the database at the same time. Previous research has shown that this results in substantial overhead for on-line transaction processing (OLTP) applications [15]. The next generation DBMSs seek to overcome these limitations with architecture based on main memory resident data. To overcome the restriction that all data fit in main memory, we propose a new technique, called anti-caching, where cold data is moved to disk in a transactionally-safe manner as the database grows in size. Because data initially resides in memory, an anti-caching architecture reverses the traditional storage hierarchy of disk-based systems. Main memory is now the primary storage device. We implemented a prototype of our anti-caching proposal in a high-performance, main memory OLTP DBMS and performed a series of experiments across a range of database sizes, workload skews, and read/write mixes. We compared its performance with an open-source, disk-based DBMS optionally fronted by a distributed main memory cache. Our results show that for higher skewed workloads the anti-caching architecture has a performance advantage over either of the other architectures tested of up to 9⇥ for a data size 8⇥ larger than memory.", "title": "" }, { "docid": "18e019622188ab6ddb2beca69d51e1c9", "text": "The rhesus macaque (Macaca mulatta) is the most utilized primate model in the biomedical and psychological sciences. Expressive behavior is of interest to scientists studying these animals, both as a direct variable (modeling neuropsychiatric disease, where expressivity is a primary deficit), as an indirect measure of health and welfare, and also in order to understand the evolution of communication. Here, intramuscular electrical stimulation of facial muscles was conducted in the rhesus macaque in order to document the relative contribution of each muscle to the range of facial movements and to compare the expressive function of homologous muscles in humans, chimpanzees and macaques. Despite published accounts that monkeys possess less differentiated and less complex facial musculature, the majority of muscles previously identified in humans and chimpanzees were stimulated successfully in the rhesus macaque and caused similar appearance changes. 
These observations suggest that the facial muscular apparatus of the monkey has extensive homology to the human face. The muscles of the human face, therefore, do not represent a significant evolutionary departure from those of a monkey species. Thus, facial expressions can be compared between humans and rhesus macaques at the level of the facial musculature, facilitating the systematic investigation of comparative facial communication.", "title": "" }, { "docid": "adf530152b474c2b6147da07acf3d70d", "text": "One of the basic services in a distributed network is clock synchronization as it enables a palette of services, such as synchronized measurements, coordinated actions, or time-based access to a shared communication medium. The IEEE 1588 standard defines the Precision Time Protocol (PTP) and provides a framework to synchronize multiple slave clocks to a master by means of synchronization event messages. While PTP is capable for synchronization accuracies below 1 ns, practical synchronization approaches are hitting a new barrier due to asymmetric line delays. Although compensation fields for the asymmetry are present in PTP version 2008, no specific measures to estimate the asymmetry are defined in the standard. In this paper we present a solution to estimate the line asymmetry in 100Base-TX networks based on line swapping. This approach seems appealing for existing installations as most Ethernet PHYs have the line swapping feature built in, and it only delays the network startup, but does not alter the operation of the network. We show by an FPGA-based prototype system that our approach is able to improve the synchronization offset from more than 10 ns down to below 200 ps.", "title": "" }, { "docid": "bd47faa5acc45c9dca97ad1b5de09de6", "text": "We present a differentiable framework capable of learning a wide variety of compositions of simple policies that we call skills. By recursively composing skills with themselves, we can create hierarchies that display complex behavior. Skill networks are trained to generate skill-state embeddings that are provided as inputs to a trainable composition function, which in turn outputs a policy for the overall task. Our experiments on an environment consisting of multiple collect and evade tasks show that this architecture is able to quickly build complex skills from simpler ones. Furthermore, the learned composition function displays some transfer to unseen combinations of skills, allowing for zero-shot generalizations.", "title": "" }, { "docid": "80105a011097a3bd37bf58d030131e13", "text": "Deep CNNs have achieved great success in text detection. Most of existing methods attempt to improve accuracy with sophisticated network design, while paying less attention on speed. In this paper, we propose a general framework for text detection called Guided CNN to achieve the two goals simultaneously. The proposed model consists of one guidance subnetwork, where a guidance mask is learned from the input image itself, and one primary text detector, where every convolution and non-linear operation are conducted only in the guidance mask. The guidance subnetwork filters out non-text regions coarsely, greatly reducing the computation complexity. At the same time, the primary text detector focuses on distinguishing between text and hard non-text regions and regressing text bounding boxes, achieving a better detection accuracy. A novel training strategy, called background-aware block-wise random synthesis, is proposed to further boost up the performance. 
We demonstrate that the proposed Guided CNN is not only effective but also efficient with two state-of-the-art methods, CTPN [52] and EAST [64], as backbones. On the challenging benchmark ICDAR 2013, it speeds up CTPN by 2.9 times on average, while improving the F-measure by 1.5%. On ICDAR 2015, it speeds up EAST by 2.0 times while improving the F-measure by 1.0%. Figure 1: Illustration of guiding the primary text detector. Convolutions and non-linear operations are conducted only in the guidance mask indicated by the red and blue rectangles. The guidance mask (the blue) is expanded by background-aware block-wise random synthesis (the red) during training. When testing, the guidance mask is not expanded. Figure 2: Text appears very sparsely in scene images. The left shows one example image. The right shows the text area ratio composition of ICDAR 2013 test set. Images with (0%,10%], (10%,20%], (20%,30%], and (30%,40%] text region account for 57%, 21%, 11%, and 6% respectively. Only 5% images have more than 40% text region.", "title": "" }, { "docid": "a79424d0ec38c2355b288364f45f90de", "text": "This paper mainly deals with various classification algorithms namely, Bayes. NaiveBayes, Bayes. BayesNet, Bayes. NaiveBayesUpdatable, J48, Randomforest, and Multi Layer Perceptron. It analyzes the hepatitis patients from the UC Irvine machine learning repository. The results of the classification model are accuracy and time. Finally, it concludes that the Naive Bayes performance is better than other classification techniques for hepatitis patients.", "title": "" }, { "docid": "50c0ebb4a984ea786eb86af9849436f3", "text": "We systematically reviewed school-based skills building behavioural interventions for the prevention of sexually transmitted infections. References were sought from 15 electronic resources, bibliographies of systematic reviews/included studies and experts. Two authors independently extracted data and quality-assessed studies. Fifteen randomized controlled trials (RCTs), conducted in the United States, Africa or Europe, met the inclusion criteria. They were heterogeneous in terms of intervention length, content, intensity and providers. Data from 12 RCTs passed quality assessment criteria and provided evidence of positive changes in non-behavioural outcomes (e.g. knowledge and self-efficacy). Intervention effects on behavioural outcomes, such as condom use, were generally limited and did not demonstrate a negative impact (e.g. earlier sexual initiation). Beneficial effect on at least one, but never all behavioural outcomes assessed was reported by about half the studies, but this was sometimes limited to a participant subgroup. Sexual health education for young people is important as it increases knowledge upon which to make decisions about sexual behaviour. However, a number of factors may limit intervention impact on behavioural outcomes. 
Further research could draw on one of the more effective studies reviewed and could explore the effectiveness of 'booster' sessions as young people move from adolescence to young adulthood.", "title": "" }, { "docid": "54b6c687262c5d051529e5ed2d2bf8a1", "text": "INTRODUCTION\nThe chick embryo is an emerging in vivo model in several areas of pre-clinical research including radiopharmaceutical sciences. Herein, it was evaluated as a potential test system for assessing the biodistribution and in vivo stability of radiopharmaceuticals. For this purpose, a number of radiopharmaceuticals labeled with (18)F, (125)I, (99m)Tc, and (177)Lu were investigated in the chick embryo and compared with the data obtained in mice.\n\n\nMETHODS\nChick embryos were cultivated ex ovo for 17-19 days before application of the radiopharmaceutical directly into the peritoneum or intravenously using a vein of the chorioallantoic membrane (CAM). At a defined time point after application of radioactivity, the embryos were euthanized by shock-freezing using liquid nitrogen. Afterwards they were separated from residual egg components for post mortem imaging purposes using positron emission tomography (PET) or single photon emission computed tomography (SPECT).\n\n\nRESULTS\nSPECT images revealed uptake of [(99m)Tc]pertechnetate and [(125)I]iodide in the thyroid of chick embryos and mice, whereas [(177)Lu]lutetium, [(18)F]fluoride and [(99m)Tc]-methylene diphosphonate ([(99m)Tc]-MDP) were accumulated in the bones. [(99m)Tc]-dimercaptosuccinic acid ((99m)Tc-DMSA) and the somatostatin analog [(177)Lu]-DOTATOC, as well as the folic acid derivative [(177)Lu]-DOTA-folate showed accumulation in the renal tissue whereas [(99m)Tc]-mebrofenin accumulated in the gall bladder and intestine of both species. In vivo dehalogenation of [(18)F]fallypride and of the folic acid derivative [(125)I]iodo-tyrosine-folate was observed in both species. In contrast, the 3'-aza-2'-[(18)F]fluorofolic acid ([(18)F]-AzaFol) was stable in the chick embryo as well as in the mouse.\n\n\nCONCLUSIONS\nOur results revealed the same tissue distribution profile and in vivo stability of radiopharmaceuticals in the chick embryo and the mouse. This observation is promising with regard to a potential use of the chick embryo as an inexpensive and simple test model for preclinical screening of novel radiopharmaceuticals.", "title": "" }, { "docid": "4592c8f5758ccf20430dbec02644c931", "text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.", "title": "" }, { "docid": "bcf525a37e87ca084e5a39c63cfdde77", "text": "BACKGROUND\nObesity in people with chronic kidney disease (CKD) is associated with longer survival. 
The purpose of this study was to determine if a relationship exists between body condition score (BCS) and survival in dogs with CKD.\n\n\nHYPOTHESIS/OBJECTIVES\nHigher BCS is a predictor of prolonged survival in dogs with CKD.\n\n\nANIMALS\nOne hundred dogs were diagnosed with CKD (International Renal Interest Society stages II, III or IV) between 2008 and 2009.\n\n\nMETHODS\nRetrospective case review. Data regarding initial body weight and BCS, clinicopathologic values and treatments were collected from medical records and compared with survival times.\n\n\nRESULTS\nFor dogs with BCS recorded (n = 72), 13 were underweight (BCS = 1-3; 18%), 49 were moderate (BCS = 4-6; 68%), and 10 were overweight (BCS = 7-9; 14%). For dogs with at least 2 body weights recorded (n = 77), 21 gained weight, 47 lost weight, and 9 had no change in weight. Dogs classified as underweight at the time of diagnosis (median survival = 25 days) had a significantly shorter survival time compared to that in both moderate (median survival = 190 days; P < .001) and overweight dogs (median survival = 365 days; P < .001). There was no significant difference in survival between moderate and overweight dogs (P = .95).\n\n\nCONCLUSIONS AND CLINICAL IMPORTANCE\nHigher BCS at the time of diagnosis was significantly associated with improved survival. Further research on the effects of body composition could enhance the management of dogs with CKD.", "title": "" }, { "docid": "cd5bee864efd59b3122752f06f34f3b6", "text": "Prior background knowledge is essential for human reading and understanding. In this work, we investigate how to leverage external knowledge to improve question answering. We primarily focus on multiple-choice question answering tasks that require external knowledge to answer questions. We investigate the effects of utilizing external in-domain multiple-choice question answering datasets and enriching the reference corpus by external out-domain corpora (i.e., Wikipedia articles). Experimental results demonstrate the effectiveness of external knowledge on two challenging multiple-choice question answering tasks: ARC and OpenBookQA.", "title": "" }, { "docid": "b0593843ce815016a003c60f8f154006", "text": "This paper introduces a method for acquiring forensic-grade evidence from Android smartphones using open source tools. We investigate in particular cases where the suspect has made use of the smartphone's Wi-Fi or Bluetooth interfaces. We discuss the forensic analysis of four case studies, which revealed traces that were left in the inner structure of three mobile Android devices and also indicated security vulnerabilities. Subsequently, we propose a detailed plan for forensic examiners to follow when dealing with investigations of potential crimes committed using the wireless facilities of a suspect Android smartphone. This method can be followed to perform physical acquisition of data without using commercial tools and then to examine them safely in order to discover any activity associated with wireless communications. 
We evaluate our method using the Association of Chief Police Officers' (ACPO) guidelines of good practice for computer-based, electronic evidence and demonstrate that it is made up of an acceptable host of procedures for mobile forensic analysis, focused specifically on device Bluetooth and Wi-Fi facilities.", "title": "" }, { "docid": "9402365e2fdbdbdea13c18da5e4a05de", "text": "Battery models capture the characteristics of real-life batteries, and can be used to predict their behavior under various operating conditions. In this paper, a dynamic model of lithium-ion battery has been developed with MATLAB/Simulink® in order to investigate the output characteristics of lithium-ion batteries. Dynamic simulations are carried out, including the observation of the changes in battery terminal output voltage under different charging/discharging, temperature and cycling conditions, and the simulation results are compared with the results obtained from several recent studies. The simulation studies are presented for manifesting that the model is effective and operational.", "title": "" }, { "docid": "0ee23e7086c287bd52fbb0bb6be2039d", "text": "Mathematics is a ubiquitous foundation of science, technology, and engineering. Specific areas of mathematics, such as numeric and symbolic computation or logics, enjoy considerable software support. Working mathematicians have recently started to adopt Web 2.0 environments, such as blogs and wikis, but these systems lack machine support for knowledge organization and reuse, and they are disconnected from tools such as computer algebra systems or interactive proof assistants. We argue that such scenarios will benefit from Semantic Web technology. Conversely, mathematics is still underrepresented on the Web of [Linked] Data. There are mathematics-related Linked Data, for example statistical government data or scientific publication databases, but their mathematical semantics has not yet been modeled. We argue that the services for the Web of Data will benefit from a deeper representation of mathematical knowledge. Mathematical knowledge comprises structures given in a logical language – formulae, statements (e.g. axioms), and theories –, a mixture of rigorous natural language and symbolic notation in documents, application-specific metadata, and discussions about conceptualizations, formalizations, proofs, and (counter-)examples. Our review of vocabularies for representing these structures covers ontologies for mathematical problems, proofs, interlinked scientific publications, scientific discourse, as well as mathematical metadata vocabularies and domain knowledge from pure and applied mathematics. Many fields of mathematics have not yet been implemented as proper Semantic Web ontologies; however, we show that MathML and OpenMath, the standard XML-based exchange languages for mathematical knowledge, can be fully integrated with RDF representations in order to contribute existing mathematical knowledge to the Web of Data. We conclude with a roadmap for getting the mathematical Web of Data started: what datasets to publish, how to interlink them, and how to take advantage of these new connections.", "title": "" }, { "docid": "4b7714c60749a2f945f21ca3d6d367fe", "text": "Abstractive summarization aims to generate a shorter version of the document covering all the salient points in a compact and coherent fashion. On the other hand, query-based summarization highlights those points that are relevant in the context of a given query. 
The encode-attend-decode paradigm has achieved notable success in machine translation, extractive summarization, dialog systems, etc. But it suffers from the drawback of generation of repeated phrases. In this work we propose a model for the query-based summarization task based on the encode-attend-decode paradigm with two key additions (i) a query attention model (in addition to document attention model) which learns to focus on different portions of the query at different time steps (instead of using a static representation for the query) and (ii) a new diversity based attention model which aims to alleviate the problem of repeating phrases in the summary. In order to enable the testing of this model we introduce a new query-based summarization dataset building on debatepedia. Our experiments show that with these two additions the proposed model clearly outperforms vanilla encode-attend-decode models with a gain of 28% (absolute) in ROUGE-L scores.", "title": "" }, { "docid": "31cf8888a8f7fe1a6d3fc064eb67947c", "text": "Background: Gaussian processes (GP) provide an elegant and effective approach to learning in kernel machines. This approach leads to a highly interpretable model and allows using the bayesian framework for model adaptation and incorporating the prior knowledge about the problem. GP framework is successfully applied to regression, classification and dimensionality reduction problems. Unfortunately, the standard methods for both GP-regression and GP-classification scale as O(n^3), where n is the size of the dataset, which makes them inapplicable to big data problems. A variety of methods have been proposed to overcome this limitation both for regression and classification problems. The most successful recent methods are based on the concept of inducing inputs. These methods reduce the computational complexity to O(nm^2), where m is the number of inducing inputs with m typically much less than n. In this work we focus on classification. The current state-of-the-art method for this problem is based on stochastic optimization of an evidence lower bound, that depends on O(m^2) parameters. For complex problems, the required number of inducing points m is fairly big, making the optimization in this method challenging. 
Methods: We analyze the structure of variational lower bound that appears in inducing input GP classification. First we notice that using quadratic approximation of several terms in this bound, it is possible to obtain analytical expressions for optimal values of most of the optimization parameters, thus sufficiently reducing the dimension of optimization space. Then we provide two methods for constructing necessary quadratic approximations. One is based on Jaakkola-Jordan bound for logistic function and the other one is derived using Taylor expansion. Results: We propose two new variational lower bounds for inducing input GP classification that depend on a number of parameters. Then we propose several methods for optimization of these bounds and compare the resulting algorithms with the state-of-the-art approach based on stochastic optimization. Experiments on a bunch of classification datasets show that the new methods perform as well or better than the existing one. However, new methods don't require any tunable parameters and can work in settings within a big range of n and m values thus significantly simplifying training of GP classification models.", "title": "" } ]
scidocsrr
c6830a797a70bfc247f11f5836b017ee
The effects of handwriting experience on functional brain development in pre-literate children
[ { "docid": "a39c0db041f31370135462af467426ed", "text": "Part of the ventral temporal lobe is thought to be critical for face perception, but what determines this specialization remains unknown. We present evidence that expertise recruits the fusiform gyrus 'face area'. Functional magnetic resonance imaging (fMRI) was used to measure changes associated with increasing expertise in brain areas selected for their face preference. Acquisition of expertise with novel objects (greebles) led to increased activation in the right hemisphere face areas for matching of upright greebles as compared to matching inverted greebles. The same areas were also more activated in experts than in novices during passive viewing of greebles. Expertise seems to be one factor that leads to specialization in the face area.", "title": "" } ]
[ { "docid": "a671eda59afa0c1210e042209e3cb084", "text": "BACKGROUND\nOutpatient Therapeutic feeding Program (OTP) brings the services for management of Severe Acute Malnutrition (SAM) closer to the community by making services available at decentralized treatment points within the primary health care settings, through the use of ready-to-use therapeutic foods, community outreach and mobilization. Little is known about the program outcomes. This study revealed the levels of program outcome indictors and determinant factors to recovery rate.\n\n\nMETHODS\nA retrospective cohort study was conducted on 628 children who had been managed for SAM under OTP from April/2008 to January/2012. The children were selected using systematic random sampling from 12 health posts and 4 health centers. The study relied on information of demographic characteristics, anthropometries, Plumpy'Nut, medical problems and routine medications intakes. The results were estimated using Kaplan-Meier survival curves, log-rank test and Cox-regression.\n\n\nRESULTS\nThe recovery, defaulter, mortality and weight gain rates were 61.78%, 13.85%, 3.02% and 5.23 gm/kg/day, respectively. Routine medications were administered partially and children with medical problems were managed inappropriately under the program. As a child consumed one more sachet of Plumpy'Nut, the recovery rate from SAM increased by 4% (HR = 1.04, 95%-CI = 1.03, 1.05, P<0.001). The adjusted hazard ratios to recovery of children with diarrhea, appetite loss with Plumpy'Nut and failure to gain weight were 2.20 (HR = 2.20, 95%-CI = 1.31, 3.41, P = 0.001), 4.49 (HR = 1.74, 95%-CI = 1.07, 2.83, P = 0.046) and 3.88 (HR = 1.95, 95%-CI = 1.17, 3.23, P<0.001), respectively. Children who took amoxicillin and de-worming had 95% (HR = 1.95, 95%-CI = 1.17, 3.23) and 74% (HR = 1.74, 95%-CI = 1.07, 2.83) more probability to recover from SAM as compared to those who didn't take them.\n\n\nCONCLUSIONS\nThe OTP was partially successful. Management of children with comorbidities under the program and partial administration of routine drugs were major threats for the program effectiveness. The stakeholders should focus on creating the capacity of the OTP providers on proper management of SAM to achieve fully effective program.", "title": "" }, { "docid": "c514cb2acdf18fc4d64dc0df52d09d51", "text": "Android introduced the dynamic code loading (DCL) mechanism to allow for code reuse, to achieve extensibility, to enable updating functionalities, or to boost application start-up performance. In spite of its wide adoption by developers, previous research has shown that the secure implementation of DCL-based functionality is challenging, often leading to remote code injection vulnerabilities. Unfortunately, previous attempts to address this problem by both the academic and Android developers communities are affected by either practicality or completeness issues, and, in some cases, are affected by severe vulnerabilities.\n In this paper, we propose, design, implement, and test Grab 'n Run, a novel code verification protocol and a series of supporting libraries, APIs, and tools, that address the problem by abstracting away from the developer many of the challenging implementation details. Grab 'n Run is designed to be practical: Among its tools, it provides a drop-in library, which requires no modifications to the Android framework or the underlying Dalvik/ART runtime, is very similar to the native API, and most code can be automatically rewritten to use it. 
Grab 'n Run also contains an application-rewriting tool, which allows to easily port legacy or third-party applications to use the secure APIs developed in this work.\n We evaluate the Grab 'n Run library with a user study, obtaining very encouraging results in vulnerability reduction, ease of use, and speed of development. We also show that the performance overhead introduced by our library is negligible. For the benefit of the security of the Android ecosystem, we released Grab 'n Run as open source.", "title": "" }, { "docid": "e0b8b4c2431b92ff878df197addb4f98", "text": "Malware classification is a critical part of the cybersecurity. Traditional methodologies for the malware classification typically use static analysis and dynamic analysis to identify malware. In this paper, a malware classification methodology based on its binary image and extracting local binary pattern (LBP) features is proposed. First, malware images are reorganized into 3 by 3 grids which are mainly used to extract LBP feature. Second, the LBP is implemented on the malware images to extract features in that it is useful in pattern or texture classification. Finally, Tensorflow, a library for machine learning, is applied to classify malware images with the LBP feature. Performance comparison results among different classifiers with different image descriptors such as GIST, a spatial envelope, and the LBP demonstrate that our proposed approach outperforms others.", "title": "" }, { "docid": "f8fb8f9cd9efd6aefd60950b257c4abd", "text": "The development of a 12-way X-band all-waveguide radial divider/combiner is presented. The radial combiner is comprised of three parts: a center feed, a radial line, and peripheral waveguide ports. The center feed is comprised of two sections: a rectangular waveguide section and a mode transducer section. The latter is a circular waveguide fed by four-way in-phase combiner to convert the rectangular waveguide TE10 mode to a TE10 circular waveguide mode for in-phase feeding of all peripheral ports. For design evaluation, the 12-way combiner was built and tested but also two back-to-back test fixtures, one for the mode transducer and the second for the radial combiner were fabricated and tested as well. The measured insertion loss and phase imbalance of the combiner over a 10% operating bandwidth are less than 0.35 dB and ±5°, respectively. The structure is suitable for high power and should handle few kilowatts.", "title": "" }, { "docid": "de333f099bad8a29046453e099f91b84", "text": "Financial time-series forecasting has long been a challenging problem because of the inherently noisy and stochastic nature of the market. In the high-frequency trading, forecasting for trading purposes is even a more challenging task, since an automated inference system is required to be both accurate and fast. In this paper, we propose a neural network layer architecture that incorporates the idea of bilinear projection as well as an attention mechanism that enables the layer to detect and focus on crucial temporal information. The resulting network is highly interpretable, given its ability to highlight the importance and contribution of each temporal instance, thus allowing further analysis on the time instances of interest. 
Our experiments in a large-scale limit order book data set show that a two-hidden-layer network utilizing our proposed layer outperforms by a large margin all existing state-of-the-art results coming from much deeper architectures while requiring far fewer computations.", "title": "" }, { "docid": "6a4595e71ad1c4e6196f17af20c8c1ef", "text": "We propose a novel regularizer to improve the training of Generative Adversarial Networks (GANs). The motivation is that when the discriminatorD spreads out its model capacity in the right way, the learning signals given to the generator G are more informative and diverse. These in turn help G to explore better and discover the real data manifold while avoiding large unstable jumps due to the erroneous extrapolation made by D . Our regularizer guides the rectifier discriminator D to better allocate its model capacity, by encouraging the binary activation patterns on selected internal layers of D to have a high joint entropy. Experimental results on both synthetic data and real datasets demonstrate improvements in stability and convergence speed of the GAN training, as well as higher sample quality. The approach also leads to higher classification accuracies in semi-supervised learning.", "title": "" }, { "docid": "eabb9e04ff7609bf6754431b9ce6718f", "text": "Electric phenomena play an important role in biophysics. Bioelectric processes control the ion transport processes across membranes, and are the basis for information transfer along neurons. These electrical effects are generally triggered by chemical processes. However, it is also possible to control such cell functions and transport processes by applying pulsed electric fields. This area of bioengineering, bioelectrics, offers new applications for pulsed power technology. One such application is prevention of biofouling, an effect that is based on reversible electroporation of cell membranes. Pulsed electric fields of several kV/cm amplitude and submicrosecond duration have been found effective in preventing the growth of aquatic nuisance species on surfaces. Reversible electroporation is also used for medical applications, e.g. for delivery of chemotherapeutic drugs into tumor cells, for gene therapy, and for transdermal drug delivery. Higher electric fields cause irreversible membrane damage. Pulses in the microsecond range with electric field intensities in the tens of kV/cm are being used for bacterial decontamination of water and liquid food. A new type of field-cell interaction, \"Intracellular Electromanipulation\", by means of nanosecond pulses at electric fields exceeding 50 kV/cm has been recently added to known bioelectric effects. It is based on capacitive coupling to cell substructures, has therefore the potential to affect transport processes across subcellular membranes, and may be used for gene transfer into cell nuclei. There are also indications that it triggers intracellular processes, such as programmed cell death, an effect, which can be used for cancer treatment. In order to generate the required electric fields for these processes, high voltage, high current sources are required. The pulse duration needs to be short to prevent thermal effects. Pulse power technology is the enabling technology for bioelectrics. 
The field of bioelectrics, therefore opens up a new research area for pulse power engineers, with fascinating applications in biology and medicine.", "title": "" }, { "docid": "9c11facbe1749ca3b8733a45741ae4c3", "text": "The robotics literature of the last two decades contains many important advances in the control of flexible joint robots. This is a survey of these advances and an assessment for future developments, concentrated mostly on the control issues of flexible joint robots.", "title": "" }, { "docid": "643599f9b0dcfd270f9f3c55567ed985", "text": "OBJECTIVES\nTo describe a new first-trimester sonographic landmark, the retronasal triangle, which may be useful in the early screening for cleft palate.\n\n\nMETHODS\nThe retronasal triangle, i.e. the three echogenic lines formed by the two frontal processes of the maxilla and the palate visualized in the coronal view of the fetal face posterior to the nose, was evaluated prospectively in 100 consecutive normal fetuses at the time of routine first-trimester sonographic screening at 11 + 0 to 13 + 6 weeks' gestation. In a separate study of five fetuses confirmed postnatally as having a cleft palate, ultrasound images, including multiplanar three-dimensional views, were analyzed retrospectively to review the retronasal triangle.\n\n\nRESULTS\nNone of the fetuses evaluated prospectively was affected by cleft lip and palate. During their first-trimester scan, the retronasal triangle could not be identified in only two fetuses. Reasons for suboptimal visualization of this area included early gestational age at scanning (11 weeks) and persistent posterior position of the fetal face. Of the five cases with postnatal diagnosis of cleft palate, an abnormal configuration of the retronasal triangle was documented in all cases on analysis of digitally stored three-dimensional volumes.\n\n\nCONCLUSIONS\nThis study demonstrates the feasibility of incorporating evaluation of the retronasal triangle into the routine evaluation of the fetal anatomy at 11 + 0 to 13 + 6 weeks' gestation. Because fetuses with cleft palate have an abnormal configuration of the retronasal triangle, focused examination of the midface, looking for this area at the time of the nuchal translucency scan, may facilitate the early detection of cleft palate in the first trimester.", "title": "" }, { "docid": "fcca1f2fea2534818c2bbf1ba8c9bf97", "text": "The world is increasingly going green in its energy use. Wind power is a green renewable source of energy that can compete effectively with fossil fuel as a generator of power in the electricity market. For this effective competion, the production cost must be comparable to that of fossil fuels or other sources of energy. The initial capital investment in wind power goes to machine and the supporting infrastructure. Any factors that lead to decrease in cost of energy such as turbine design, construction and operation are key to making wind power competitive as an alternative source of energy. A mathematical model of wind turbine is essential in the understanding of the behaviour of the wind turbine over its region of operation because it allows for the development of comprehensive control algorithms that aid in optimal operation of a wind turbine. Modelling enables control of wind turbine’s performance. This paper attempts to address part or whole of these general objectives of wind turbine modelling through examination of power coefficient parameter. 
Model results will be beneficial to designers and researchers of new generation turbines who can utilize the information to optimize the design of turbines and minimize generation costs leading 4528 A. W. Manyonge, R. M. Ochieng, F. N. Onyango and J. M. Shichikha to decrease in cost of wind energy and hence, making it an economically viable alternative source of energy. Mathematics Subject Classification: 65C20", "title": "" }, { "docid": "f16f8803a2aa1e08d449de477d3568d5", "text": "Polyphenols represent a group of chemical substances common in plants, structurally characterized by the presence of one or more phenol units. Polyphenols are the most abundant antioxidants in human diets and the largest and best studied class of polyphenols is flavonoids, which include several thousand compounds. Numerous studies confirm that they exert a protective action on human health and are key components of a healthy and balanced diet. Epidemiological studies correlate flavonoid intake with a reduced incidence of chronic diseases, such as cardiovascular disease, diabetes and cancer. The involvement of reactive oxygen species (ROS) in the etiology of these degenerative conditions has suggested that phytochemicals showing antioxidant activity may contribute to the prevention of these pathologies. The present review deals with phenolic compounds in plants and reports on recent studies. Moreover, the present work includes information on the relationships between the consumption of these compounds, via feeding, and risk of disease occurrence, i.e. the effect on human health. Results obtained on herbs, essential oils, from plants grown in tropical, subtropical and temperate regions, were also reported.", "title": "" }, { "docid": "2165c5f8990234a862bf2dece88ea6eb", "text": "This paper summarizes some recent advances on a set of tasks related to the processing of singing using state-of-the-art deep learning techniques. We discuss their achievements in terms of accuracy and sound quality, and the current challenges, such as availability of data and computing resources. We also discuss the impact that these advances do and will have on listeners and singers when they are integrated in commercial applications.", "title": "" }, { "docid": "09a236e2c9e7be6a879ab5ca84e426c9", "text": "A foot database comprising 3D foot shapes and footwear fitting reports of more than 300 participants is presented. It was primarily acquired to study footwear fitting, though it can also be used to analyse anatomical features of the foot. In fact, we present a technique for automatic detection of several foot anatomical landmarks, together with some empirical results.", "title": "" }, { "docid": "97a3c599c7410a0e12e1784585260b95", "text": "This research focuses on 3D printed carbon-epoxy composite components in which the reinforcing carbon fibers have been preferentially aligned during the micro-extrusion process. Most polymer 3D printing techniques use unreinforced polymers. By adding carbon fiber as a reinforcing material, properties such as mechanical strength, electrical conductivity, and thermal conductivity can be greatly enhanced. However, these properties are significantly influenced by the degree of fiber alignment (or lack thereof). A Design of Experiments (DOE) approach was used to identify significant process parameters affecting preferential fiber alignment in the micro-extrusion process. 
A 2D Fast Fourier Transform (FFT) was used with ImageJ software to quantify the degree of fiber alignment in micro-extruded carbonepoxy pastes. Based on analysis of experimental results, tensile test samples were printed with fibers aligned parallel and perpendicular to the tensile axis. A standard test method for tensile properties of plastic revealed that the 3D printed test coupons with fibers aligned parallel to the tensile axis were significantly better in tensile strength and modulus. Results of this research can be used to 3D print components with locally controlled fiber alignment that is difficult to achieve via conventional composite manufacturing techniques.", "title": "" }, { "docid": "a0c6b1817a08d1be63dff9664852a6b4", "text": "Despite years of HCI research on digital technology in museums, it is still unclear how different interactions impact on visitors'. A comparative evaluation of smart replicas, phone app and smart cards looked at the personal preferences, behavioural change, and the appeal of mobiles in museums. 76 participants used all three interaction modes and gave their opinions in a questionnaire; participants interaction was also observed. The results show the phone is the most disliked interaction mode while tangible interaction (smart card and replica combined) is the most liked. Preference for the phone favour mobility to the detriment of engagement with the exhibition. Different behaviours when interacting with the phone or the tangibles where observed. The personal visiting style appeared to be only marginally affected by the device. Visitors also expect museums to provide the phones against the current trend of developing apps in a \"bring your own device\" approach.", "title": "" }, { "docid": "46cabd836b416be86a18262bc58e9dec", "text": "Encrypting data on client-side before uploading it to a cloud storage is essential for protecting users' privacy. However client-side encryption is at odds with the standard practice of deduplication. Reconciling client-side encryption with cross-user deduplication is an active research topic. We present the first secure cross-user deduplication scheme that supports client-side encryption without requiring any additional independent servers. Interestingly, the scheme is based on using a PAKE (password authenticated key exchange) protocol. We demonstrate that our scheme provides better security guarantees than previous efforts. We show both the effectiveness and the efficiency of our scheme, via simulations using realistic datasets and an implementation.", "title": "" }, { "docid": "9cf0d6e811f7cdafe4316b49d060d192", "text": "Medical imaging plays a central role in a vast range of healthcare practices. The usefulness of 3D visualizations has been demonstrated for many types of treatment planning. Nevertheless, full access to 3D renderings outside of the radiology department is still scarce even for many image-centric specialties. Our work stems from the hypothesis that this under-utilization is partly due to existing visualization systems not taking the prerequisites of this application domain fully into account. We have developed a medical visualization table intended to better fit the clinical reality. The overall design goals were two-fold: similarity to a real physical situation and a very low learning threshold. This paper describes the development of the visualization table with focus on key design decisions. The developed features include two novel interaction components for touch tables. 
A user study including five orthopedic surgeons demonstrates that the system is appropriate and useful for this application domain.", "title": "" }, { "docid": "032526c7855e0895ae88748c309b21c0", "text": "Amazon is a well-known online company that sells products such as books and music. It also tracks the purchasing patterns of a variety of groups including private corporations, government organizations, and geographic areas. Amazon defines each of these groups as a “purchase circle.” For each purchase circle, Amazon lists the bestselling items in the Books, Music, Video, DVDs, and Electronics product categories. Our objective is to create a dynamic visualization of Amazon’s purchase circles that focuses on looking at the Top 10 music titles and genres that are popular in selected U.S. cities. We present a visualization known as CityPrints, a dynamic query-based tool for producing color-coded visual representations of purchase circles data. CityPrints allows users to quickly compare popular titles in different U.S. cities, identify which music genres are popular in a given city, and rank cities according to how popular a given music genre is in that city.", "title": "" }, { "docid": "42e25eaf06693b3544498d959a55bd1e", "text": "A standard view of the semantics of natural language sentences or utterances is that a sentence has a particular logical structure and is assigned truth-conditional content on the basis of that structure. Such a semantics is assumed to be able to capture the logical properties of sentences, including necessary truth, contradiction and valid inference; our knowledge of these properties is taken to be part of our semantic competence as native speakers of the language. The following examples pose a problem for this view of semantics:", "title": "" }, { "docid": "38984b625ac24137b23444f4bd53a312", "text": "Presence Volume /, Number 3. Summer / 992 Reprinted from Espacios 23-24, 1955. © / 992 The Massachusetts Institute of Technology Pandemonium reigns supreme in the film industry. Ever)' studio is hastily converting to its own \"revolutionär)'\" system—Cinerama, Colorama, Panoramic Screen, Cinemascope, Three-D, and Stereophonic Sound. A dozen marquees in Time Square are luring customers into the realm of a \"sensational new experience.\" Everywhere we see the \"initiated\" holding pencils before the winked eyes of the \"uninitiated\" explaining the mysteries of 3-D. The critics are lining up pro and con concluding their articles profoundly with \"after all, it's the story that counts.\" Along with other filmgoers desiring orientation, I have been reading these articles and have sadly discovered that they reflect this confusion rather than illuminate it. It is apparent that the inability to cope with the problem stems from a refusal to adopt a wider frame of reference, and from a meager understanding of the place art has in life generally. All living things engage, on a higher or lower level, in a continuous cycle of orientation and action. For example, an animal on a mountain ledge hears a rumbling sound and sees an avalanche of rocks descending on it. It cries with", "title": "" } ]
scidocsrr
3165ffa080f87ef63bfb66f0d3488a2b
Linguistic Analysis of Toxic Behavior in an Online Video Game
[ { "docid": "01b9bf49c88ae37de79b91edeae20437", "text": "While online, some people self-disclose or act out more frequently or intensely than they would in person. This article explores six factors that interact with each other in creating this online disinhibition effect: dissociative anonymity, invisibility, asynchronicity, solipsistic introjection, dissociative imagination, and minimization of authority. Personality variables also will influence the extent of this disinhibition. Rather than thinking of disinhibition as the revealing of an underlying \"true self,\" we can conceptualize it as a shift to a constellation within self-structure, involving clusters of affect and cognition that differ from the in-person constellation.", "title": "" } ]
[ { "docid": "9dbd988e0e7510ddf4fce9d5a216f9d6", "text": "Tooth abutments can be prepared to receive fixed dental prostheses with different types of finish lines. The literature reports different complications arising from tooth preparation techniques, including gingival recession. Vertical preparation without a finish line is a technique whereby the abutments are prepared by introducing a diamond rotary instrument into the sulcus to eliminate the cementoenamel junction and to create a new prosthetic cementoenamel junction determined by the prosthetic margin. This article describes 2 patients whose dental abutments were prepared to receive ceramic restorations using vertical preparation without a finish line.", "title": "" }, { "docid": "42faf2c0053c9f6a0147fc66c8e4c122", "text": "IN 1921, Gottlieb's discovery of the epithelial attachment of the gingiva opened new horizons which served as the basis for a better understanding of the biology of the dental supporting tissues in health and disease. Three years later his pupils, Orban and Kohler (1924), undertook the task of measuring the epithelial attachment as well as the surrounding tissue relations during the four phases of passive eruption of the tooth. Gottlieb and Orban's descriptions of the epithelial attachment unveiled the exact morphology of this epithelial structure, and clarified the relation of this", "title": "" }, { "docid": "64fc1433249bb7aba59e0a9092aeee5e", "text": "In this paper, we propose two generic text summarization methods that create text summaries by ranking and extracting sentences from the original documents. The first method uses standard IR methods to rank sentence relevances, while the second method uses the latent semantic analysis technique to identify semantically important sentences, for summary creations. Both methods strive to select sentences that are highly ranked and different from each other. This is an attempt to create a summary with a wider coverage of the document's main content and less redundancy. Performance evaluations on the two summarization methods are conducted by comparing their summarization outputs with the manual summaries generated by three independent human evaluators. The evaluations also study the influence of different VSM weighting schemes on the text summarization performances. Finally, the causes of the large disparities in the evaluators' manual summarization results are investigated, and discussions on human text summarization patterns are presented.", "title": "" }, { "docid": "c586e8821061f9714e80caa53a0b40d5", "text": "Many computer vision problems can be posed as learning a low-dimensional subspace from high dimensional data. The low rank matrix factorization (LRMF) represents a commonly utilized subspace learning strategy. Most of the current LRMF techniques are constructed on the optimization problem using L_1 norm and L_2 norm, which mainly deal with Laplacian and Gaussian noise, respectively. To make LRMF capable of adapting more complex noise, this paper proposes a new LRMF model by assuming noise as Mixture of Exponential Power (MoEP) distributions and proposes a penalized MoEP model by combining the penalized likelihood method with MoEP distributions. Such setting facilitates the learned LRMF model capable of automatically fitting the real noise through MoEP distributions. Each component in this mixture is adapted from a series of preliminary super-or sub-Gaussian candidates. 
An Expectation Maximization (EM) algorithm is also designed to infer the parameters involved in the proposed PMoEP model. The advantage of our method is demonstrated by extensive experiments on synthetic data, face modeling and hyperspectral image restoration.", "title": "" }, { "docid": "b7b01049a4cc9cfd2dd951ee1302bfbc", "text": "This article describes the design, implementation, and results of the latest installment of the dermoscopic image analysis benchmark challenge. The goal is to support research and development of algorithms for automated diagnosis of melanoma, the most lethal skin cancer. The challenge was divided into 3 tasks: lesion segmentation, feature detection, and disease classification. Participation involved 593 registrations, 81 pre-submissions, 46 finalized submissions (including a 4-page manuscript), and approximately 50 attendees, making this the largest standardized and comparative study in this field to date. While the official challenge duration and ranking of participants has concluded, the dataset snapshots remain available for further research and development.", "title": "" }, { "docid": "1459f6bf9ebf153277f49a0791e2cf6d", "text": "Content popularity prediction finds application in many areas, including media advertising, content caching, movie revenue estimation, traffic management and macro-economic trends forecasting, to name a few. However, predicting this popularity is difficult due to, among others, the effects of external phenomena, the influence of context such as locality and relevance to users,and the difficulty of forecasting information cascades.\n In this paper we identify patterns of temporal evolution that are generalisable to distinct types of data, and show that we can (1) accurately classify content based on the evolution of its popularity over time and (2) predict the value of the content's future popularity. We verify the generality of our method by testing it on YouTube, Digg and Vimeo data sets and find our results to outperform the K-Means baseline when classifying the behaviour of content and the linear regression baseline when predicting its popularity.", "title": "" }, { "docid": "bbf242fd4722abbba0bc993a636f50c2", "text": "Since the publication of its first edition in 1995, Artificial Intelligence: A Modern Approach has become a classic in our field. Even researchers outside AI, working in other areas of computer science, are familiar with the text and have gained a better appreciation of our field thanks to the efforts of its authors, Stuart Russell of UC Berkeley and Peter Norvig of Google Inc. It has been adopted by over 1000 universities in over 100 countries, and has provided an excellent introduction to AI to several hundreds of thousands of students worldwide. The book not only stands out in the way it provides a clear and comprehensive introduction to almost all aspects of AI for a student entering our field, it also provides a tremendous resource for experienced AI researchers interested in a good introduction to subfields of AI outside of their own area of specialization. In fact, many researchers enjoy reading insightful descriptions of their own area, combined, of course, with the tense moment of checking the author index to see whether their own work made it into the book. Fortunately, due in part to the comprehensive nature of the text, almost all AI researchers who have been around for a few years can be proud to see their own work cited. 
Writing such a high-quality and erudite overview of our field, while distilling key aspects of literally thousands of research papers, is a daunting task that requires a unique talent; the Russell and Norvig author team has clearly handled this challenge exceptionally well. Given the impact of the first edition of the book, a challenge for the authors was to keep such a unique text up-to-date in the face of rapid developments in AI over the past decade and a half. Fortunately, the authors have succeeded admirably in this challenge by bringing out a second edition in 2003 and now a third edition in 2010. Each of these new editions involves major rewrites and additions to the book to keep it fully current. The revisions also provide an insightful overview of the evolution of AI in recent years. The text covers essentially all major areas of AI, while providing ample and balanced coverage of each of the subareas. For certain subfields, part of the text was provided by respective subject experts. In particular, Jitendra Malik and David Forsyth contributed the chapter on computer vision, Sebastian Thrun wrote the chapter on robotics, and Vibhu Mittal helped with the chapter on natural language. Nick Hay, Mehran Sahami, and Ernest Davis contributed to the engaging set of exercises for students. Overall, this book brings together deep knowledge of various facets of AI, from the authors as well as from many experts in various subfields. The topics covered in the book are woven together via the theme of a grand challenge in AI — that of creating an intelligent agent, one that takes “the best possible [rational] action in [any] situation.” Every aspect of AI is considered in the context of such an agent. For instance, the book discusses agents that solve problems through search, planning, and reasoning, agents that are situated in the physical world, agents that learn from observations, agents that interact with the world through vision and perception, and agents that manipulate the physical world through", "title": "" }, { "docid": "739e9e1b31fdbef191d7dde4c7f15efe", "text": "This paper describes the design and human machine interface of an Active Leg EXoskeleton (ALEX) for gait rehabilitation of patients with walking disabilities. The paper proposes force-field controller which can apply suitable forces on the leg to help it move on a desired trajectory. The interaction forces between the subject and the orthosis are designed to be 'assist-as-needed' for safe and effective gait training. Simulations and experimental results with the force-field controller are presented. Experiments have been performed with healthy subjects walking on a treadmill. It was shown that a healthy subject could be retrained in about 45 minutes with ALEX to walk on a treadmill with a significantly altered gait. In the coming months, this powered orthosis will be used for gait training of stroke patients.", "title": "" }, { "docid": "6649b5482a9a5413059ff4f9446223c6", "text": "The emergence of drug resistance to traditional chemotherapy and newer targeted therapies in cancer patients is a major clinical challenge. Reactivation of the same or compensatory signaling pathways is a common class of drug resistance mechanisms. Employing drug combinations that inhibit multiple modules of reactivated signaling pathways is a promising strategy to overcome and prevent the onset of drug resistance. 
However, with thousands of available FDA-approved and investigational compounds, it is infeasible to experimentally screen millions of possible drug combinations with limited resources. Therefore, computational approaches are needed to constrain the search space and prioritize synergistic drug combinations for preclinical studies. In this study, we propose a novel approach for predicting drug combinations through investigating potential effects of drug targets on disease signaling network. We first construct a disease signaling network by integrating gene expression data with disease-associated driver genes. Individual drugs that can partially perturb the disease signaling network are then selected based on a drug-disease network \"impact matrix\", which is calculated using network diffusion distance from drug targets to signaling network elements. The selected drugs are subsequently clustered into communities (subgroups), which are proposed to share similar mechanisms of action. Finally, drug combinations are ranked according to maximal impact on signaling sub-networks from distinct mechanism-based communities. Our method is advantageous compared to other approaches in that it does not require large amounts drug dose response data, drug-induced \"omics\" profiles or clinical efficacy data, which are not often readily available. We validate our approach using a BRAF-mutant melanoma signaling network and combinatorial in vitro drug screening data, and report drug combinations with diverse mechanisms of action and opportunities for drug repositioning.", "title": "" }, { "docid": "e63eac157bd750ca39370fd5b9fdf85e", "text": "Allometric scaling relations, including the 3/4 power law for metabolic rates, are characteristic of all organisms and are here derived from a general model that describes how essential materials are transported through space-filling fractal networks of branching tubes. The model assumes that the energy dissipated is minimized and that the terminal tubes do not vary with body size. It provides a complete analysis of scaling relations for mammalian circulatory systems that are in agreement with data. More generally, the model predicts structural and functional properties of vertebrate cardiovascular and respiratory systems, plant vascular systems, insect tracheal tubes, and other distribution networks.", "title": "" }, { "docid": "90d4c7fb5addd3123746f64fe6ed96f7", "text": "As a trust machine, blockchain was recently introduced to the public to provide an immutable, consensus based and transparent system in the Fintech field. However, there are ongoing efforts to apply blockchain to other fields where trust and value are essential. In this paper, we suggest Gcoin blockchain as the base of the data flow of drugs to create transparent drug transaction data. Additionally, the regulation model of the drug supply chain could be altered from the inspection and examination only model to the surveillance net model, and every unit that is involved in the drug supply chain would be able to participate simultaneously to prevent counterfeit drugs and to protect public health, including patients.", "title": "" }, { "docid": "b8b6dd35c714c1b95cda6f9c9a85598d", "text": "There is significant current interest in the problem of influence maximization: given a directed social network with influence weights on edges and a number k, find k seed nodes such that activating them leads to the maximum expected number of activated nodes, according to a propagation model. Kempe et al. 
showed, among other things, that under the Linear Threshold Model, the problem is NP-hard, and that a simple greedy algorithm guarantees the best possible approximation factor in PTIME. However, this algorithm suffers from various major performance drawbacks. In this paper, we propose Simpath, an efficient and effective algorithm for influence maximization under the linear threshold model that addresses these drawbacks by incorporating several clever optimizations. Through a comprehensive performance study on four real data sets, we show that Simpath consistently outperforms the state of the art w.r.t. running time, memory consumption and the quality of the seed set chosen, measured in terms of expected influence spread achieved.", "title": "" }, { "docid": "7cbe504e03ab802389c48109ed1f1802", "text": "Despite recent breakthroughs in the applications of deep neural networks, one setting that presents a persistent challenge is that of “one-shot learning.” Traditional gradient-based networks require a lot of data to learn, often through extensive iterative training. When new data is encountered, the models must inefficiently relearn their parameters to adequately incorporate the new information without catastrophic interference. Architectures with augmented memory capacities, such as Neural Turing Machines (NTMs), offer the ability to quickly encode and retrieve new information, and hence can potentially obviate the downsides of conventional models. Here, we demonstrate the ability of a memory-augmented neural network to rapidly assimilate new data, and leverage this data to make accurate predictions after only a few samples. We also introduce a new method for accessing an external memory that focuses on memory content, unlike previous methods that additionally use memory locationbased focusing mechanisms.", "title": "" }, { "docid": "a289775f693d6b37f54b13898c242a82", "text": "The large-scale, dynamic, and heterogeneous nature of cloud computing poses numerous security challenges. But the cloud's main challenge is to provide a robust authorization mechanism that incorporates multitenancy and virtualization aspects of resources. The authors present a distributed architecture that incorporates principles from security management and software engineering and propose key requirements and a design model for the architecture.", "title": "" }, { "docid": "c1aa687c4a48cfbe037fe87ed4062dab", "text": "This paper deals with the modelling and control of a single sided linear switched reluctance actuator. This study provide a presentation of modelling and proposes a study on open and closed loop controls for the studied motor. From the proposed model, its dynamic behavior is described and discussed in detail. In addition, a simpler controller based on PID regulator is employed to upgrade the dynamic behavior of the motor. The simulation results in closed loop show a significant improvement in dynamic response compared with open loop. In fact, this simple type of controller offers the possibility to improve the dynamic response for sliding door application.", "title": "" }, { "docid": "2421518a0646cb76d2aac6c33ccd06dc", "text": "Modern technologies enable us to record sequences of online user activity at an unprecedented scale. Although such activity logs are abundantly available, most approaches to recommender systems are based on the rating-prediction paradigm, ignoring temporal and contextual aspects of user behavior revealed by temporal, recurrent patterns. 
In contrast to explicit ratings, such activity logs can be collected in a non-intrusive way and can offer richer insights into the dynamics of user preferences, which could potentially lead more accurate user models. In this work we advocate studying this ubiquitous form of data and, by combining ideas from latent factor models for collaborative filtering and language modeling, propose a novel, flexible and expressive collaborative sequence model based on recurrent neural networks. The model is designed to capture a user’s contextual state as a personalized hidden vector by summarizing cues from a data-driven, thus variable, number of past time steps, and represents items by a real-valued embedding. We found that, by exploiting the inherent structure in the data, our formulation leads to an efficient and practical method. Furthermore, we demonstrate the versatility of our model by applying it to two different tasks: music recommendation and mobility prediction, and we show empirically that our model consistently outperforms static and non-collaborative methods.", "title": "" }, { "docid": "242cc9922b120057fe9f9066f257fb44", "text": "ion Yes No Partly Availability / Mobility No No No Fault tolerance Partly No Partly Flexibility / Event based Yes Partly Partly Uncertainty of information No No No", "title": "" }, { "docid": "5ce645c361a403e5d3888f026061b870", "text": "The level-of-detail techniques presented in this paper enable a comprehensible interactive visualization of large and complex clustered graph layouts either in 2D or 3D. Implicit surfaces are used for the visually simplified representation of vertex clusters, and so-called edge bundles are formed for the simplification of edges. Additionally, dedicated transition techniques are provided for continuously adaptive and adjustable views of graphs that range from very abstract to very detailed representations.", "title": "" }, { "docid": "b1535b6f1c5f1054e2d61c4920d860ba", "text": "This research examines a collaborative solution to a common problem, that of providing help to distributed users. The Answer Garden 2 system provides a secondgeneration architecture for organizational and community memory applications. After describing the need for Answer Garden 2’s functionality, we describe the architecture of the system and two underlying systems, the Cafe ConstructionKit and Collaborative Refinery. We also present detailed descriptions of the collaborative help and collaborative refining facilities in the Answer Garden 2 system.", "title": "" }, { "docid": "c16ff028e77459867eed4c2b9c1f44c6", "text": "Neuroimage analysis usually involves learning thousands or even millions of variables using only a limited number of samples. In this regard, sparse models, e.g. the lasso, are applied to select the optimal features and achieve high diagnosis accuracy. The lasso, however, usually results in independent unstable features. Stability, a manifest of reproducibility of statistical results subject to reasonable perturbations to data and the model (Yu 2013), is an important focus in statistics, especially in the analysis of high dimensional data. In this paper, we explore a nonnegative generalized fused lasso model for stable feature selection in the diagnosis of Alzheimer’s disease. In addition to sparsity, our model incorporates two important pathological priors: the spatial cohesion of lesion voxels and the positive correlation between the features and the disease labels. 
To optimize the model, we propose an efficient algorithm by proving a novel link between total variation and fast network flow algorithms via conic duality. Experiments show that the proposed nonnegative model performs much better in exploring the intrinsic structure of data via selecting stable features compared with other state-of-the-arts. Introduction Neuroimage analysis is challenging due to its high feature dimensionality and data scarcity. Sparse models such as the lasso (Tibshirani 1996) have gained great reputation in statistics and machine learning, and they have been applied to the analysis of such high dimensional data by exploiting the sparsity property in the absence of abundant data. As a major result, automatic selection of relevant variables/features by such sparse formulation achieves promising performance. For example, in (Liu, Zhang, and Shen 2012), the lasso model was applied to the diagnosis of Alzheimer’s disease (AD) and showed better performance than the support vector machine (SVM), which is one of the state-of-the-arts in brain image classification. However, in statistics, it is known that the lasso does not always provide interpretable results because of its instability (Yu 2013). “Stability” here means the reproducibility of statistical results subject to reasonable perturbations to data and Copyright c © 2015, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. the model. (These perturbations include the often used Jacknife, bootstrap and cross-validation.) This unstable behavior of the lasso model is critical in high dimensional data analysis. The resulting irreproducibility of the feature selection are especially undesirable in neuroimage analysis/diagnosis. However, unlike the problems such as registration and classification, the stability issue of feature selection is much less studied in this field. In this paper we propose a model to induce more stable feature selection from high dimensional brain structural Magnetic Resonance Imaging (sMRI) images. Besides sparsity, the proposed model harnesses two important additional pathological priors in brain sMRI: (i) the spatial cohesion of lesion voxels (via inducing fusion terms) and (ii) the positive correlation between the features and the disease labels. The correlation prior is based on the observation that in many brain image analysis problems (such as AD, frontotemporal dementia, corticobasal degeneration, etc), there exist strong correlations between the features and the labels. For example, gray matter of AD is degenerated/atrophied. Therefore, the gray matter values (indicating the volume) are positively correlated with the cognitive scores or disease labels {-1,1}. That is, the less gray matter, the lower the cognitive score. Accordingly, we propose nonnegative constraints on the variables to enforce the prior and name the model as “non-negative Generalized Fused Lasso” (nGFL). It extends the popular generalized fused lasso and enables it to explore the intrinsic structure of data via selecting stable features. To measure feature stability, we introduce the “Estimation Stability” recently proposed in (Yu 2013) and the (multi-set) Dice coefficient (Dice 1945). Experiments demonstrate that compared with existing models, our model selects much more stable (and pathological-prior consistent) voxels. It is worth mentioning that the non-negativeness per se is a very important prior of many practical problems, e.g. (Lee and Seung 1999). 
Although nGFL is proposed to solve the diagnosis of AD in this work, the model can be applied to more general problems. Incorporating these priors makes the problem novel w.r.t the lasso or generalized fused lasso from an optimization standpoint. Although off-the-shelf convex solvers such as CVX (Grant and Boyd 2013) can be applied to solve the optimization, it hardly scales to high-dimensional problems in feasible time. In this regard, we propose an efficient algoProceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence", "title": "" } ]
scidocsrr
74f5819b9000b4783be25b8bd7b1bba2
A latency and fault-tolerance optimizer for online parallel query plans
[ { "docid": "25adc988a57d82ae6de7307d1de5bf71", "text": "The size of data sets being collected and analyzed in the industry for business intelligence is growing rapidly, making traditional warehousing solutions prohibitively expensive. Hadoop [1] is a popular open-source map-reduce implementation which is being used in companies like Yahoo, Facebook etc. to store and process extremely large data sets on commodity hardware. However, the map-reduce programming model is very low level and requires developers to write custom programs which are hard to maintain and reuse. In this paper, we present Hive, an open-source data warehousing solution built on top of Hadoop. Hive supports queries expressed in a SQL-like declarative language - HiveQL, which are compiled into map-reduce jobs that are executed using Hadoop. In addition, HiveQL enables users to plug in custom map-reduce scripts into queries. The language includes a type system with support for tables containing primitive types, collections like arrays and maps, and nested compositions of the same. The underlying IO libraries can be extended to query data in custom formats. Hive also includes a system catalog - Metastore - that contains schemas and statistics, which are useful in data exploration, query optimization and query compilation. In Facebook, the Hive warehouse contains tens of thousands of tables and stores over 700TB of data and is being used extensively for both reporting and ad-hoc analyses by more than 200 users per month.", "title": "" }, { "docid": "e70f261ba4bfa47b476d2bbd4abd4982", "text": "A geometric program (GP) is a type of mathematical optimization problem characterized by objective and constraint functions that have a special form. Recently developed solution methods can solve even large-scale GPs extremely efficiently and reliably; at the same time a number of practical problems, particularly in circuit design, have been found to be equivalent to (or well approximated by) GPs. Putting these two together, we get effective solutions for the practical problems. The basic approach in GP modeling is to attempt to express a practical problem, such as an engineering analysis or design problem, in GP format. In the best case, this formulation is exact; when this isn’t possible, we settle for an approximate formulation. This tutorial paper collects together in one place the basic background material needed to do GP modeling. We start with the basic definitions and facts, and some methods used to transform problems into GP format. We show how to recognize functions and problems compatible with GP, and how to approximate functions or data in a form compatible with GP (when this is possible). We give some simple and representative examples, and also describe some common extensions of GP, along with methods for solving (or approximately solving) them. Information Systems Laboratory, Department of Electrical Engineering, Stanford University, Stanford CA 94305 (boyd@stanford.edu) Information Systems Laboratory, Department of Electrical Engineering, Stanford University, Stanford,CA 94305 (sjkim@stanford.edu) Department of Electrical Engineering, University of California, Los Angeles, CA 90095 (vandenbe@ucla.edu) Clear Shape Technologies, Inc., Sunnyvale, CA 94086 (arash@clearshape.com)", "title": "" } ]
[ { "docid": "5921f0049596d52bd3aea33e4537d026", "text": "Various lines of evidence indicate that men generally experience greater sexual arousal (SA) to erotic stimuli than women. Yet, little is known regarding the neurobiological processes underlying such a gender difference. To investigate this issue, functional magnetic resonance imaging was used to compare the neural correlates of SA in 20 male and 20 female subjects. Brain activity was measured while male and female subjects were viewing erotic film excerpts. Results showed that the level of perceived SA was significantly higher in male than in female subjects. When compared to viewing emotionally neutral film excerpts, viewing erotic film excerpts was associated, for both genders, with bilateral blood oxygen level dependent (BOLD) signal increases in the anterior cingulate, medial prefrontal, orbitofrontal, insular, and occipitotemporal cortices, as well as in the amygdala and the ventral striatum. Only for the group of male subjects was there evidence of a significant activation of the thalamus and hypothalamus, a sexually dimorphic area of the brain known to play a pivotal role in physiological arousal and sexual behavior. When directly compared between genders, hypothalamic activation was found to be significantly greater in male subjects. Furthermore, for male subjects only, the magnitude of hypothalamic activation was positively correlated with reported levels of SA. These findings reveal the existence of similarities and dissimilarities in the way the brain of both genders responds to erotic stimuli. They further suggest that the greater SA generally experienced by men, when viewing erotica, may be related to the functional gender difference found here with respect to the hypothalamus.", "title": "" }, { "docid": "19d8aff7e6c7d20f4aa17d33d3b46eee", "text": "PURPOSE\nTo evaluate the usefulness of transperineal sonography of the anal sphincter complex for differentiating between an anteriorly displaced anus, which is a normal anatomical variant, and a low-type imperforate anus with perineal fistula, which is a pathological developmental abnormality requiring surgical repair.\n\n\nMATERIALS AND METHODS\nTransperineal sonography was performed with a 13-MHz linear-array transducer on 8 infants (1 day-5.3 months old) who were considered on clinical grounds to have an anteriorly displaced anus and on 9 infants (0-8 months old) with a low-type imperforate anus and perineal fistula confirmed at surgery. The anal sphincter complex was identified and the relationship between the anal canal and the anal sphincter complex was evaluated.\n\n\nRESULTS\nTransperineal sonography was feasible for all children without any specific preparation. An anal canal running within an intact sphincter complex was identified in all infants with an anteriorly displaced anus (n = 8). In 8 of 9 infants with a low-type imperforate anus, a perineal fistula running outside the anal sphincter complex was correctly diagnosed by transperineal sonography. 
In one infant with a low-type imperforate anus, transperineal sonography revealed a deficient anal sphincter complex.\n\n\nCONCLUSION\nTransperineal sonography appears to be a useful non-invasive imaging technique for assessing congenital anorectal abnormalities in neonates and infants, allowing the surgeon to select infants who would benefit from surgical repair.", "title": "" }, { "docid": "7fd7aa4b2c721a06e3d21a2e5fe608e5", "text": "Self-organization can be approached in terms of developmental processes occurring within and between component systems of temperament. Within-system organization involves progressive shaping of cortical representations by subcortical motivational systems. As cortical representations develop, they feed back to provide motivational systems with enhanced detection and guidance capabilities. These reciprocal influences may amplify the underlying motivational functions and promote excessive impulsivity or anxiety. However, these processes also depend upon interactions arising between motivational and attentional systems. We discuss these between-system effects by considering the regulation of approach motivation by reactive attentional processes related to fear and by more voluntary processes related to effortful control. It is suggested than anxious and impulsive psychopathology may reflect limitations in these dual means of control, which can take the form of overregulation as well as underregulation.", "title": "" }, { "docid": "2de9a9887c9fe3bc1c750e0fb81934d7", "text": "An axial-mode helical antenna backed by a perfect electric conductor (PEC reflector) is optimized to radiate a circularly polarized (CP) wave, using the finite-difference time-domain method (FDTDM). After the optimization, the PEC reflector is replaced with a corrugated reflector. The effects of the corrugated reflector on the current distribution along the helical arm and the radiation pattern are investigated. A reduction in the backward radiation is attributed to the reduction in the current flowing over the rear surface of the corrugated reflector. A spiral antenna backed by a PEC reflector of finite extent is also analyzed using the FDTDM. As the antenna height decreases, the reverse current toward the feed point increases, resulting in deterioration of the axial ratio. To overcome this deterioration, the PEC reflector is replaced with an electromagnetic band-gap (EBG) reflector composed of mushroom-like elements. Analysis reveals that the spiral radiates a CP wave even when the spiral is located close to the reflector (0.06 wavelength above the EBG surface). The input impedance for the EBG reflector is more stable over a wide frequency band than that for the PEC reflector.", "title": "" }, { "docid": "ea1b0f4e82ac9ad8593c5e4ba1567a59", "text": "This paper describes an emerging shared repository of large-text resources for creating word vectors, including pre-processed corpora and pre-trained vectors for a range of frameworks and configurations. This will facilitate reuse, rapid experimentation, and replicability of results.", "title": "" }, { "docid": "6527c10c822c2446b7be928f86d3c8f8", "text": "In this paper we present a novel algorithm for automatic analysis, transcription, and parameter extraction from isolated polyphonic guitar recordings. In addition to general score-related information such as note onset, duration, and pitch, instrumentspecific information such as the plucked string, the applied plucking and expression styles are retrieved automatically. 
For this purpose, we adapted several state-of-the-art approaches for onset and offset detection, multipitch estimation, string estimation, feature extraction, and multi-class classification. Furthermore we investigated a robust partial tracking algorithm with respect to inharmonicity, an extensive extraction of novel and known audio features as well as the exploitation of instrument-based knowledge in the form of plausability filtering to obtain more reliable prediction. Our system achieved very high accuracy values of 98 % for onset and offset detection as well as multipitch estimation. For the instrument-related parameters, the proposed algorithm also showed very good performance with accuracy values of 82 % for the string number, 93 % for the plucking style, and 83 % for the expression style. Index Terms playing techniques, plucking style, expression style, multiple fundamental frequency estimation, string classification, fretboard position, fingering, electric guitar, inharmonicity coefficient, tablature", "title": "" }, { "docid": "4ef27acb8442047deb68ad6715ffa03d", "text": "OBJECTIVE\nThere has been uncertainty about whether refugees and asylum seekers with PTSD can be treated effectively in standard psychiatric settings in industrialized countries. In this study, Narrative Exposure Therapy (NET) was compared to Treatment As Usual (TAU) in 11 general psychiatric health care units in Norway. The focus was on changes in symptom severity and in the diagnostic status for PTSD and depression.\n\n\nMETHOD\nRefugees and asylum seekers fulfilling the DSM-IV criteria for PTSD (N = 81) were randomized with an a-priori probability of 2:1 to either NET (N = 51) or TAU (N = 30). The patients were assessed with Clinician Administered PTSD Scale, Hamilton rating scale for depression and the MINI Neuropsychiatric Interview before treatment, and again at one and six months after the completion.\n\n\nRESULTS\nBoth NET and TAU gave clinically relevant symptom reduction both in PTSD and in depression. NET gave significantly more symptom reduction compared to TAU as well as significantly more reduction in participants with PTSD diagnoses. No difference in treatment efficacy was found between refugees and asylum seekers.\n\n\nCONCLUSIONS\nThe study indicated that refugees and asylum seekers can be treated successfully for PTSD and depression in the general psychiatric health care system; NET appeared to be a promising treatment for both groups.", "title": "" }, { "docid": "dcff0e9e62d245212554f639d5b152bf", "text": "The pull-based development model, enabled by git and popularised by collaborative coding platforms like BitBucket, Gitorius, and GitHub, is widely used in distributed software teams. While this model lowers the barrier to entry for potential contributors (since anyone can submit pull requests to any repository), it also increases the burden on integrators (i.e., members of a project's core team, responsible for evaluating the proposed changes and integrating them into the main development line), who struggle to keep up with the volume of incoming pull requests. In this paper we report on a quantitative study that tries to resolve which factors affect pull request evaluation latency in GitHub. 
Using regression modeling on data extracted from a sample of GitHub projects using the Travis-CI continuous integration service, we find that latency is a complex issue, requiring many independent variables to explain adequately.", "title": "" }, { "docid": "1ae92f60b2df645d60a3a45af2441edf", "text": "A comprehensive theory of cerebellar function is presented, which ties together the known anatomy and physiology of the cerebellum into a pattern-recognition data processing system. The cerebellum is postulated to be functionally and structurally equivalent to a modification of the classical Perceptron pattern-classification device. I t i s suggested that the mossy fiber-+ granule c e l l-+ Golgi c e l l input network performs an expansion recoding that enhances the pattern-discrimination capacity and learning speed of the cerebellar Purkinje response cells. Parallel fiber synapses of the dendritic spines of Purkinje cells, basket cells, and stellate cells are all postulated to be specifically variable in response to climbing fiber activity. I t is argued that this variability i s the mechanism of pattern storage. I t i s demonstrated that, in order for the learning process to be stable, pattern storage must be accomplished principally by weakening synaptic weights rather than by strengthening them.", "title": "" }, { "docid": "26f393df2f3e7c16db2ee10d189efb37", "text": "Recently a few systems for automatically solving math word problems have reported promising results. However, the datasets used for evaluation have limitations in both scale and diversity. In this paper, we build a large-scale dataset which is more than 9 times the size of previous ones, and contains many more problem types. Problems in the dataset are semiautomatically obtained from community question-answering (CQA) web pages. A ranking SVM model is trained to automatically extract problem answers from the answer text provided by CQA users, which significantly reduces human annotation cost. Experiments conducted on the new dataset lead to interesting and surprising results.", "title": "" }, { "docid": "685a3c1eee19ee71c36447c49aca757f", "text": "Advanced diagnostic technologies, such as polymerase chain reaction (PCR) and enzyme-linked immunosorbent assay (ELISA), have been widely used in well-equipped laboratories. However, they are not affordable or accessible in resource-limited settings due to the lack of basic infrastructure and/or trained operators. Paper-based diagnostic technologies are affordable, user-friendly, rapid, robust, and scalable for manufacturing, thus holding great potential to deliver point-of-care (POC) diagnostics to resource-limited settings. In this review, we present the working principles and reaction mechanism of paper-based diagnostics, including dipstick assays, lateral flow assays (LFAs), and microfluidic paper-based analytical devices (μPADs), as well as the selection of substrates and fabrication methods. Further, we report the advances in improving detection sensitivity, quantification readout, procedure simplification and multi-functionalization of paper-based diagnostics, and discuss the disadvantages of paper-based diagnostics. 
We envision that miniaturized and integrated paper-based diagnostic devices with the sample-in-answer-out capability will meet the diverse requirements for diagnosis and treatment monitoring at the POC.", "title": "" }, { "docid": "d951e1bc1822fd147ffb333d2d71c1ff", "text": "In order to solve the problem of precise temperature control, the thermoelectric cooler (TEC) principle widely used is analyzed for the design of the whole control process and selection of control parameters, and then accurate simulation model of the TEC is established in Proteus simulation software. Moreover, combined with the traditional circuit simulation model, the temperature control loop is designed, and the response characteristics of the system are tested using an input signal similar to the unit-step function to achieve the precise temperature control. Simulation results show that the proposed control circuits can precisely convert error signal to output voltage sent to TEC model, and TEC model behaves approximately like a two-pole system. The first pole starts at 20mHz and a second pole at 1Hz.", "title": "" }, { "docid": "682f68ccb2a00b9c1ccc93caf587cb2d", "text": "To evaluate the feasibility of coating formulated recombinant human erythropoietin alfa (EPO) on a titanium microneedle transdermal delivery system, ZP-EPO, and assess preclinical patch delivery performance. Formulation rheology and surface activity were assessed by viscometry and contact angle measurement. EPO liquid formulation was coated onto titanium microneedles by dip-coating and drying. Stability of coated EPO was assessed by SEC-HPLC, CZE and potency assay. Preclinical in vivo delivery and pharmacokinetic studies were conducted in rats with EPO-coated microneedle patches and compared to subcutaneous EPO injection. Studies demonstrated successful EPO formulation development and coating on microneedle arrays. ZP-EPO patch was stable at 25°C for at least 3 months with no significant change in % aggregates, isoforms, or potency. Preclinical studies in rats showed the ZP-EPO microneedle patches, coated with 750 IU to 22,000 IU, delivered with high efficiency (75–90%) with a linear dose response. PK profile was similar to subcutaneous injection of commercial EPO. Results suggest transdermal microneedle patch delivery of EPO is feasible and may offer an efficient, dose-adjustable, patient-friendly alternative to current intravenous or subcutaneous routes of administration.", "title": "" }, { "docid": "b14007d127629d7082d9bb5169140d0e", "text": "The term \"selection bias\" encompasses various biases in epidemiology. We describe examples of selection bias in case-control studies (eg, inappropriate selection of controls) and cohort studies (eg, informative censoring). We argue that the causal structure underlying the bias in each example is essentially the same: conditioning on a common effect of 2 variables, one of which is either exposure or a cause of exposure and the other is either the outcome or a cause of the outcome. This structure is shared by other biases (eg, adjustment for variables affected by prior exposure). A structural classification of bias distinguishes between biases resulting from conditioning on common effects (\"selection bias\") and those resulting from the existence of common causes of exposure and outcome (\"confounding\"). 
This classification also leads to a unified approach to adjust for selection bias.", "title": "" }, { "docid": "06c51b2a995d4ccbddd85898afa36ae8", "text": "Denial of Service (DoS henceforth) attack is performed solely with the intention to deny the legitimate users to access services. Since DoS attack is usually performed by means of bots, automated software. These bots send a large number of fake requests to the server which exceeds server buffer capacity which results in DoS attack. In this paper we propose an idea to prevent DoS attack on web-sites which ask for user credentials before it allows them to access resources. Our approach is based on CAPTCHA verification. We verify CAPTCHA submitted by user before allowing the access to credentials page. The CAPTCHA would consist of variety of patterns that would be distinct in nature and are randomly generated during each visit to the webpage. Most of the current web sites use a common methodology to generate all its CAPTCHAs. The bots usually take advantage of this approach since bots are able to decipher those CAPTCHAs. A set of distinct CAPTCHA patterns prevents bots to decipher it and consequently helps to reduce the generation of illicit traffic. This preserves the server bandwidth to allow the legitimate users to access the site.", "title": "" }, { "docid": "70c90e6ed53fe4dbe489e5592aab201c", "text": "Many-objective optimization has posed a great challenge to the classical Pareto dominance-based multiobjective evolutionary algorithms (MOEAs). In this paper, an evolutionary algorithm based on a new dominance relation is proposed for many-objective optimization. The proposed evolutionary algorithm aims to enhance the convergence of the recently suggested nondominated sorting genetic algorithm III by exploiting the fitness evaluation scheme in the MOEA based on decomposition, but still inherit the strength of the former in diversity maintenance. In the proposed algorithm, the nondominated sorting scheme based on the introduced new dominance relation is employed to rank solutions in the environmental selection phase, ensuring both convergence and diversity. The proposed algorithm is evaluated on a number of well-known benchmark problems having 3-15 objectives and compared against eight state-of-the-art algorithms. The extensive experimental results show that the proposed algorithm can work well on almost all the test functions considered in this paper, and it is compared favorably with the other many-objective optimizers. Additionally, a parametric study is provided to investigate the influence of a key parameter in the proposed algorithm.", "title": "" }, { "docid": "23c21581171fb00c611b41fe3ca5a9db", "text": "Design, analysis and optimization of a parallel-coupled microstrip bandpass filter for FM Wireless applications is presented in this paper. The filter is designed and optimized at a center frequency of 6 GHz. Half wavelength long resonators and admittance inverters are used to design the filter. A brief description of coupled microstrip lines and immittance inverters is also included. Design equations to compute physical dimensions of the filter are given in the paper. The filter is simulated using ADS (Advanced Design System) design software and implemented on Roger 4003C substrate.", "title": "" }, { "docid": "4185d65971d7345afbd7189368ed9303", "text": "Ticket annotation and search has become an essential research subject for the successful delivery of IT operational analytics. 
Millions of tickets are created yearly to address business users' IT related problems. In IT service desk management, it is critical to first capture the pain points for a group of tickets to determine root cause; secondly, to obtain the respective distributions in order to layout the priority of addressing these pain points. An advanced ticket analytics system utilizes a combination of topic modeling, clustering and Information Retrieval (IR) technologies to address the above issues and the corresponding architecture which integrates of these features will allow for a wider distribution of this technology and progress to a significant financial benefit for the system owner. Topic modeling has been used to extract topics from given documents; in general, each topic is represented by a unigram language model. However, it is not clear how to interpret the results in an easily readable/understandable way until now. Due to the inefficiency to render top concepts using existing techniques, in this paper, we propose a probabilistic framework, which consists of language modeling (especially the topic models), Part-Of-Speech (POS) tags, query expansion, retrieval modeling and so on for the practical challenge. The rigorously empirical experiments demonstrate the consistent and utility performance of the proposed method on real datasets.", "title": "" }, { "docid": "afe0c431852191bc2316d1c5091f239b", "text": "Dynamic models of pneumatic artificial muscles (PAMs) are important for simulation of the movement dynamics of the PAM-based actuators and also for their control. The simple models of PAMs are geometric models, which can be relatively easy used under certain simplification for obtaining of the static and dynamic characteristics of the pneumatic artificial muscle. An advanced geometric muscle model is used in paper for describing the dynamic behavior of PAM based antagonistic actuator.", "title": "" }, { "docid": "d464711e6e07b61896ba6efe2bbfa5e4", "text": "This paper presents a simple model for body-shadowing in off-body and body-to-body channels. The model is based on a body shadowing pattern associated with the on-body antenna, represented by a cosine function whose amplitude parameter is calculated from measurements. This parameter, i.e the maximum body-shadowing loss, is found to be linearly dependent on distance. The model was evaluated against a set of off-body channel measurements at 2.45 GHz in an indoor office environment, showing a good fit. The coefficient of determination obtained for the linear model of the maximum body-shadowing loss is greater than 0.6 in all considered scenarios, being higher than 0.8 for the ones with a static user.", "title": "" } ]
scidocsrr
869ac40f19f22de4cd961c145f3838e3
Ultra-Wideband Patch Antenna for K-Band Applications
[ { "docid": "71a9394d995cefb8027bed3c56ec176c", "text": "A broadband microstrip-fed printed antenna is proposed for phased antenna array systems. The antenna consists of two parallel-modified dipoles of different lengths. The regular dipole shape is modified to a quasi-rhombus shape by adding two triangular patches. Using two dipoles helps maintain stable radiation patterns close to their resonance frequencies. A modified array configuration is proposed to further enhance the antenna radiation characteristics and usable bandwidth. Scanning capabilities are studied for a four-element array. The proposed antenna provides endfire radiation patterns with high gain, high front-to-back (F-to-B) ratio, low cross-polarization level, wide beamwidth, and wide scanning angles in a wide bandwidth of 103%", "title": "" } ]
[ { "docid": "02d5d1d867b35aa11bc4413c924a5695", "text": "Purpose – The purpose of this research is to help knowledge managers systematically grasp ‘‘knowledge about management knowledge’’ and get a ‘‘deep and full’’ understanding of the nature, scope and methodologies of knowledge management. Design/methodology/approach – Through presenting a variety of perspectives on knowledge, management, and knowledge management, the article explores the essence of knowledge management in organizations from a perspective of critical systems thinking. Findings – Knowledge management in business organizations has the task of managing the activities of knowledge workers or the transformation and interaction of organizational ‘‘static substance knowledge’’ and ‘‘dynamic process knowledge’’ for ‘‘products, services, and practical process innovation’’ and, at the same time, ‘‘creating new or justifying existing organizational systematic knowledge’’. Knowledge management is not simply about recording and manipulating explicit knowledge, but needs to address that which is implicit, and from which benefit can therefore be derived only through process rather than content. Originality/value – The comprehensive review and classification of various management theories will expand both knowledge managers’ and knowledge workers’ understanding of the subject and provide a foundation for building a knowledge management toolkit in practice.", "title": "" }, { "docid": "6bd608ff22eb9eec7ef82a6a312ae5b9", "text": "We propose a Topic Compositional Neural Language Model (TCNLM), a novel method designed to simultaneously capture both the global semantic meaning and the local wordordering structure in a document. The TCNLM learns the global semantic coherence of a document via a neural topic model, and the probability of each learned latent topic is further used to build a Mixture-ofExperts (MoE) language model, where each expert (corresponding to one topic) is a recurrent neural network (RNN) that accounts for learning the local structure of a word sequence. In order to train the MoE model efficiently, a matrix factorization method is applied, by extending each weight matrix of the RNN to be an ensemble of topic-dependent weight matrices. The degree to which each member of the ensemble is used is tied to the document-dependent probability of the corresponding topics. Experimental results on several corpora show that the proposed approach outperforms both a pure RNN-based model and other topic-guided language models. Further, our model yields sensible topics, and also has the capacity to generate meaningful sentences conditioned on given topics.", "title": "" }, { "docid": "bf84e66bab43950f0d4d8c2d465b907e", "text": "Paraphrases are sentences or phrases that convey the same meaning using different wording. Although the logical definition of paraphrases requires strict semantic equivalence, linguistics accepts a broader, approximate, equivalence—thereby allowing far more examples of “quasi-paraphrase.” But approximate equivalence is hard to define. Thus, the phenomenon of paraphrases, as understood in linguistics, is difficult to characterize. In this article, we list a set of 25 operations that generate quasi-paraphrases. We then empirically validate the scope and accuracy of this list by manually analyzing random samples of two publicly available paraphrase corpora. 
We provide the distribution of naturally occurring quasi-paraphrases in English text.", "title": "" }, { "docid": "8d04e80c5a70ed020e7cfda3b2c02850", "text": "We present a framework to segment cultural and natural features, given 3D aerial scans of a large urban area, and (optionally) registered ground level scans of the same area. This system provides a primary step to achieve the ultimate goal of detecting every object from a large number of varied categories, from antenna to power plants. Our framework first identifies local patches of the ground surface and roofs of buildings. This is accomplished by tensor voting that infers surface orientation from neighboring regions as well as local 3D points. We then group adjacent planar surfaces with consistent pose to find surface segments and classify them as either the terrain or roofs of buildings. The same approach is also applied to delineate vertical faces of buildings, as well as free-standing vertical structures such as fences. The inferred large structures are then used as geometric context to segment linear structures, such as power lines, and structures attached to walls and roofs from remaining unclassified 3D points in the scene. We demonstrate our system on real LIDAR datasets acquired from typical urban regions with areas of a few square kilometers each, and provide a quantitative analysis of performance using externally provided ground truth.", "title": "" }, { "docid": "452cdeeea582159a7e69dbf58a4553be", "text": "From early adolescence through adulthood, women are twice as likely as men to experience depression. Many different explanations for this gender difference in depression have been offered, but none seems to fully explain it. Recent research has focused on gender differences in stress responses, and in exposure to certain stressors. I review this research and describe how gender differences in stress experiences and stress reactivity may interact to create women’s greater vulnerability to depression.", "title": "" }, { "docid": "bf9e56e0e125e922de95381fb5520569", "text": "Today, many private households as well as broadcasting or film companies own large collections of digital music plays. These are time series that differ from, e.g., weather reports or stocks market data. The task is normally that of classification, not prediction of the next value or recognizing a shape or motif. New methods for extracting features that allow to classify audio data have been developed. However, the development of appropriate feature extraction methods is a tedious effort, particularly because every new classification task requires tailoring the feature set anew. This paper presents a unifying framework for feature extraction from value series. Operators of this framework can be combined to feature extraction methods automatically, using a genetic programming approach. The construction of features is guided by the performance of the learning classifier which uses the features. Our approach to automatic feature extraction requires a balance between the completeness of the methods on one side and the tractability of searching for appropriate methods on the other side. In this paper, some theoretical considerations illustrate the trade-off. After the feature extraction, a second process learns a classifier from the transformed data. 
The practical use of the methods is shown by two types of experiments: classification of genres and classification according to user preferences.", "title": "" }, { "docid": "5b4e2380172b90c536eb974268a930b6", "text": "This paper addresses the problem of road scene segmentation in conventional RGB images by exploiting recent advances in semantic segmentation via convolutional neural networks (CNNs). Segmentation networks are very large and do not currently run at interactive frame rates. To make this technique applicable to robotics we propose several architecture refinements that provide the best trade-off between segmentation quality and runtime. This is achieved by a new mapping between classes and filters at the expansion side of the network. The network is trained end-to-end and yields precise road/lane predictions at the original input resolution in roughly 50ms. Compared to the state of the art, the network achieves top accuracies on the KITTI dataset for road and lane segmentation while providing a 20× speed-up. We demonstrate that the improved efficiency is not due to the road segmentation task. Also on segmentation datasets with larger scene complexity, the accuracy does not suffer from the large speed-up.", "title": "" }, { "docid": "1d64f04b9c3d1579cbff94a2d8dce623", "text": "In the present work, the performance of indoor deployment solutions based on the combination of Distributed Antenna Systems (DAS) and MIMO transmission techniques (Interleaved-MIMO DAS solutions) is investigated for high-order MIMO schemes with the aid of LTE link level simulations. Planning guidelines for linear and 2D coverage solutions based on Interleaved-MIMO DAS are then derived.", "title": "" }, { "docid": "d8d102c3d6ac7d937bb864c69b4d3cd9", "text": "Question Answering (QA) systems are becoming the inspiring model for the future of search engines. While recently, underlying datasets for QA systems have been promoted from unstructured datasets to structured datasets with highly semantic-enriched metadata, but still question answering systems involve serious challenges which cause to be far beyond desired expectations. In this paper, we raise the challenges for building a Question Answering (QA) system especially with the focus of employing structured data (i.e. knowledge graph). This paper provide an exhaustive insight of the known challenges, so far. Thus, it helps researchers to easily spot open rooms for the future research agenda.", "title": "" }, { "docid": "74e0fb4cb7b57d8b84eed3f895a39ef3", "text": "High-throughput data production has revolutionized molecular biology. However, massive increases in data generation capacity require analysis approaches that are more sophisticated, and often very computationally intensive. Thus, making sense of high-throughput data requires informatics support. Galaxy (http://galaxyproject.org) is a software system that provides this support through a framework that gives experimentalists simple interfaces to powerful tools, while automatically managing the computational details. Galaxy is distributed both as a publicly available Web service, which provides tools for the analysis of genomic, comparative genomic, and functional genomic data, or a downloadable package that can be deployed in individual laboratories. 
Either way, it allows experimentalists without informatics or programming expertise to perform complex large-scale analysis with just a Web browser.", "title": "" }, { "docid": "adad5599122e63cde59322b7ba46461b", "text": "Understanding how images of objects and scenes behave in response to specific ego-motions is a crucial aspect of proper visual development, yet existing visual learning methods are conspicuously disconnected from the physical source of their images. We propose to exploit proprioceptive motor signals to provide unsupervised regularization in convolutional neural networks to learn visual representations from egocentric video. Specifically, we enforce that our learned features exhibit equivariance i.e. they respond systematically to transformations associated with distinct ego-motions. With three datasets, we show that our unsupervised feature learning system significantly outperforms previous approaches on visual recognition and next-best-view prediction tasks. In the most challenging test, we show that features learned from video captured on an autonomous driving platform improve large-scale scene recognition in a disjoint domain.", "title": "" }, { "docid": "42ecca95c15cd1f92d6e5795f99b414a", "text": "Personalized tag recommendation systems recommend a list of tags to a user when he is about to annotate an item. It exploits the individual preference and the characteristic of the items. Tensor factorization techniques have been applied to many applications, such as tag recommendation. Models based on Tucker Decomposition can achieve good performance but require a lot of computation power. On the other hand, models based on Canonical Decomposition can run in linear time and are more feasible for online recommendation. In this paper, we propose a novel method for personalized tag recommendation, which can be considered as a nonlinear extension of Canonical Decomposition. Different from linear tensor factorization, we exploit Gaussian radial basis function to increase the model’s capacity. The experimental results show that our proposed method outperforms the state-of-the-art methods for tag recommendation on real datasets and perform well even with a small number of features, which verifies that our models can make better use of features.", "title": "" }, { "docid": "dc5b6cd087a99d7dd123a69b1991eb3e", "text": "Current top-N recommendation methods compute the recommendations by taking into account only relations between pairs of items, thus leading to potential unused information when higher-order relations between the items exist. Past attempts to incorporate the higherorder information were done in the context of neighborhood-based methods. However, in many datasets, they did not lead to significant improvements in the recommendation quality. We developed a top-N recommendation method that revisits the issue of higher-order relations, in the context of the model-based Sparse LInear Method (SLIM). The approach followed (Higher-Order Sparse LInear Method, or HOSLIM) learns two sparse aggregation coefficient matrices S and S′ that capture the item-item and itemset-item similarities, respectively. Matrix S′ allows HOSLIM to capture higher-order relations, whose complexity is determined by the length of the itemset. Following the spirit of SLIM, matrices S and S′ are estimated using an elastic net formulation, which promotes model sparsity. 
We conducted extensive experiments which show that higher-order interactions exist in real datasets and when incorporated in the HOSLIM framework, the recommendations made are improved. The experimental results show that the greater the presence of higher-order relations, the more substantial the improvement in recommendation quality is, over the best existing methods. In addition, our experiments show that the performance of HOSLIM remains good when we select S′ such that its number of nonzeros is comparable to S, which reduces the time required to compute the recommendations.", "title": "" }, { "docid": "879af50edd27c74bde5b656d0421059a", "text": "In this thesis we present an approach to adapt the Single Shot multibox Detector (SSD) for face detection. Our experiments are performed on the WIDER dataset which contains a large amount of small faces (faces of 50 pixels or less). The results show that the SSD method performs poorly on the small/hard subset of this dataset. We analyze the influence of increasing the resolution during inference and training time. Building on this analysis we present two additions to the SSD method. The first addition is changing the SSD architecture to an image pyramid architecture. The second addition is creating a selection criteria on each of the different branches of the image pyramid architecture. The results show that increasing the resolution, even during inference, increases the performance for the small/hard subset. By combining resolutions in an image pyramid structure we observe that the performance keeps consistent across different sizes of faces. Finally, the results show that adding a selection criteria on each branch of the image pyramid further increases performance, because the selection criteria negates the competing behaviour of the image pyramid. We conclude that our approach not only increases performance on the small/hard subset of the WIDER dataset but keeps on performing well on the large subset.", "title": "" }, { "docid": "5034984717b3528f7f47a1f88a3b1310", "text": "ALL RIGHTS RESERVED. This document contains material protected under International and Federal Copyright Laws and Treaties. Any unauthorized reprint or use of this material is prohibited. No part of this document may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system without express written permission from the author / publisher.", "title": "" }, { "docid": "36356a91bc84888cb2dd6180983fdfc5", "text": "We recently showed that Long Short-Term Memory (LSTM) recurrent neural networks (RNNs) outperform state-of-the-art deep neural networks (DNNs) for large scale acoustic modeling where the models were trained with the cross-entropy (CE) criterion. It has also been shown that sequence discriminative training of DNNs initially trained with the CE criterion gives significant improvements. In this paper, we investigate sequence discriminative training of LSTM RNNs in a large scale acoustic modeling task. We train the models in a distributed manner using asynchronous stochastic gradient descent optimization technique. We compare two sequence discriminative criteria – maximum mutual information and state-level minimum Bayes risk, and we investigate a number of variations of the basic training strategy to better understand issues raised by both the sequential model, and the objective function. 
We obtain significant gains over the CE trained LSTM RNN model using sequence discriminative training techniques.", "title": "" }, { "docid": "04845ef4f3c878b35c7b34ea1a3d228d", "text": "OBJECTIVES\nBased on 1984 data developed from reviews of medical records of patients treated in New York hospitals, the Institute of Medicine estimated that up to 98,000 Americans die each year from medical errors. The basis of this estimate is nearly 3 decades old; herein, an updated estimate is developed from modern studies published from 2008 to 2011.\n\n\nMETHODS\nA literature review identified 4 limited studies that used primarily the Global Trigger Tool to flag specific evidence in medical records, such as medication stop orders or abnormal laboratory results, which point to an adverse event that may have harmed a patient. Ultimately, a physician must concur on the findings of an adverse event and then classify the severity of patient harm.\n\n\nRESULTS\nUsing a weighted average of the 4 studies, a lower limit of 210,000 deaths per year was associated with preventable harm in hospitals. Given limitations in the search capability of the Global Trigger Tool and the incompleteness of medical records on which the Tool depends, the true number of premature deaths associated with preventable harm to patients was estimated at more than 400,000 per year. Serious harm seems to be 10- to 20-fold more common than lethal harm.\n\n\nCONCLUSIONS\nThe epidemic of patient harm in hospitals must be taken more seriously if it is to be curtailed. Fully engaging patients and their advocates during hospital care, systematically seeking the patients' voice in identifying harms, transparent accountability for harm, and intentional correction of root causes of harm will be necessary to accomplish this goal.", "title": "" }, { "docid": "149de84d7cbc9ea891b4b1297957ade7", "text": "Deep convolutional neural networks (CNNs) have had a major impact in most areas of image understanding, including object category detection. In object detection, methods such as R-CNN have obtained excellent results by integrating CNNs with region proposal generation algorithms such as selective search. In this paper, we investigate the role of proposal generation in CNN-based detectors in order to determine whether it is a necessary modelling component, carrying essential geometric information not contained in the CNN, or whether it is merely a way of accelerating detection. We do so by designing and evaluating a detector that uses a trivial region generation scheme, constant for each image. Combined with SPP, this results in an excellent and fast detector that does not require to process an image with algorithms other than the CNN itself. We also streamline and simplify the training of CNN-based detectors by integrating several learning steps in a single algorithm, as well as by proposing a number of improvements that accelerate detection.", "title": "" }, { "docid": "b7758121f5c24dd87e6c5fd795140066", "text": "Conflicts between security and usability goals can be avoided by considering the goals together throughout an iterative design process. A successful design involves addressing users' expectations and inferring authorization based on their acts of designation.", "title": "" }, { "docid": "3f826b00b909e409c82f3b4c89842a9f", "text": "This paper presents a monocular vision-based preceding vehicle detection system using Histogram of Oriented Gradient (HOG) based method and linear SVM classification. 
Our detection algorithm consists of three main components: HOG feature extraction, linear SVM classifier training and vehicles detection. Integral Image method is adopted to improve the HOG computational efficiency, and hard examples are generated to reduce false positives in the training phase. In detection step, the multiple overlapping detections due to multi-scale window searching are very well fused by non-maximum suppression based on mean-shift. The monocular system is tested under different traffic scenarios (e.g., simply structured highway, complex urban environments, local occlusion conditions), illustrating good performance.", "title": "" } ]
scidocsrr
d971c13855f88315b5d5ae836d025ea0
Real-time bidding algorithms for performance-based display ad allocation
[ { "docid": "2438479795a9673c36138212b61c6d88", "text": "Motivated by the emergence of auction-based marketplaces for display ads such as the Right Media Exchange, we study the design of a bidding agent that implements a display advertising campaign by bidding in such a marketplace. The bidding agent must acquire a given number of impressions with a given target spend, when the highest external bid in the marketplace is drawn from an unknown distribution P. The quantity and spend constraints arise from the fact that display ads are usually sold on a CPM basis. We consider both the full information setting, where the winning price in each auction is announced publicly, and the partially observable setting where only the winner obtains information about the distribution; these differ in the penalty incurred by the agent while attempting to learn the distribution. We provide algorithms for both settings, and prove performance guarantees using bounds on uniform closeness from statistics, and techniques from online learning. We experimentally evaluate these algorithms: both algorithms perform very well with respect to both target quantity and spend; further, our algorithm for the partially observable case performs nearly as well as that for the fully observable setting despite the higher penalty incurred during learning.", "title": "" } ]
[ { "docid": "ebe86cf94b566d7a4df045ce28055f66", "text": "Despite numerous predictions of the paperless office, knowledge work is still characterized by the combined use of paper and digital documents. Digital pen-and-paper user interfaces bridge the gap between both worlds by electronically capturing the interactions of a user with a pen on real paper. The contribution of this paper is two-fold: First, we introduce an interaction framework for pen-and-paper user interfaces consisting of six core interactions. This helps both in analyzing existing work practices and interfaces and in guiding the design of interfaces which offer complex functionality and nevertheless remain simple to use. Second, we apply this framework and contribute three novel pen-and-paper interaction strategies for creating hyperlinks between printed and digital documents and for tagging both types of documents.", "title": "" }, { "docid": "3bb4d0f44ed5a2c14682026090053834", "text": "A Meander Line Antenna (MLA) for 2.45 GHz is proposed. This research focuses on the optimum value of gain and reflection coefficient. Therefore, the MLA's parametric studies is discussed which involved the number of turn, width of feed (W1), length of feed (LI) and vertical length partial ground (L3). As a result, the studies have significantly achieved MLA's gain and reflection coefficient of 3.248dB and -45dB respectively. The MLA also resembles the monopole antenna behavior of Omni-directional radiation pattern. Measured and simulated results are presented. The proposed antenna has big potential to be implemented for WLAN device such as optical mouse application.", "title": "" }, { "docid": "caa6f0769cc62cbde30b96ae31dabb3f", "text": "ThyssenKrupp Transrapid developed a new motor winding for synchronous long stator propulsion with optimized grounding system. The motor winding using a cable without metallic screen is presented. The function as well as the mechanical and electrical design of the grounding system is illustrated. The new design guarantees a much lower electrical stress than the load capacity of the system. The main design parameters, simulation and testing results as well as calculations of the electrical stress of the grounding system are described.", "title": "" }, { "docid": "6bcc65065f9e1f52bbe0276b4a5d8a45", "text": "Urban mobility impacts urban life to a great extent. To enhance urban mobility, much research was invested in traveling time prediction: given an origin and destination, provide a passenger with an accurate estimation of how long a journey lasts. In this work, we investigate a novel combination of methods from Queueing Theory and Machine Learning in the prediction process. We propose a prediction engine that, given a scheduled bus journey (route) and a ‘source/destination’ pair, provides an estimate for the traveling time, while considering both historical data and real-time streams of information that are transmitted by buses. We propose a model that uses natural segmentation of the data according to bus stops and a set of predictors, some use learning while others are learning-free, to compute traveling time. Our empirical evaluation, using bus data that comes from the bus network in the city of Dublin, demonstrates that the snapshot principle, taken from Queueing Theory works well yet suffers from outliers. 
To overcome the outliers problem, we use machine learning techniques as a regulator that assists in identifying outliers and propose prediction based on historical data.", "title": "" }, { "docid": "25c8d687e6044ae734270bb0d7fd8868", "text": "Continual learning broadly refers to the algorithms which aim to learn continuously over time across varying domains, tasks or data distributions. This is in contrast to algorithms restricted to learning a fixed number of tasks in a given domain, assuming a static data distribution. In this survey we aim to discuss a wide breadth of challenges faced in a continual learning setup and review existing work in the area. We discuss parameter regularization techniques to avoid catastrophic forgetting in neural networks followed by memory based approaches and the role of generative models in assisting continual learning algorithms. We discuss how dynamic neural networks assist continual learning by endowing neural networks with a new capacity to learn further. We conclude by discussing possible future directions.", "title": "" }, { "docid": "dea6ad0e1985260dbe7b70cef1c5da54", "text": "The commonest mitochondrial diseases are probably those impairing the function of complex I of the respiratory electron transport chain. Such complex I impairment may contribute to various neurodegenerative disorders e.g. Parkinson's disease. In the following, using hepatocytes as a model cell, we have shown for the first time that the cytotoxicity caused by complex I inhibition by rotenone but not that caused by complex III inhibition by antimycin can be prevented by coenzyme Q (CoQ1) or menadione. Furthermore, complex I inhibitor cytotoxicity was associated with the collapse of the mitochondrial membrane potential and reactive oxygen species (ROS) formation. ROS scavengers or inhibitors of the mitochondrial permeability transition prevented cytotoxicity. The CoQ1 cytoprotective mechanism required CoQ1 reduction by DT-diaphorase (NQO1). Furthermore, the mitochondrial membrane potential and ATP levels were restored at low CoQ1 concentrations (5 microM). This suggests that the CoQ1H2 formed by NQO1 reduced complex III and acted as an electron bypass of the rotenone block. However cytoprotection still occurred at higher CoQ1 concentrations (>10 microM), which were less effective at restoring ATP levels but readily restored the cellular cytosolic redox potential (i.e. lactate: pyruvate ratio) and prevented ROS formation. This suggests that CoQ1 or menadione cytoprotection also involves the NQO1 catalysed reoxidation of NADH that accumulates as a result of complex I inhibition. The CoQ1H2 formed would then also act as a ROS scavenger.", "title": "" }, { "docid": "ebc107147884d89da4ef04eba2d53a73", "text": "Twitter sentiment analysis (TSA) has become a hot research topic in recent years. The goal of this task is to discover the attitude or opinion of the tweets, which is typically formulated as a machine learning based text classification problem. Some methods use manually labeled data to train fully supervised models, while others use some noisy labels, such as emoticons and hashtags, for model training. In general, we can only get a limited number of training data for the fully supervised models because it is very labor-intensive and time-consuming to manually label the tweets. As for the models with noisy labels, it is hard for them to achieve satisfactory performance due to the noise in the labels although it is easy to get a large amount of data for training. 
Hence, the best strategy is to utilize both manually labeled data and noisy labeled data for training. However, how to seamlessly integrate these two different kinds of data into the same learning framework is still a challenge. In this paper, we present a novel model, called emoticon smoothed language model (ESLAM), to handle this challenge. The basic idea is to train a language model based on the manually labeled data, and then use the noisy emoticon data for smoothing. Experiments on real data sets demonstrate that ESLAM can effectively integrate both kinds of data to outperform those methods using only one of them.", "title": "" }, { "docid": "fec6a9e89b8552e62f98696d92de5a19", "text": "The emergence of genetic engineering at the beginning of the 1970's opened the era of biomedical technologies, which aims to improve human health using genetic manipulation techniques in a clinical context. Gene therapy represents an innovating and appealing strategy for treatment of human diseases, which utilizes vehicles or vectors for delivering therapeutic genes into the patients' body. However, a few past unsuccessful events that negatively marked the beginning of gene therapy resulted in the need for further studies regarding the design and biology of gene therapy vectors, so that this innovating treatment approach can successfully move from bench to bedside. In this paper, we review the major gene delivery vectors and recent improvements made in their design meant to overcome the issues that commonly arise with the use of gene therapy vectors. At the end of the manuscript, we summarized the main advantages and disadvantages of common gene therapy vectors and we discuss possible future directions for potential therapeutic vectors.", "title": "" }, { "docid": "4e4f653da064c9fc2096a5f334662ca8", "text": "Face images appearing in multimedia applications, e.g., social networks and digital entertainment, usually exhibit dramatic pose, illumination, and expression variations, resulting in considerable performance degradation for traditional face recognition algorithms. This paper proposes a comprehensive deep learning framework to jointly learn face representation using multimodal information. The proposed deep learning structure is composed of a set of elaborately designed convolutional neural networks (CNNs) and a three-layer stacked auto-encoder (SAE). The set of CNNs extracts complementary facial features from multimodal data. Then, the extracted features are concatenated to form a high-dimensional feature vector, whose dimension is compressed by SAE. All of the CNNs are trained using a subset of 9,000 subjects from the publicly available CASIA-WebFace database, which ensures the reproducibility of this work. Using the proposed single CNN architecture and limited training data, 98.43% verification rate is achieved on the LFW database. Benefitting from the complementary information contained in multimodal data, our small ensemble system achieves higher than 99.0% recognition rate on LFW using publicly available training set.", "title": "" }, { "docid": "29786d164d0d5e76ea9c098944e27266", "text": "Future mobile communications systems are likely to be very different to those of today with new service innovations driven by increasing data traffic demand, increasing processing power of smart devices and new innovative applications. 
To meet these service demands the telecommunications industry is converging on a common set of 5G requirements which includes network speeds as high as 10 Gbps, cell edge rate greater than 100 Mbps, and latency of less than 1 msec. To reach these 5G requirements the industry is looking at new spectrum bands in the range up to 100 GHz where there is spectrum availability for wide bandwidth channels. For the development of new 5G systems to operate in bands up to 100 GHz there is a need for accurate radio propagation models which are not addressed by existing channel models developed for bands below 6 GHz. This paper presents a preliminary overview of the 5G channel models for bands up to 100 GHz in indoor offices and shopping malls, derived from extensive measurements across a multitude of bands. These studies have found some extensibility of the existing 3GPP models (e.g. 3GPP TR36.873) to the higher frequency bands up to 100 GHz. The measurements indicate that the smaller wavelengths introduce an increased sensitivity of the propagation models to the scale of the environment and show some frequency dependence of the path loss as well as increased occurrence of blockage. Further, the penetration loss is highly dependent on the material and tends to increase with frequency. The small-scale characteristics of the channel such as delay spread and angular spread and the multipath richness is somewhat similar over the frequency range, which is encouraging for extending the existing 3GPP models to the wider frequency range. Further work will be carried out to complete these models, but this paper presents the first steps for an initial basis for the model development.", "title": "" }, { "docid": "051603c7ee83c49b31428ce611de06c2", "text": "The Internet of Things (IoT) will feature pervasive sensing and control capabilities via a massive deployment of machine-type communication (MTC) devices. The limited hardware, low-complexity, and severe energy constraints of MTC devices present unique communication and security challenges. As a result, robust physical-layer security methods that can supplement or even replace lightweight cryptographic protocols are appealing solutions. In this paper, we present an overview of low-complexity physical-layer security schemes that are suitable for the IoT. A local IoT deployment is modeled as a composition of multiple sensor and data subnetworks, with uplink communications from sensors to controllers, and downlink communications from controllers to actuators. The state of the art in physical-layer security for sensor networks is reviewed, followed by an overview of communication network security techniques. We then pinpoint the most energy-efficient and low-complexity security techniques that are best suited for IoT sensing applications. This is followed by a discussion of candidate low-complexity schemes for communication security, such as on-off switching and space-time block codes. The paper concludes by discussing open research issues and avenues for further work, especially the need for a theoretically well-founded and holistic approach for incorporating complexity constraints in physical-layer security designs.", "title": "" }, { "docid": "94b482fefc9e8e61fe4614245ff03287", "text": "In this paper, a general-purpose fuzzy controller for dc–dc converters is investigated. 
Based on a qualitative description of the system to be controlled, fuzzy controllers are capable of good performances, even for those systems where linear control techniques fail, e.g., when a mathematical description is not available or is in the presence of wide parameter variations. The presented approach is general and can be applied to any dc–dc converter topologies. Controller implementation is relatively simple and can guarantee a small-signal response as fast and stable as other standard regulators and an improved large-signal response. Simulation results of Buck-Boost and Sepic converters show control potentialities.", "title": "" }, { "docid": "f49923e0f36a47162ec087c661169459", "text": "People use imitation to encourage each other during conversation. We have conducted an experiment to investigate how imitation by a robot affect people’s perceptions of their conversation with it. The robot operated in one of three ways: full head gesture mimicking, partial head gesture mimicking (nodding), and non-mimicking (blinking). Participants rated how satisfied they were with the interaction. We hypothesized that participants in the full head gesture condition will rate their interaction the most positively, followed by the partial and non-mimicking conditions. We also performed gesture analysis to see if any differences existed between groups, and did find that men made significantly more gestures than women while interacting with the robot. Finally, we interviewed participants to try to ascertain additional insight into their feelings of rapport with the robot, which revealed a number of valuable insights.", "title": "" }, { "docid": "1fb47a3542ff1e7a382521c71fcacd4d", "text": "In this paper, we show that tracking different kinds of interacting objects can be formulated as a network-flow Mixed Integer Program. This is made possible by tracking all objects simultaneously and expressing the fact that one object can appear or disappear at locations where another is in terms of linear flow constraints. We demonstrate the power of our approach on scenes involving cars and pedestrians, bags being carried and dropped by people, and balls being passed from one player to the next in a basketball game. In particular, we show that by estimating jointly and globally the trajectories of different types of objects, the presence of the ones which were not initially detected based solely on image evidence can be inferred from the detections of the others.", "title": "" }, { "docid": "7cd992aec08167cb16ea1192a511f9aa", "text": "In this thesis, we will present an Echo State Network (ESN) to investigate hierarchical cognitive control, one of the functions of Prefrontal Cortex (PFC). This ESN is designed with the intention to implement it as a robot controller, making it useful for biologically inspired robot control and for embodied and embedded PFC research. We will apply the ESN to a n-back task and a Wisconsin Card Sorting task to confirm the hypothesis that topological mapping of temporal and policy abstraction over the PFC can be explained by the effects of two requirements: a better preservation of information when information is processed in different areas, versus a better integration of information when information is processed in a single area.", "title": "" }, { "docid": "126946b552f20bc5fe4e920381f52305", "text": "Although deep learning models have proven effective at solving problems in natural language processing, the mechanism by which they come to their conclusions is often unclear. 
As a result, these models are generally treated as black boxes, yielding no insight of the underlying learned patterns. In this paper we consider Long Short Term Memory networks (LSTMs) and demonstrate a new approach for tracking the importance of a given input to the LSTM for a given output. By identifying consistently important patterns of words, we are able to distill state of the art LSTMs on sentiment analysis and question answering into a set of representative phrases. This representation is then quantitatively validated by using the extracted phrases to construct a simple, rule-based classifier which approximates the output of the LSTM.", "title": "" }, { "docid": "a6bc752bd6a4fc070fa01a5322fb30a1", "text": "The formulation of a generalized area-based confusion matrix for exploring the accuracy of area estimates is presented. The generalized confusion matrix is appropriate for both traditional classiŽ cation algorithms and sub-pixel area estimation models. An error matrix, derived from the generalized confusion matrix, allows the accuracy of maps generated using area estimation models to be assessed quantitatively and compared to the accuracies obtained from traditional classiŽ cation techniques. The application of this approach is demonstrated for an area estimation model applied to Landsat data of an urban area of the United Kingdom.", "title": "" }, { "docid": "1557db582fbcf5e17c2b021b6d37b03a", "text": "Visual codebook based quantization of robust appearance descriptors extracted from local image patches is an effective means of capturing image statistics for texture analysis and scene classification. Codebooks are usually constructed by using a method such as k-means to cluster the descriptor vectors of patches sampled either densely ('textons') or sparsely ('bags of features' based on key-points or salience measures) from a set of training images. This works well for texture analysis in homogeneous images, but the images that arise in natural object recognition tasks have far less uniform statistics. We show that for dense sampling, k-means over-adapts to this, clustering centres almost exclusively around the densest few regions in descriptor space and thus failing to code other informative regions. This gives suboptimal codes that are no better than using randomly selected centres. We describe a scalable acceptance-radius based clusterer that generates better codebooks and study its performance on several image classification tasks. We also show that dense representations outperform equivalent keypoint based ones on these tasks and that SVM or mutual information based feature selection starting from a dense codebook further improves the performance.", "title": "" }, { "docid": "8c1f5e3f66256860bbb866fc5c330714", "text": "In this paper experimental comparisons between two Time-of-Flight (ToF) cameras are reported in order to test their performance and to give some procedures for testing data delivered by this kind of technology. In particular, the SR-4000 camera by Mesa Imaging AG and the CamCube3.0 by PMD Technologies have been evaluated since they have good performances and are well known to researchers dealing with Time-ofFlight (ToF) cameras. After a brief overview of commercial ToF cameras available on the market and the main specifications of the tested devices, two topics are presented in this paper. 
First, the influence of camera warm-up on distance measurement is analyzed: a warm-up of 40 minutes is suggested to obtain measurement stability, especially in the case of the CamCube3.0 camera, which exhibits distance measurement variations of several centimeters. Second, the variation of distance measurement precision over integration time is presented: distance measurement precisions of a few millimeters are obtained in both cases. Finally, a comparison between the two cameras based on the experiments is reported, together with some information about future work on evaluating the influence of sunlight on distance measurements.", "title": "" } ]
scidocsrr
ed73ec251a124fff0f1e09adf0e5bab1
Intelligent Lighting Control for Vision-Based Robotic Manipulation
[ { "docid": "936d92f1afcab16a9dfe24b73d5f986d", "text": "Active vision techniques use programmable light sources, such as projectors, whose intensities can be controlled over space and time. We present a broad framework for fast active vision using Digital Light Processing (DLP) projectors. The digital micromirror array (DMD) in a DLP projector is capable of switching mirrors “on” and “off” at high speeds (10/s). An off-the-shelf DLP projector, however, effectively operates at much lower rates (30-60Hz) by emitting smaller intensities that are integrated over time by a sensor (eye or camera) to produce the desired brightness value. Our key idea is to exploit this “temporal dithering” of illumination, as observed by a high-speed camera. The dithering encodes each brightness value uniquely and may be used in conjunction with virtually any active vision technique. We apply our approach to five well-known problems: (a) structured light-based range finding, (b) photometric stereo, (c) illumination de-multiplexing, (d) high frequency preserving motion-blur and (e) separation of direct and global scene components, achieving significant speedups in performance. In all our methods, the projector receives a single image as input whereas the camera acquires a sequence of frames.", "title": "" } ]
[ { "docid": "8e8d7b2411fa0b0c19d745ce85fcec11", "text": "Parallel distributed processing (PDP) architectures demonstrate a potentially radical alternative to the traditional theories of language processing that are based on serial computational models. However, learning complex structural relationships in temporal data presents a serious challenge to PDP systems. For example, automata theory dictates that processing strings from a context-free language (CFL) requires a stack or counter memory device. While some PDP models have been hand-crafted to emulate such a device, it is not clear how a neural network might develop such a device when learning a CFL. This research employs standard backpropagation training techniques for a recurrent neural network (RNN) in the task of learning to predict the next character in a simple deterministic CFL (DCFL). We show that an RNN can learn to recognize the structure of a simple DCFL. We use dynamical systems theory to identify how network states re ̄ ect that structure by building counters in phase space. The work is an empirical investigation which is complementary to theoretical analyses of network capabilities, yet original in its speci ® c con® guration of dynamics involved. The application of dynamical systems theory helps us relate the simulation results to theoretical results, and the learning task enables us to highlight some issues for understanding dynamical systems that process language with counters.", "title": "" }, { "docid": "2fd7cc65c34551c90a72fc3cb4665336", "text": "Generating natural language requires conveying content in an appropriate style. We explore two related tasks on generating text of varying formality: monolingual formality transfer and formality-sensitive machine translation. We propose to solve these tasks jointly using multi-task learning, and show that our models achieve state-of-the-art performance for formality transfer and are able to perform formality-sensitive translation without being explicitly trained on styleannotated translation examples.", "title": "" }, { "docid": "d10e4740076aeba6441b74512e6993df", "text": "Purpose – In recent decades, innovation management has changed. This article provides an overview of the changes that have taken place, focusing on innovation management in large companies, with the aim of explaining that innovation management has evolved toward a contextual approach, which it will explain and illustrate using two cases. Design/methodology/approach – The basic approach in this article is to juxtapose a review of existing literature regarding trends in innovation management and research and development (R&D) management generations, and empirical data about actual approaches to innovation. Findings – The idea that there is a single mainstream innovation approach does not match with the (successful) approaches companies have adopted. What is required is a contextual approach. However, research with regard to such an approach is fragmented. Decisions to adapt the innovation management approach to the newness of an innovation or the type of organization respectively have thus far been investigated separately. Research limitations/implications – An integrated approach is needed to support the intuitive decisions managers make to tailor their innovation approach to the type of innovation, organization(s), industry and country/culture. 
Originality/value – The practical and scientific value of this paper is that is describes an integrated approach to contextual innovation.", "title": "" }, { "docid": "9a13a2baf55676f82457f47d3929a4e7", "text": "Humans are a cultural species, and the study of human psychology benefits from attention to cultural influences. Cultural psychology's contributions to psychological science can largely be divided according to the two different stages of scientific inquiry. Stage 1 research seeks cultural differences and establishes the boundaries of psychological phenomena. Stage 2 research seeks underlying mechanisms of those cultural differences. The literatures regarding these two distinct stages are reviewed, and various methods for conducting Stage 2 research are discussed. The implications of culture-blind and multicultural psychologies for society and intergroup relations are also discussed.", "title": "" }, { "docid": "09df260d26638f84ec3bd309786a8080", "text": "If we take an existing supervised NLP system, a simple and general way to improve accuracy is to use unsupervised word representations as extra word features. We evaluate Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words on both NER and chunking. We use near state-of-the-art supervised baselines, and find that each of the three word representations improves the accuracy of these baselines. We find further improvements by combining different word representations. You can download our word features, for off-the-shelf use in existing NLP systems, as well as our code, here: http://metaoptimize. com/projects/wordreprs/", "title": "" }, { "docid": "aa98ed0384fe6161d044cb3aa2225a98", "text": "Article history: Received 22 December 2013 Received in revised form 15 July 2014 Available online 26 September 2015 Dedicated to the Memory of Mary Ellen Rudin, a Great Person and a Great Mathematician MSC: 54H11 54A25 54B05", "title": "" }, { "docid": "9753b3ad7eac092f45035c941b59ebcb", "text": "Since the metabolic disorder may be the high risk that contribute to the progress of Alzheimer's disease (AD). Overtaken of High-fat, high-glucose or high-cholesterol diet may hasten the incidence of AD in later life, due to the metabolic dysfunction. But the metabolism of lipid in brain and the exact effect of lipid to brain or to the AD's pathological remain controversial. Here we summarize correlates of lipid metabolism and AD to provide more foundation for the daily nursing of AD sensitive patients.", "title": "" }, { "docid": "680fa29fcd41421a2b3b235555f0cb91", "text": "Brown adipose tissue (BAT) is the main site of adaptive thermogenesis and experimental studies have associated BAT activity with protection against obesity and metabolic diseases, such as type 2 diabetes mellitus and dyslipidaemia. Active BAT is present in adult humans and its activity is impaired in patients with obesity. The ability of BAT to protect against chronic metabolic disease has traditionally been attributed to its capacity to utilize glucose and lipids for thermogenesis. However, BAT might also have a secretory role, which could contribute to the systemic consequences of BAT activity. Several BAT-derived molecules that act in a paracrine or autocrine manner have been identified. Most of these factors promote hypertrophy and hyperplasia of BAT, vascularization, innervation and blood flow, processes that are all associated with BAT recruitment when thermogenic activity is enhanced. 
Additionally, BAT can release regulatory molecules that act on other tissues and organs. This secretory capacity of BAT is thought to be involved in the beneficial effects of BAT transplantation in rodents. Fibroblast growth factor 21, IL-6 and neuregulin 4 are among the first BAT-derived endocrine factors to be identified. In this Review, we discuss the current understanding of the regulatory molecules (the so-called brown adipokines or batokines) that are released by BAT that influence systemic metabolism and convey the beneficial metabolic effects of BAT activation. The identification of such adipokines might also direct drug discovery approaches for managing obesity and its associated chronic metabolic diseases.", "title": "" }, { "docid": "14d2f63cb324b3013c5fbf138a7f9dff", "text": "THIS ARTICLE WILL EXPLORE THE ROLE OF THE LIBRARIAN and of the service perspective in the digital library environment. The focus of the article will be limited to the topic of librarian/user collaboration where the librarian and user are not co-located. The role of the librarian will be explored as outlined in the literature on digital libraries, some studies will be examined that attempt to put the service perspective in the digital library, survey existing initiatives in providing library services electronically, and outline potential service perspectives for the digital library. INTRODUCTION The digital library offers users the prospect of access to electronic resources at their convenience temporally and spatially. Users do not have to be concerned with the physical library’s hours of operation, and users do not have to go physically to the library to access resources. Much has been written about the digital library. The focus of most studies, papers, and articles has been on the technology or on the types of resources offered. Human interaction in the digital library is discussed far less frequently. One would almost get the impression that the service tradition of the physical library will be unnecessary and redundant in the digital library environment. Bernie Sloan, Office for Planning and Budget, Room 338, 506 S. Wright Street, University of Illinois, Urbana, IL 61801. LIBRARY TRENDS, Vol. 47, No. 1, Summer 1998, pp. 117-143. © 1998 The Board of Trustees, University of Illinois. DEFINING THE DIGITAL LIBRARY-WHERE DOES SERVICE FIT IN? Defining the digital library is an interesting, but somewhat daunting, task. There is no shortage of proposed definitions. One would think that there would be some commonly accepted and fairly straightforward standard definition, but there does not appear to be. Rather, there are many. And one common thread among all these definitions is a heavy emphasis on resources and an apparent lack of emphasis on librarians and the services they provide. The Association of Research Libraries (ARL) notes: “There are many definitions of a ‘digital library’ . . . Terms such as ‘electronic library’ and ‘virtual library’ are often used synonymously” (Association of Research Libraries, 1995). The ARL relies on Karen Drabenstott’s (1994) Analytical Review of the Library of the Future for its inspiration. In defining the digital library, Drabenstott offers fourteen definitions published between 1987 and 1993. The commonalities of these different definitions are summarized as follows: The digital library is not a single entity. The digital library requires technology to link the resources of many libraries and information services. 
Transparent to end-users are the linkages between the many digital libraries and information services. Universal access to digital libraries and information services is a goal. Digital libraries are not limited to document surrogates; they extend to digital artifacts that cannot be represented or distributed in printed formats. (p. 9) One interesting aspect of Drabenstott’s summary definition is that, while there is a user-orientation stated, as well as references to technology and information resources, there is no reference to the role of the librarian in the digital library. Another report by Saffady (1995) cites thirty definitions of the digital library published between 1991 and 1994. Among the terms Saffady uses in describing these various definitions are: “repositories of . . . information assets,” “large information repositories,” “various online databases and . . . information products,” “computer storage devices on which information repositories reside,” “computerized, networked library systems,” “accessible through the Internet,” “CD-ROM information products,” “database servers,” “libraries with online catalogs,” and “collections of computer-processible information” (p. 223). Saffady summarizes these definitions by stating: “Broadly defined, a digital library is a collection of computer-processible information or a repository for such information” (p. 223). He then narrows the definition by noting that “a digital library is a library that maintains all, or a substantial part, of its collection in computer-processible form as an alternative, supplement, or complement to the conventional printed and microform materials that currently dominate library collections” (p. 224). Without exception, each of the definitions Saffady cites focuses on collections, repositories, or information resources. In another paper, Nurnberg, Furuta, Leggett, Marshall, and Shipman (1995) ask “Why is a digital library called a library at all?” They state that the traditional physical library can provide a basis for discussing the digital library and arrive at this definition: the traditional library “deals with physical data” while the digital library works “primarily with digital data.” Once again, a definition that is striking in its neglect of service perspectives. In a paper presented at the Digital Libraries ’94 conference, Miksa and Doty (1994) again discuss the digital library as a “collection” or a series of collections. In another paper, Schatz and Chen (1996) state that digital libraries are “network information systems,” accessing resources “from and across large collections.” What do all these definitions of the “digital library” have in common? An emphasis on technology and information resources and a very noticeable lack of discussion of the service aspects of the digital library. Why is it important to take a look at how the digital library is defined? As more definitions of the digital library are published, with an absence of the service perspective and little treatment of the importance of librarian/user collaboration, we perhaps draw closer to the Redundancy Theory (Hathorn, 1997) in which “the rise of digitized information threatens to make librarians practically obsolete.” People may well begin to believe that, as physical barriers to access to information are reduced through technological means, the services of the librarian are no longer as necessary. 
HUMAN ASPECTS OF THE DIGITAL LIBRARY While considering the future, it sometimes is helpful to examine the past. As such, it might be useful to reflect on Jesse Shera’s oft-quoted definition of a library: “To bring together human beings and recorded knowledge in as fruitful a relationship as is humanly possible” (in Dysart & Jones, 1995, p. 16). Digital library proponents must consider the role of people (i.e., as users and service providers) if the digital library is to be truly beneficial. Technology and information resources on their own cannot make up an effective digital library. While a good deal of the literature on digital libraries emphasizes technology and resources at the expense of the service perspective, a number of authors and researchers have considered human interaction in the digital library environment. A number of studies at Lancaster University (Twidale, 1995, 1996; Twidale, Nichols, & Paice, 1996; Crabtree, Twidale, O’Brien, & Nichols, 1997; Nichols, Twidale, & Paice, 1997) have considered the importance of human interaction in the digital library. These studies focus on the social interactions of library users with librarians, librarians with librarians, and users with other users. By studying these collaborations in physical library settings, the authors have drawn some general conclusions that might be applied to digital library design: Collaboration between users, and between users and system personnel, is a significant element of searching in current information systems. The development of electronic libraries threatens existing forms of collaboration but also offers opportunities for new forms of collaboration. The sharing of both the search product and the search process is important for collaborative activities (including the education of searchers). There exists great potential for improving search effectiveness through the re-use of previous searches; this is one mechanism for adding value to existing databases. Browsing is not restricted to browsing for inanimate objects; browsing for people is also possible and could be a valuable source of information. Searchers of databases need externalized help to reduce their cognitive load during the search process. This can be provided both by traditional paper-based technology and through computerized systems (Twidale et al., 1996). In a paper presented at the Digital Libraries ’94 Conference, Ackerman (1994) stresses that, while the concept of the digital library “includes solving many of the technical and logistical issues in current libraries and information seeking,” it would be a mistake to consider solely the mechanical aspects of the library while ignoring the “useful social interactions in information seeking.” Ackerman outlines four ways in which social interaction can be helpful in the information-seeking process: 1. One may need to consult another person in order to know what to know (help in selecting information). 2. One may need to consult a person to obtain information that is transitory in nature and as such is unindexed (seeking informal information). 3. One may need to consult others for assistance in obtaining/understanding information that is highly contextual in nature rather than merely obtaining the information in a textual format (information seekers often have highly specific needs and interests). 4. Libraries serve important social functions, e.g., students and/or faculty meeting each other in hallways, study areas, etc. (socializing function). 
Ackerman notes that these points “all argue for the inclusion of some form of social interaction within the digital library. Such interaction should include not only librarians (or some human helper), but other users as well.” In a paper for the Digital Libraries ’96 Conference, Brewer, Ding, Hahn, ", "title": "" }, { "docid": "53bbb6d5467574af4533607c95505ee4", "text": "The synthesis of genetics-based machine learning and fuzzy logic is beginning to show promise as a potent tool in solving complex control problems in multi-variate non-linear systems. In this paper an overview of current research applying the genetic algorithm to fuzzy rule based control is presented. A novel approach to genetics-based machine learning of fuzzy controllers, called a Pittsburgh Fuzzy Classifier System # 1 (P-FCS1) is proposed. P-FCS1 is based on the Pittsburgh model of learning classifier systems and employs variable length rule-sets and simultaneously evolves fuzzy set membership functions and relations. A new crossover operator which respects the functional linkage between fuzzy rules with overlapping input fuzzy set membership functions is introduced. Experimental results using P-FCS l are reported and compared with other published results. Application of P-FCS1 to a distributed control problem (dynamic routing in computer networks) is also described and experimental results are presented.", "title": "" }, { "docid": "f55e380c158ae01812f009fd81642d7f", "text": "In this paper, we proposed a system to effectively create music mashups – a kind of re-created music that is made by mixing parts of multiple existing music pieces. Unlike previous studies which merely generate mashups by overlaying music segments on one single base track, the proposed system creates mashups with multiple background (e.g. instrumental) and lead (e.g. vocal) track segments. So, besides the suitability between the vertically overlaid tracks (i.e. vertical mashability) used in previous studies, we proposed to further consider the suitability between the horizontally connected consecutive music segments (i.e. horizontal mashability) when searching for proper music segments to be combined. On the vertical side, two new factors: “harmonic change balance” and “volume weight” have been considered. On the horizontal side, the methods used in the studies of medley creation are incorporated. Combining vertical and horizontal mashabilities together, we defined four levels of mashability that may be encountered and found the proper solution to each of them. Subjective evaluations showed that the proposed four levels of mashability can appropriately reflect the degrees of listening enjoyment. Besides, by taking the newly proposed vertical mashability measurement into account, the improvement in user satisfaction is statistically significant.", "title": "" }, { "docid": "aa622e064469291fedfadfe36afe3aef", "text": "Multiple kernel clustering (MKC), which performs kernel-based data fusion for data clustering, is an emerging topic. It aims at solving clustering problems with multiple cues. Most MKC methods usually extend existing clustering methods with a multiple kernel learning (MKL) setting. In this paper, we propose a novel MKC method that is different from those popular approaches. Centered kernel alignment—an effective kernel evaluation measure—is employed in order to unify the two tasks of clustering and MKL into a single optimization framework. 
To solve the formulated optimization problem, an efficient two-step iterative algorithm is developed. Experiments on several UCI datasets and face image datasets validate the effectiveness and efficiency of our MKC algorithm.", "title": "" }, { "docid": "799883184a752a4f97eeb7ba474bbb8b", "text": "This paper presents the design and implementation of a distributed virtual reality (VR) platform that was developed to support the training of multiple users who must perform complex tasks in which situation assessment and critical thinking are the primary components of success. The system is fully immersive and multimodal, and users are represented as tracked, full-body figures. The system supports the manipulation of virtual objects, allowing users to act upon the environment in a natural manner. The underlying intelligent simulation component creates an interactive, responsive world in which the consequences of such actions are presented within a realistic, time-critical scenario. The focus of this work has been on the training of medical emergency-response personnel. BioSimMER, an application of the system to training first responders to an act of bio-terrorism, has been implemented and is presented throughout the paper as a concrete example of how the underlying platform architecture supports complex training tasks. Finally, a preliminary field study was performed at the Texas Engineering Extension Service Fire Protection Training Division. The study focused on individual, rather than team, interaction with the system and was designed to gauge user acceptance of VR as a training tool. The results of this study are presented.", "title": "" }, { "docid": "b9239e05f0544c83597a0204bf22ec30", "text": "In this paper, two data mining algorithms are applied to build a churn prediction model using credit card data collected from a real Chinese bank. The contribution of four variable categories: customer information, card information, risk information, and transaction activity information are examined. The paper analyzes a process of dealing with variables when data is obtained from a database instead of a survey. Instead of considering the all 135 variables into the model directly, it selects the certain variables from the perspective of not only correlation but also economic sense. In addition to the accuracy of analytic results, the paper designs a misclassification cost measurement by taking the two types error and the economic sense into account, which is more suitable to evaluate the credit card churn prediction model. The algorithms used in this study include logistic regression and decision tree which are proven mature and powerful classification algorithms. The test result shows that regression performs a little better than decision tree. 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "19a9d9286f5af35bac3e051e9bc5213b", "text": "The healthcare environment is more and more data enriched, but the amount of knowledge getting from those data is very less, because lack of data analysis tools. We need to get the hidden relationships from the data. In the healthcare system to predict the heart attack perfectly, there are some techniques which are already in use. There is some lack of accuracy in the available techniques like Naïve Bayes. Here, this paper proposes the system which uses neural network and Decision tree (ID3) to predict the heart attacks. Here the dataset with 6 attributes is used to diagnose the heart attacks. 
The dataset used is acath heart attack dataset provided by UCI machine learning repository. The results of the prediction give more accurate output than the other techniques.", "title": "" }, { "docid": "aa54c82efcb94caf8fd224f362631167", "text": "A current-reused quadrature voltage-controlled oscillator (CR-QVCO) is proposed with the cross-coupled transformer-feedback technology for the quadrature signal generation. This CR-QVCO has the advantages of low-voltage/low-power operation with an adequate phase noise performance. A compact differential three-port transformer, in which two half-circle secondary coils are carefully designed to optimize the effective turn ratio and the coupling factor, is newly constructed to satisfy the need of signal coupling and to save the area consumption simultaneously. The quadrature oscillator providing a center frequency of 7.128 GHz for the ultrawideband (UWB) frequency synthesizer use is demonstrated in a 0.18 mum RF CMOS technology. The oscillator core dissipates 2.2 mW from a 1 V supply and occupies an area of 0.48 mm2. A tuning range of 330 MHz (with a maximum control voltage of 1.8 V) can be achieved to stand the frequency shift caused by the process variation. The measured phase noise is -111.2 dBc/Hz at 1 MHz offset from the center frequency. The IQ phase error shown is less than 2deg. The calculated figure-of-merit (FOM) is 184.8 dB.", "title": "" }, { "docid": "a245aca07bd707ee645cf5cb283e7c5e", "text": "The paradox of blunted parathormone (PTH) secretion in patients with severe hypomagnesemia has been known for more than 20 years, but the underlying mechanism is not deciphered. We determined the effect of low magnesium on in vitro PTH release and on the signals triggered by activation of the calcium-sensing receptor (CaSR). Analogous to the in vivo situation, PTH release from dispersed parathyroid cells was suppressed under low magnesium. In parallel, the two major signaling pathways responsible for CaSR-triggered block of PTH secretion, the generation of inositol phosphates, and the inhibition of cAMP were enhanced. Desensitization or pertussis toxin-mediated inhibition of CaSR-stimulated signaling suppressed the effect of low magnesium, further confirming that magnesium acts within the axis CaSR-G-protein. However, the magnesium binding site responsible for inhibition of PTH secretion is not identical with the extracellular ion binding site of the CaSR, because the magnesium deficiency-dependent signal enhancement was not altered on CaSR receptor mutants with increased or decreased affinity for calcium and magnesium. By contrast, when the magnesium affinity of the G alpha subunit was decreased, CaSR activation was no longer affected by magnesium. Thus, the paradoxical block of PTH release under magnesium deficiency seems to be mediated through a novel mechanism involving an increase in the activity of G alpha subunits of heterotrimeric G-proteins.", "title": "" }, { "docid": "126d8080f7dd313d534a95d8989b0fbd", "text": "Intrusion prevention mechanisms are largely insufficient for protection of databases against Information Warfare attacks by authorized users and has drawn interest towards intrusion detection. We visualize the conflicting motives between an attacker and a detection system as a multi-stage game between two players, each trying to maximize his payoff. We consider the specific application of credit card fraud detection and propose a fraud detection system based on a game-theoretic approach. 
Not only is this approach novel in the domain of Information Warfare, but also it improvises over existing rule-based systems by predicting the next move of the fraudster and learning at each step.", "title": "" }, { "docid": "13d8ce0c85befb38e6f2da583ac0295b", "text": "The addition of sensors to wearable computers allows them to adapt their functions to more suit the activities and situation of their wearers. A wearable sensor badge is described constructed from (hard) electronic components, which can sense perambulatory activities for context-awareness. A wearable sensor jacket is described that uses advanced knitting techniques to form (soft) fabric stretch sensors positioned to measure upper limb and body movement. Worn on-the-hip, or worn as clothing, these unobtrusive sensors supply abstract information about your current activity to your other wearable computers.", "title": "" }, { "docid": "e4e2bb8bf8cc1488b319a59f82a71f08", "text": "We aim to dismantle the prevalent black-box neural architectures used in complex visual reasoning tasks, into the proposed eXplainable and eXplicit Neural Modules (XNMs), which advance beyond existing neural module networks towards using scene graphs — objects as nodes and the pairwise relationships as edges — for explainable and explicit reasoning with structured knowledge. XNMs allow us to pay more attention to teach machines how to “think”, regardless of what they “look”. As we will show in the paper, by using scene graphs as an inductive bias, 1) we can design XNMs in a concise and flexible fashion, i.e., XNMs merely consist of 4 meta-types, which significantly reduce the number of parameters by 10 to 100 times, and 2) we can explicitly trace the reasoning-flow in terms of graph attentions. XNMs are so generic that they support a wide range of scene graph implementations with various qualities. For example, when the graphs are detected perfectly, XNMs achieve 100% accuracy on both CLEVR and CLEVR CoGenT, establishing an empirical performance upper-bound for visual reasoning; when the graphs are noisily detected from real-world images, XNMs are still robust to achieve a competitive 67.5% accuracy on VQAv2.0, surpassing the popular bag-of-objects attention models without graph structures.", "title": "" } ]
scidocsrr
461e8e09ff7e56baf8799f780ec9023b
Annotate-Sample-Average (ASA): A New Distant Supervision Approach for Twitter Sentiment Analysis
[ { "docid": "ebc107147884d89da4ef04eba2d53a73", "text": "Twitter sentiment analysis (TSA) has become a hot research topic in recent years. The goal of this task is to discover the attitude or opinion of the tweets, which is typically formulated as a machine learning based text classification problem. Some methods use manually labeled data to train fully supervised models, while others use some noisy labels, such as emoticons and hashtags, for model training. In general, we can only get a limited number of training data for the fully supervised models because it is very labor-intensive and time-consuming to manually label the tweets. As for the models with noisy labels, it is hard for them to achieve satisfactory performance due to the noise in the labels although it is easy to get a large amount of data for training. Hence, the best strategy is to utilize both manually labeled data and noisy labeled data for training. However, how to seamlessly integrate these two different kinds of data into the same learning framework is still a challenge. In this paper, we present a novel model, called emoticon smoothed language model (ESLAM), to handle this challenge. The basic idea is to train a language model based on the manually labeled data, and then use the noisy emoticon data for smoothing. Experiments on real data sets demonstrate that ESLAM can effectively integrate both kinds of data to outperform those methods using only one of them.", "title": "" }, { "docid": "c8dbc63f90982e05517bbdb98ebaeeb5", "text": "Even though considerable attention has been given to the polarity of words (positive and negative) and the creation of large polarity lexicons, research in emotion analysis has had to rely on limited and small emotion lexicons. In this paper we show how the combined strength and wisdom of the crowds can be used to generate a large, high-quality, word–emotion and word–polarity association lexicon quickly and inexpensively. We enumerate the challenges in emotion annotation in a crowdsourcing scenario and propose solutions to address them. Most notably, in addition to questions about emotions associated with terms, we show how the inclusion of a word choice question can discourage malicious data entry, help identify instances where the annotator may not be familiar with the target term (allowing us to reject such annotations), and help obtain annotations at sense level (rather than at word level). We conducted experiments on how to formulate the emotionannotation questions, and show that asking if a term is associated with an emotion leads to markedly higher inter-annotator agreement than that obtained by asking if a term evokes an emotion.", "title": "" } ]
[ { "docid": "a4368fed8852c1b92a50e49b18b1c8a5", "text": "This paper reports on the analysis, design and characterization of a 30 GHz fully differential variable gain amplifier for ultra-wideband radar systems. The circuit consists of a variable gain differential stage, which is fed by two cascaded emitter followers. Capacitive degeneration and inductive peaking are used to enhance bandwidth. The maximum differential gain is 11.5 dB with plusmn1.5 dB gain flatness in the desired frequency range. The amplifier gain can be regulated from 0 dB up to 11.5 dB. The circuit exhibits an output 1 dB compression point of 12 dBm. The measured differential output voltage swing is 1.23 Vpp. The 0.75 mm2 broadband amplifier consumes 560 mW at a supply voltage of plusmn3.3 V. It is manufactured in a low-cost 0.25 mum SiGe BiCMOS technology with a cut-off frequency of 75 GHz. The experimental results agree very well with the simulated response. A figure of merit has been proposed for comparing the amplifier performance to previously reported works.", "title": "" }, { "docid": "dd04410d2709fd489d98bd97bff8e2aa", "text": "This paper presents a new dimmer using only two active switches for AC LED lamp. The control method of the proposed dimmer is pulse width control (PWM) method. Compared with the conventional phase- controlled dimmer, the proposed PWM dimmer can produce sine wave and it does not cause harmonics problem. Furthermore, the proposed control method does not amplify the light flicker due to independence of the input voltage. Therefore, the proposed PWM dimmer can be used as the AC LED lamp's dimmer instead of the conventional phase-controlled dimmer. The experimental result shows that the proposed PWM dimmer has good performances.", "title": "" }, { "docid": "a69c11dc80ea019f7af7fcbaec37f0b7", "text": "In this paper, we present the design and implementation of an autonomous flight control law for a smallscale unmanned aerial vehicle (UAV) helicopter. The approach is decentralized in nature by incorporating a newly developed nonlinear control technique, namely the composite nonlinear feedback control, together with dynamic inversion. The overall control law consists of three hierarchical layers, namely, the kernel control, command generator and flight scheduling, and is implemented and verified in flight tests on the actual UAV helicopter. The flight test results demonstrate that the UAV helicopter is capable of carrying out complicated flight missions autonomously. © 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "d22f3bbb7af0ce2a221a17a12381de25", "text": "Ambient occlusion is a technique that computes the amount of light reaching a point on a diffuse surface based on its directly visible occluders. It gives perceptual clues of depth, curvature, and spatial proximity and thus is important for realistic rendering. Traditionally, ambient occlusion is calculated by integrating the visibility function over the normal-oriented hemisphere around any given surface point. In this paper we show this hemisphere can be partitioned into two regions by a horizon line defined by the surface in a local neighborhood of such point. We introduce an image-space algorithm for finding an approximation of this horizon and, furthermore, we provide an analytical closed form solution for the occlusion below the horizon, while the rest of the occlusion is computed by sampling based on a distribution to improve the convergence. 
The proposed ambient occlusion algorithm operates on the depth buffer of the scene being rendered and the associated per-pixel normal buffer. It can be implemented on graphics hardware in a pixel shader, independently of the scene geometry. We introduce heuristics to reduce artifacts due to the incompleteness of the input data and we include parameters to make the algorithm easy to customize for quality or performance purposes. We show that our technique can render high-quality ambient occlusion at interactive frame rates on current GPUs. CR Categories: I.3.3 [Computer Graphics]: Picture/Image Generation—; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism—;", "title": "" }, { "docid": "370054a58b8f50719106508b138bd095", "text": "In-network aggregation has been proposed as one method for reducing energy consumption in sensor networks. In this paper, we explore two ideas related to further reducing energy consumption in the context of in-network aggregation. The first is by influencing the construction of the routing trees for sensor networks with the goal of reducing the size of transmitted data. To this end, we propose a group-aware network configuration method that “clusters” along the same path sensor nodes that belong to the same group. The second idea involves imposing a hierarchy of output filters on the sensor network with the goal of both reducing the size of transmitted data and minimizing the number of transmitted messages. More specifically, we propose a framework to use temporal coherency tolerances in conjunction with in-network aggregation to save energy at the sensor nodes while maintaining specified quality of data. These tolerances are based on user preferences or can be dictated by the network in cases where the network cannot support the current tolerance level. Our framework, called TiNA, works on top of existing in-network aggregation schemes. We evaluate experimentally our proposed schemes in the context of existing in-network aggregation schemes. We present experimental results measuring energy consumption, response time, and quality of data for Group-By queries. Overall, our schemes provide significant energy savings with respect to communication and a negligible drop in quality of data.", "title": "" }, { "docid": "c8ffa511ba6aa4a5b93678b2cc32815d", "text": "Many long-held practices surrounding newborn injections lack evidence and have unintended consequences. The choice of needles, injection techniques, and pain control methods are all factors for decreasing pain and improving the safety of intramuscular injections. Using practices founded on the available best evidence, nurses can reduce pain, improve the quality and safety of care, and set the stage for long-term compliance with vaccination schedules.", "title": "" }, { "docid": "ce86579be146c2f4b19224c0857eff1e", "text": "A floating raft structure sensor using piezoelectric polyvinylidene fluoride (PVDF) film as sensitive material for grain loss detecting of combine harvester was presented in this paper. Double-layer vibration isolator was proposed to eliminate the vibration influence of combine harvester. Signal processing circuit which composed of a charge amplifier, band-pass filter, envelope detector, absolute value amplifier and square wave generator was constructed to detect the grain impact signal. 
According to the impact duration of grain on PVDF film, critical frequencies of band-pass filter were determined and the impact responses of filter under different impact durations were numerical simulated. Then, grain detecting experiments were carried out by assembling the sensor on the rear of vibrating cleaning sieve, the results showed that the grain impact can be identified effectively from vibrating noise and the sensor can output a standard square voltage signal while a grain impact is detected.", "title": "" }, { "docid": "f60f75d03c06842efcb2454536ec8226", "text": "The Internet of Things (IoT) relies on physical objects interconnected between each others, creating a mesh of devices producing information. In this context, sensors are surrounding our environment (e.g., cars, buildings, smartphones) and continuously collect data about our living environment. Thus, the IoT is a prototypical example of Big Data. The contribution of this paper is to define a software architecture supporting the collection of sensor-based data in the context of the IoT. The architecture goes from the physical dimension of sensors to the storage of data in a cloud-based system. It supports Big Data research effort as its instantiation supports a user while collecting data from the IoT for experimental or production purposes. The results are instantiated and validated on a project named SMARTCAMPUS, which aims to equip the SophiaTech campus with sensors to build innovative applications that supports end-users.", "title": "" }, { "docid": "86cfaeb7523020d7a58db6a0375c7fa8", "text": "Brain-computer interface (BCI) systems are allowing humans and non-human primates to drive prosthetic devices such as computer cursors and artificial arms with just their thoughts. Invasive BCI systems acquire neural signals with intracranial or subdural electrodes, while noninvasive BCI systems typically acquire neural signals with scalp electroencephalography (EEG). Some drawbacks of invasive BCI systems are the inherent risks of surgery and gradual degradation of signal integrity. A limitation of noninvasive BCI systems for two-dimensional control of a cursor, in particular those based on sensorimotor rhythms, is the lengthy training time required by users to achieve satisfactory performance. Here we describe a novel approach to continuously decoding imagined movements from EEG signals in a BCI experiment with reduced training time. We demonstrate that, using our noninvasive BCI system and observational learning, subjects were able to accomplish two-dimensional control of a cursor with performance levels comparable to those of invasive BCI systems. Compared to other studies of noninvasive BCI systems, training time was substantially reduced, requiring only a single session of decoder calibration (∼ 20 min) and subject practice (∼ 20 min). In addition, we used standardized low-resolution brain electromagnetic tomography to reveal that the neural sources that encoded observed cursor movement may implicate a human mirror neuron system. These findings offer the potential to continuously control complex devices such as robotic arms with one's mind without lengthy training or surgery.", "title": "" }, { "docid": "2d845ef6552b77fb4dd0d784233aa734", "text": "The timing of the origin of arthropods in relation to the Cambrian explosion is still controversial, as are the timing of other arthropod macroevolutionary events such as the colonization of land and the evolution of flight. 
Here we assess the power of a phylogenomic approach to shed light on these major events in the evolutionary history of life on earth. Analyzing a large phylogenomic dataset (122 taxa, 62 genes) with a Bayesian-relaxed molecular clock, we simultaneously reconstructed the phylogenetic relationships and the absolute times of divergences among the arthropods. Simulations were used to test whether our analysis could distinguish between alternative Cambrian explosion scenarios with increasing levels of autocorrelated rate variation. Our analyses support previous phylogenomic hypotheses and simulations indicate a Precambrian origin of the arthropods. Our results provide insights into the 3 independent colonizations of land by arthropods and suggest that evolution of insect wings happened much earlier than the fossil record indicates, with flight evolving during a period of increasing oxygen levels and impressively large forests. These and other findings provide a foundation for macroevolutionary and comparative genomic study of Arthropoda.", "title": "" }, { "docid": "1b86b0f75fee552eefb82740dbdfc21f", "text": "Research in cancer immunology is currently accelerating following a series of cancer immunotherapy breakthroughs during the last 5 years. Various monoclonal antibodies which block the interaction between checkpoint molecules PD-1 on immune cells and PD-L1 on cancer cells have been used to successfully treat non-small cell lung cancer (NSCLC), including some durable responses lasting years. Two drugs, nivolumab and pembrolizumab, are now FDA approved for use in certain patients who have failed or progressed on platinum-based or targeted therapies while agents targeting PD-L1, atezolizumab and durvalumab, are approaching the final stages of clinical testing. Despite impressive treatment outcomes in a subset of patients who receive these immune therapies, many patients with NSCLC fail to respond to anti-PD-1/PD-L1 and the identification of a biomarker to select these patients remains highly sought after. In this review, we discuss the recent clinical trial results of pembrolizumab, nivolumab, and atezolizumab for NSCLC, and the significance of companion diagnostic testing for tumor PD-L1 expression.", "title": "" }, { "docid": "c8da5151cc8dd563965c4ee60a6d9002", "text": "The aim of this paper is to analyze the robustness of the electrostatic separation process control. The objective was to reduce variation in the process outcome by finding operating conditions (high-voltage level, roll speed), under which uncontrollable variation in the noise factors (granule size, composition of the material to be separated) has minimal impact on the quantity (and the quality) of the recovered products. The experiments were carried out on a laboratory roll-type electrostatic separator, provided with a corona electrode and a tubular electrode, both connected to a dc high-voltage supply. The samples of processed material were prepared from genuine chopped electric wire wastes (granule size >1 mm and <5 mm) containing various proportions of copper and PVC. The design and noise factors were combined into one single experimental design, based on Taguchi's approach, and a regression model of the process was fitted. The impact of the noise factors could be estimated, as well as the interactions between the design and noise factors. 
The conditions of industry application of Taguchi's methodology are discussed, as well as the possibility of adapting it to other electrostatic processes.", "title": "" }, { "docid": "5e5123b8641c7154d311b58e9b1c6524", "text": "Human designers understand a range of potential purposes behind objects and configurations when creating content, which are only partially addressed in typical procedural content generation techniques. This paper describes our research into the provision and use of semantic information to guide logical solver-based content generation, in order to feasibly generate meaningful and valid content. Initial results show we can use answer set programming to generate basic roguelike dungeon layouts from a provided semantic knowledge base, and we intend to extend this to generate a range of other content types. By using semantic models as input for a content-agnostic generation system, we hope to provide more domain-general content generation.", "title": "" }, { "docid": "cfd60f60a0a0bcc16ede57c7cee4fd23", "text": "A compact planar multiband four-unit multiple-input multiple-output (MIMO) antenna system with high isolation is developed. At VSWR ≤ 2.75, the proposed MIMO antenna operates in the frequency range of LTE Band-1, 2, 3, 7, 40 and WLAN 2.4 GHz band. A T-strip and dumbbell shaped slots are studied to mitigate mutual coupling effects. The measured worst case isolation is better that 15.3 dB and envelope correlation coefficient is less than 0.01. The received signals satisfy the equal power gain condition and radiation patterns confirm the pattern diversity to combat multipath fading effects. At 29 dB SNR, the achieved MIMO channel capacity is about 22.2 b/s/Hz. These results infer that the proposed MIMO antenna is an attractive candidate for 4G-LTE mobile phone applications.", "title": "" }, { "docid": "ffd4fc3c7d63eab3cc8a7129f31afdea", "text": "The growth of desktop 3-D printers is driving an interest in recycled 3-D printer filament to reduce costs of distributed production. Life cycle analysis studies were performed on the recycling of high density polyethylene into filament suitable for additive layer manufacturing with 3-D printers. The conventional centralized recycling system for high population density and low population density rural locations was compared to the proposed in home, distributed recycling system. This system would involve shredding and then producing filament with an open-source plastic extruder from postconsumer plastics and then printing the extruded filament into usable, value-added parts and products with 3-D printers such as the open-source self replicating rapid prototyper, or RepRap. The embodied energy and carbon dioxide emissions were calculated for high density polyethylene recycling using SimaPro 7.2 and the database EcoInvent v2.0. The results showed that distributed recycling uses less embodied energy than the best-case scenario used for centralized recycling. For centralized recycling in a low-density population case study involving substantial embodied energy use for transportation and collection these savings for distributed recycling were found to extend to over 80%. If the distributed process is applied to the U.S. high density polyethylene currently recycled, more than 100 million MJ of energy could be conserved per annum along with the concomitant significant reductions in greenhouse gas emissions. 
It is concluded that with the open-source 3-D printing network expanding rapidly the potential for widespread adoption of in-home recycling of post-consumer plastic represents a novel path to a future of distributed manufacturing appropriate for both the developed and developing world with lower environmental impacts than the current system.", "title": "" }, { "docid": "9ce96c63e80f8aa0643a2da03819e113", "text": "We propose two novel techniques---stacking bottleneck features and minimum generation error (MGE) training criterion---to improve the performance of deep neural network (DNN)-based speech synthesis. The techniques address the related issues of frame-by-frame independence and ignorance of the relationship between static and dynamic features, within current typical DNN-based synthesis frameworks. Stacking bottleneck features, which are an acoustically informed linguistic representation, provides an efficient way to include more detailed linguistic context at the input. The MGE training criterion minimises overall output trajectory error across an utterance, rather than minimising the error per frame independently, and thus takes into account the interaction between static and dynamic features. The two techniques can be easily combined to further improve performance. We present both objective and subjective results that demonstrate the effectiveness of the proposed techniques. The subjective results show that combining the two techniques leads to significantly more natural synthetic speech than from conventional DNN or long short-term memory recurrent neural network systems.", "title": "" }, { "docid": "2f3e10724dca50927bd1a39cfd1f45e5", "text": "Many recommendation systems suggest items to users by utilizing the techniques of collaborative filtering (CF) based on historical records of items that the users have viewed, purchased, or rated. Two major problems that most CF approaches have to resolve are scalability and sparseness of the user profiles. In this paper, we describe Alternating-Least-Squares with Weighted-λ-Regularization (ALS-WR), a parallel algorithm that we designed for the Netflix Prize, a large-scale collaborative filtering challenge. We use parallel Matlab on a Linux cluster as the experimental platform. We show empirically that the performance of ALS-WR monotonically increases with both the number of features and the number of ALS iterations. Our ALS-WR applied to the Netflix dataset with 1000 hidden features obtained a RMSE score of 0.8985, which is one of the best results based on a pure method. Combined with the parallel version of other known methods, we achieved a performance improvement of 5.91% over Netflix’s own CineMatch recommendation system. Our method is simple and scales well to very large datasets.", "title": "" }, { "docid": "35dda21bd1f2c06a446773b0bfff2dd7", "text": "Mobile devices and their application marketplaces drive the entire economy of the today’s mobile landscape. Android platforms alone have produced staggering revenues, exceeding five billion USD, which has attracted cybercriminals and increased malware in Android markets at an alarming rate. To better understand this slew of threats, we present CopperDroid, an automatic VMI-based dynamic analysis system to reconstruct the behaviors of Android malware. The novelty of CopperDroid lies in its agnostic approach to identify interesting OSand high-level Android-specific behaviors. 
It reconstructs these behaviors by observing and dissecting system calls and, therefore, is resistant to the multitude of alterations the Android runtime is subjected to over its life-cycle. CopperDroid automatically and accurately reconstructs events of interest that describe, not only well-known process-OS interactions (e.g., file and process creation), but also complex intra- and inter-process communications (e.g., SMS reception), whose semantics are typically contextualized through complex Android objects. Because CopperDroid's reconstruction mechanisms are agnostic to the underlying action invocation methods, it is able to capture actions initiated both from Java and native code execution. CopperDroid's analysis generates detailed behavioral profiles that abstract a large stream of low-level—often uninteresting—events into concise, high-level semantics, which are well-suited to provide insightful behavioral traits and open the possibility to further research directions. We carried out an extensive evaluation to assess the capabilities and performance of CopperDroid on more than 2,900 Android malware samples. Our experiments show that CopperDroid faithfully reconstructs OS- and Android-specific behaviors. Additionally, we demonstrate how CopperDroid can be leveraged to disclose additional behaviors through the use of a simple, yet effective, app stimulation technique. Using this technique, we successfully triggered and disclosed additional behaviors on more than 60% of the analyzed malware samples. This qualitatively demonstrates the versatility of CopperDroid's ability to improve dynamic-based code coverage.", "title": "" }, { "docid": "60a655d6b6d79f55151e871d2f0d4d34", "text": "The clinical characteristics of drug hypersensitivity reactions are very heterogeneous as drugs can actually elicit all types of immune reactions. The majority of allergic reactions involve either drug-specific IgE or T cells. Their stimulation leads to quite distinct immune responses, which are classified according to Gell and Coombs. Here, an extension of this subclassification, which considers the distinct T-cell functions and immunopathologies, is presented. These subclassifications are clinically useful, as they require different treatment and diagnostic steps. Copyright © 2007 S. Karger AG, Basel", "title": "" }, { "docid": "eeb1fb4b6fe17f3021afd92be86a48f2", "text": "Despite immense technological advances, learners still prefer studying text from printed hardcopy rather than from computer screens. Subjective and objective differences between on-screen and on-paper learning were examined in terms of a set of cognitive and metacognitive components, comprising a Metacognitive Learning Regulation Profile (MLRP) for each study media. Participants studied expository texts of 1000-1200 words in one of the two media and for each text they provided metacognitive prediction-of-performance judgments with respect to a subsequent multiple-choice test. Under fixed study time (Experiment 1), test performance did not differ between the two media, but when study time was self-regulated (Experiment 2) worse performance was observed on screen than on paper. The results suggest that the primary differences between the two study media are not cognitive but rather metacognitive--less accurate prediction of performance and more erratic study-time regulation on screen than on paper. 
More generally, this study highlights the contribution of metacognitive regulatory processes to learning and demonstrates the potential of the MLRP methodology for revealing the source of subjective and objective differences in study performance among study conditions.", "title": "" } ]
scidocsrr
34b3a30cb068e4dacb3475ae56713a9c
Convolutional RNN: An enhanced model for extracting features from sequential data
[ { "docid": "56321ec6dfc3d4c55fc99125e942cf44", "text": "The last decade has seen a substantial body of literature on the recognition of emotion from speech. However, in comparison to related speech processing tasks such as Automatic Speech and Speaker Recognition, practically no standardised corpora and test-conditions exist to compare performances under exactly the same conditions. Instead a multiplicity of evaluation strategies employed – such as cross-validation or percentage splits without proper instance definition – prevents exact reproducibility. Further, in order to face more realistic scenarios, the community is in desperate need of more spontaneous and less prototypical data. This INTERSPEECH 2009 Emotion Challenge aims at bridging such gaps between excellent research on human emotion recognition from speech and low compatibility of results. The FAU Aibo Emotion Corpus [1] serves as basis with clearly defined test and training partitions incorporating speaker independence and different room acoustics as needed in most reallife settings. This paper introduces the challenge, the corpus, the features, and benchmark results of two popular approaches towards emotion recognition from speech.", "title": "" } ]
[ { "docid": "9b0bddb295cd7485ae9c3bfcf3b639a3", "text": "Graphics processing units (GPUs) continue to grow in popularity for general-purpose, highly parallel, high-throughput systems. This has forced GPU vendors to increase their focus on general purpose workloads, sometimes at the expense of the graphics-specific workloads. Using GPUs for general-purpose computation is a departure from the driving forces behind programmable GPUs that were focused on a narrow subset of graphics rendering operations. Rather than focus on purely graphics-related or general-purpose use, we have designed and modeled an architecture that optimizes for both simultaneously to efficiently handle all GPU workloads. In this paper, we present Nyami, a co-optimized GPU architecture and simulation model with an open-source implementation written in Verilog. This approach allows us to more easily explore the GPU design space in a synthesizable, cycle-precise, modular environment. An instruction-precise functional simulator is provided for co-simulation and verification. Overall, we assume a GPU may be used as a general-purpose GPU (GPGPU) or a graphics engine and account for this in the architecture's construction and in the options and modules selectable for synthesis and simulation. To demonstrate Nyami's viability as a GPU research platform, we exploit its flexibility and modularity to explore the impact of a set of architectural decisions. These include sensitivity to cache size and associativity, barrel and switch-on-stall multithreaded instruction scheduling, and software vs. hardware implementations of rasterization. Through these experiments, we gain insight into commonly accepted GPU architecture decisions, adapt the architecture accordingly, and give examples of the intended use as a GPU research tool.", "title": "" }, { "docid": "cc6e7b82468243d7f92861fa155c10ee", "text": "Road throughput can be increased by driving at small inter-vehicle time gaps. The amplification of velocity disturbances in upstream direction, however, poses limitations to the minimum feasible time gap. String-stable behavior is thus considered an essential requirement for the design of automatic distance control systems, which are needed to allow for safe driving at time gaps well below 1 s. Theoretical analysis reveals that this requirement can be met using wireless inter-vehicle communication to provide real-time information of the preceding vehicle, in addition to the information obtained by common Adaptive Cruise Control (ACC) sensors. In order to validate these theoretical results and to demonstrate the technical feasibility, the resulting control system, known as Cooperative ACC (CACC), is implemented on a test fleet consisting of six passenger vehicles. Experiments clearly show that the practical results match the theoretical analysis, thereby indicating the possibilities for short-distance vehicle following.", "title": "" }, { "docid": "a8af37df01ad45139589e82bd81deb61", "text": "As technology use continues to rise, especially among young individuals, there are concerns that excessive use of technology may impact academic performance. Researchers have started to investigate the possible negative effects of technology use on college academic performance, but results have been mixed. The following study seeks to expand upon previous studies by exploring the relationship among the use of a wide variety of technology forms and an objective measure of academic performance (GPA) using a 7-day time diary data collection method. 
The current study also seeks to examine both underclassmen and upperclassmen to see if these groups differ in how they use technology. Upperclassmen spent significantly more time using technology for academic and work-related purposes, whereas underclassmen spent significantly more time using cell phones, online chatting, and social networking sites. Significant negative correlations with GPA emerged for television, online gaming, adult site, and total technology use categories. Keywords: Technology use, academic performance, post-secondary education.", "title": "" }, { "docid": "93b880dbc635a49ffc7a9e6906b094f6", "text": "Abstract machines provide a certain separation between platform-dependent and platform-independent concerns in compilation. Many of the differences between architectures are encapsulated in the specific abstract machine implementation and the bytecode is left largely architecture independent. Taking advantage of this fact, we present a framework for estimating upper and lower bounds on the execution times of logic programs running on a bytecode-based abstract machine. Our approach includes a one-time, program-independent profiling stage which calculates constants or functions bounding the execution time of each abstract machine instruction. Then, a compile-time cost estimation phase, using the instruction timing information, infers expressions giving platform-dependent upper and lower bounds on actual execution time as functions of input data sizes for each program. Working at the abstract machine level makes it possible to take into account low-level issues in new architectures and platforms by just reexecuting the calibration stage instead of having to tailor the analysis for each architecture and platform. Applications of such predicted execution times include debugging/verification of time properties, certification of time properties in mobile code, granularity control in parallel/distributed computing, and resource-oriented specialization.", "title": "" }, { "docid": "a5911891697a1b2a407f231cf0ad6c28", "text": "In this paper, a new control method for the parallel operation of inverters operating in an island grid or connected to an infinite bus is described. Frequency and voltage control, including mitigation of voltage harmonics, are achieved without the need for any common control circuitry or communication between inverters. Each inverter supplies a current that is the result of the voltage difference between a reference ac voltage source and the grid voltage across a virtual complex impedance. The reference ac voltage source is synchronized with the grid, with a phase shift, depending on the difference between rated and actual grid frequency. A detailed analysis shows that this approach has a superior behavior compared to existing methods, regarding the mitigation of voltage harmonics, short-circuit behavior and the effectiveness of the frequency and voltage control, as it takes the R to X line impedance ratio into account. 
Experiments show the behavior of the method for an inverter feeding a highly nonlinear load and during the connection of two parallel inverters in operation.", "title": "" }, { "docid": "d4774f784e3b439dfb77b0f10a8c4950", "text": "As consequence of the considerable increase of the electrical power demand in vehicles, the adoption of a combined direct-drive starter/alternator system is being seriously pursued and a new generation of vehicle alternators delivering power up to 6 kW over the entire range of the engine speed is soon expected for use with connection to a 42 V bus. The surface permanent magnet (SPM) machines offer many of the features sought for such future automotive power generation systems, and thereby a substantial improvement in the control of their output voltage would allow the full exploitation of their attractive characteristics in the direct-drive starter/alternator application without significant penalties otherwise resulting on the machine-fed power converter. Concerning that, this paper reports on the original solution adopted in a proof-of-concept axial-flux permanent magnet machine (AFPM) prototype to provide weakening of the flux linkage with speed and thereby achieve constant-power operation over a wide speed range. The principle being utilized is introduced and described, including design dimensions and experimental data taken from the proof-of-concept machine prototype.", "title": "" }, { "docid": "e34ba7711bf03aedfc34ce3b7c4335b3", "text": "Graph layout problems are a particular class of combinatorial optimization problems whose goal is to find a linear layout of an input graph in such way that a certain objective cost is optimized. This survey considers their motivation, complexity, approximation properties, upper and lower bounds, heuristics and probabilistic analysis on random graphs. The result is a complete view of the current state of the art with respect to layout problems from an algorithmic point of view.", "title": "" }, { "docid": "57d5b69473898b0ae31fcb2f7b0660af", "text": "This paper describes an approach for managing the interaction of human users with computer-controlled agents in an interactive narrative-oriented virtual environment. In these kinds of systems, the freedom of the user to perform whatever action she desires must be balanced with the preservation of the storyline used to control the system's characters. We describe a technique, narrative mediation, that exploits a plan-based model of narrative structure to manage and respond to users' actions inside a virtual world. We define two general classes of response to situations where users execute actions that interfere with story structure: accommodation and intervention. Finally, we specify an architecture that uses these definitions to monitor and automatically characterize user actions, and to compute and implement responses to unanticipated activity. The approach effectively integrates user action and system response into the unfolding narrative, providing for the balance between a user's sense of control within the story world and the user's sense of coherence of the overall narrative.", "title": "" }, { "docid": "6254241cb765d5a280c5f4fb9d599944", "text": "Photodegradation is an abiotic process in the dissipation of pesticides where molecular excitation by absorption of light energy results in various organic reactions, or reactive oxygen species such as OH*, O3, and 1O2 specifically or nonspecifically oxidize the functional groups in a pesticide molecule. 
In the case of soil photolysis, the heterogeneity of soil together with soil properties varying with meteorological conditions makes photolytic processes difficult to understand. In contrast to solution photolysis, where light is attenuated by solid particles, both absorption and emission profiles of a pesticide are modified through interaction with soil components such as adsorption to clay minerals or solubilization to humic substances. Diffusion of a pesticide molecule results in heterogeneous concentration in soil, and either steric constraint or photoinduced generation of reactive species under the limited mobility sometimes modifies degradation mechanisms. Extensive investigations of meteorological effects on soil moisture and temperature as well as development of an elaborate testing chamber controlling these factors seems to provide better conditions for researchers to examine the photodegradation of pesticides on soil under conditions similar to the real environment. However, the mechanistic analysis of photodegradation has just begun, and there still remain many issues to be clarified. For example, how photoprocesses affect the electronic states of pesticide molecules on soil or how the reactive oxygen species are generated on soil via interaction with clay minerals and humic substances should be investigated in greater detail. From this standpoint, the application of diffuse reflectance spectroscopy and usage or development of various probes to trap intermediate species is highly desired. Furthermore, only limited information is yet available on the reactions of pesticides on soil with atmospheric chemical species. For photodegradation on plants, the importance of an emission spectrum of the light source near its surface was clarified. Most photochemical information comes from photolysis in organic solvents or on glass surfaces and/or plant metabolism studies. Epicuticular waxes may be approximated by long-chain hydrocarbons as a very viscous liquid or solid, but the existing form of pesticide molecules in waxes is still obscure. Either coexistence of formulation agents or steric constraint in the rigid medium would cause a change of molecular excitation, deactivation, and photodegradation mechanisms, which should be further investigated to understand the dissipation profiles of a pesticide in or on crops in the field. A thin-layer system with a coat of epicuticular waxes extracted from leaves or isolated cuticles has been utilized as a model, but its application has been very limited. There appear to be gaps in our knowledge about the surface chemistry and photochemistry of pesticides in both rigid media and plant metabolism. Photodegradation studies, for example, by using these models to eliminate contribution from metabolic conversion as much as possible, should be extensively conducted in conjunction with wax chemistry, with the controlling factors being clarified. As with soil surfaces, the effects of atmospheric oxidants should also be investigated. Based on this knowledge, new methods of kinetic analysis or a device simulating the fate of pesticides on these surfaces could be more rationally developed. Concerning soil photolysis, detailed mechanistic analysis of the mobility and fate of pesticides together with volatilization from soil surfaces has been initiated and its spatial distribution with time has been simulated with reasonable precision on a laboratory scale. 
Although mechanistic analyses have been conducted on penetration of pesticides through cuticular waxes, its combination with photodegradation to simulate the real environment is awaiting further investigation.", "title": "" }, { "docid": "b5c2e36e805f3ca96cde418137ed0239", "text": "PURPOSE\nTo report a novel method for measuring the degree of inferior oblique muscle overaction and to investigate the correlation with other factors.\n\n\nDESIGN\nCross-sectional diagnostic study.\n\n\nMETHODS\nOne hundred and forty-two eyes (120 patients) were enrolled in this study. Subjects underwent a full orthoptic examination and photographs were obtained in the cardinal positions of gaze. The images were processed using Photoshop and analyzed using the ImageJ program to measure the degree of inferior oblique muscle overaction. Reproducibility or interobserver variability was assessed by Bland-Altman plots and by calculation of the intraclass correlation coefficient (ICC). The correlation between the degree of inferior oblique muscle overaction and the associated factors was estimated with linear regression analysis.\n\n\nRESULTS\nThe mean angle of inferior oblique muscle overaction was 17.8 ± 10.1 degrees (range, 1.8-54.1 degrees). The 95% limit of agreement of interobserver variability for the degree of inferior oblique muscle overaction was ±1.76 degrees, and ICC was 0.98. The angle of inferior oblique muscle overaction showed significant correlation with the clinical grading scale (R = 0.549, P < .001) and with hypertropia in the adducted position (R = 0.300, P = .001). The mean angles of inferior oblique muscle overaction classified into grades 1, 2, 3, and 4 according to the clinical grading scale were 10.5 ± 9.1 degrees, 16.8 ± 7.8 degrees, 24.3 ± 8.8 degrees, and 40.0 ± 12.2 degrees, respectively (P < .001).\n\n\nCONCLUSIONS\nWe describe a new method for measuring the degree of inferior oblique muscle overaction using photographs of the cardinal positions. It has the potential to be a diagnostic tool that measures inferior oblique muscle overaction with minimal observer dependency.", "title": "" }, { "docid": "6e707e17ce2079a9c7cf5c02cd1744c7", "text": "A data-driven identification of dynamical systems requiring only minimal prior knowledge is promising whenever no analytically derived model structure is available, e.g., from first principles in physics. However, meta-knowledge on the system’s behavior is often given and should be exploited: Stability as fundamental property is essential when the model is used for controller design or movement generation. Therefore, this paper proposes a framework for learning stable stochastic systems from data. We focus on identifying a state-dependent coefficient form of the nonlinear stochastic model which is globally asymptotically stable according to probabilistic Lyapunov methods. We compare our approach to other state of the art methods on real-world datasets in terms of flexibility and stability.", "title": "" }, { "docid": "07295446da02d11750e05f496be44089", "text": "As robots become more ubiquitous and capable of performing complex tasks, the importance of enabling untrained users to interact with them has increased. In response, unconstrained natural-language interaction with robots has emerged as a significant research area. We discuss the problem of parsing natural language commands to actions and control structures that can be readily implemented in a robot execution system. 
Our approach learns a parser based on example pairs of English commands and corresponding control language expressions. We evaluate this approach in the context of following route instructions through an indoor environment, and demonstrate that our system can learn to translate English commands into sequences of desired actions, while correctly capturing the semantic intent of statements involving complex control structures. The procedural nature of our formal representation allows a robot to interpret route instructions online while moving through a previously unknown environment. 1 Motivation and Problem Statement In this paper, we discuss our work on grounding natural language–interpreting human language into semantically informed structures in the context of robotic perception and actuation. To this end, we explore the question of interpreting natural language commands so they can be executed by a robot, specifically in the context of following route instructions through a map. Natural language (NL) is a rich, intuitive mechanism by which humans can interact with systems around them, offering sufficient signal to support robot task planning. Human route instructions include complex language constructs, which robots must be able to execute without being given a fully specified world model such as a map. Our goal is to investigate whether it is possible to learn a parser that produces · All authors are affiliated with the University of Washington, Seattle, USA. · Email: {cynthia,eherbst,lsz,fox}@cs.washington.edu", "title": "" }, { "docid": "db42b2c5b9894943c3ba05fad07ee2f9", "text": "This paper deals principally with the grid connection problem of a kite-based system, named the “Kite Generator System (KGS).” It presents a control scheme of a closed-orbit KGS, which is a wind power system with a relaxation cycle. Such a system consists of a kite with its orientation mechanism and a power transformation system that connects the previous part to the electric grid. Starting from a given closed orbit, the optimal tether's length rate variation (the kite's tether radial velocity) and the optimal orbit's period are found. The trajectory-tracking problem is not considered in this paper; only the kite's tether radial velocity is controlled via the electric machine rotation velocity. The power transformation system transforms the mechanical energy generated by the kite into electrical energy that can be transferred to the grid. A Matlab/simulink model of the KGS is employed to observe its behavior, and to insure the control of its mechanical and electrical variables. In order to improve the KGS's efficiency in case of slow changes of wind speed, a maximum power point tracking (MPPT) algorithm is proposed.", "title": "" }, { "docid": "64a3877186106c911891f4f6fe7fbede", "text": "In this paper, we present a multimodal emotion recognition framework called EmotionMeter that combines brain waves and eye movements. To increase the feasibility and wearability of EmotionMeter in real-world applications, we design a six-electrode placement above the ears to collect electroencephalography (EEG) signals. We combine EEG and eye movements for integrating the internal cognitive states and external subconscious behaviors of users to improve the recognition accuracy of EmotionMeter. 
The experimental results demonstrate that modality fusion with multimodal deep neural networks can significantly enhance the performance compared with a single modality, and the best mean accuracy of 85.11% is achieved for four emotions (happy, sad, fear, and neutral). We explore the complementary characteristics of EEG and eye movements for their representational capacities and identify that EEG has the advantage of classifying happy emotion, whereas eye movements outperform EEG in recognizing fear emotion. To investigate the stability of EmotionMeter over time, each subject performs the experiments three times on different days. EmotionMeter obtains a mean recognition accuracy of 72.39% across sessions with the six-electrode EEG and eye movement features. These experimental results demonstrate the effectiveness of EmotionMeter within and between sessions.", "title": "" }, { "docid": "5ce93a1c09b4da41f0cc920d5c7e6bdc", "text": "Humanitarian operations comprise a wide variety of activities. These activities differ in temporal and spatial scope, as well as objectives, target population and with respect to the delivered goods and services. Despite a notable variety of agendas of the humanitarian actors, the requirements on the supply chain and supporting logistics activities remain similar to a large extent. This motivates the development of a suitably generic reference model for supply chain processes in the context of humanitarian operations. Reference models have been used in commercial environments for a range of purposes, such as analysis of structural, functional, and behavioural properties of supply chains. Our process reference model aims to support humanitarian organisations when designing appropriately adapted supply chain processes to support their operations, visualising their processes, measuring their performance and thus, improving communication and coordination of organisations. A top-down approach is followed in which modular process elements are developed sequentially and relevant performance measures are identified. This contribution is conceptual in nature and intends to lay the foundation for future research.", "title": "" }, { "docid": "b4f2d62f5c99fc3fb2b8c548adb71578", "text": "The successful motor rehabilitation of stroke patients requires early intensive and task-specific therapy. A recent Cochrane Review, although based on a limited number of randomized controlled trials (RCTs), showed that early robotic training of the upper limb (i.e., during acute or subacute phase) can enhance motor learning and improve functional abilities more than chronic-phase training. In this article, a new subacute-phase RCT with the Neuro-Rehabilitation-roBot (NeReBot) is presented. While in our first study we used the NeReBot in addition to conventional therapy, in this new trial we used the same device in substitution of standard proximal upper-limb rehabilitation. With this protocol, robot patients achieved similar reductions in motor impairment and enhancements in paretic upper-limb function to those gained by patients in a control group. By analyzing these results and those of previous studies, we hypothesize a new robotic protocol for acute and subacute stroke patients based on both treatment modalities (in addition and in substitution).", "title": "" }, { "docid": "e19d60d8638f1afa26830c4fe06a1c53", "text": "An option is a short-term skill consisting of a control policy for a specified region of the state space, and a termination condition recognizing leaving that region. 
In prior work, we proposed an algorithm called Deep Discovery of Options (DDO) to discover options to accelerate reinforcement learning in Atari games. This paper studies an extension to robot imitation learning, called Discovery of Deep Continuous Options (DDCO), where low-level continuous control skills parametrized by deep neural networks are learned from demonstrations. We extend DDO with: (1) a hybrid categorical–continuous distribution model to parametrize high-level policies that can invoke discrete options as well as continuous control actions, and (2) a cross-validation method that relaxes DDO's requirement that users specify the number of options to be discovered. We evaluate DDCO in simulation of a 3-link robot in the vertical plane pushing a block with friction and gravity, and in two physical experiments on the da Vinci surgical robot: needle insertion, where a needle is grasped and inserted into a silicone tissue phantom, and needle bin picking, where needles and pins are grasped from a pile and categorized into bins. In the 3-link arm simulation, results suggest that DDCO can take 3x fewer demonstrations to achieve the same reward compared to a baseline imitation learning approach. In the needle insertion task, DDCO was successful 8/10 times compared to the next most accurate imitation learning baseline 6/10. In the surgical bin picking task, the learned policy successfully grasps a single object in 66 out of 99 attempted grasps, and in all but one case successfully recovered from failed grasps by retrying a second time.", "title": "" }, { "docid": "a4e122d0b827d25bea48d41487437d74", "text": "We introduce UniAuth, a set of mechanisms for streamlining authentication to devices and web services. With UniAuth, a user first authenticates himself to his UniAuth client, typically his smartphone or wearable device. His client can then authenticate to other services on his behalf. In this paper, we focus on exploring the user experiences with an early iPhone prototype called Knock x Knock. To manage a variety of accounts securely in a usable way, Knock x Knock incorporates features not supported in existing password managers, such as tiered and location-aware lock control, authentication to laptops via knocking, and storing credentials locally while working with laptops seamlessly. In two field studies, 19 participants used Knock x Knock for one to three weeks with their own devices and accounts. Our participants were highly positive about Knock x Knock, demonstrating the desirability of our approach. We also discuss interesting edge cases and design implications.", "title": "" }, { "docid": "bd2fcdd0b7139bf719f1ec7ffb4fe5d5", "text": "Much is known on how facial expressions of emotion are produced, including which individual muscles are most active in each expression. Yet, little is known on how this information is interpreted by the human visual system. This paper presents a systematic study of the image dimensionality of facial expressions of emotion. In particular, we investigate how recognition degrades when the resolution of the image (i.e., number of pixels when seen as a 5.3 by 8 degree stimulus) is reduced. We show that recognition is only impaired in practice when the image resolution goes below 20 × 30 pixels. A study of the confusion tables demonstrates that each expression of emotion is consistently confused by a small set of alternatives and that the confusion is not symmetric, i.e., misclassifying emotion a as b does not imply we will mistake b for a. 
This asymmetric pattern is consistent over the different image resolutions and cannot be explained by the similarity of muscle activation. Furthermore, although women are generally better at recognizing expressions of emotion at all resolutions, the asymmetry patterns are the same. We discuss the implications of these results for current models of face perception.", "title": "" }, { "docid": "0d3403ce2d1613c1ea6b938b3ba9c5e6", "text": "Extracting a set of generalizable rules that govern the dynamics of complex, high-level interactions between humans based only on observations is a high-level cognitive ability. Mastery of this skill marks a significant milestone in the human developmental process. A key challenge in designing such an ability in autonomous robots is discovering the relationships among discriminatory features. Identifying features in natural scenes that are representative of a particular event or interaction (i.e. »discriminatory features») and then discovering the relationships (e.g., temporal/spatial/spatio-temporal/causal) among those features in the form of generalized rules are non-trivial problems. They often appear as a »chicken-and-egg» dilemma. This paper proposes an end-to-end learning framework to tackle these two problems in the context of learning generalized, high-level rules of human interactions from structured demonstrations. We employed our proposed deep reinforcement learning framework to learn a set of rules that govern a behavioral intervention session between two agents based on observations of several instances of the session. We also tested the accuracy of our framework with human subjects in diverse situations.", "title": "" } ]
scidocsrr
ada944c22cda2db0f760398c18033354
The Evolution of the Web and Implications for an Incremental Crawler
[ { "docid": "80e4748abbb22d2bfefa5e5cbd78fb86", "text": "A reimplementation of the UNIX file system is described. The reimplementation provides substantially higher throughput rates by using more flexible allocation policies that allow better locality of reference and can be adapted to a wide range of peripheral and processor characteristics. The new file system clusters data that is sequentially accessed and provides tw o block sizes to allo w fast access to lar ge files while not wasting large amounts of space for small files. File access rates of up to ten times f aster than the traditional UNIX file system are e xperienced. Longneeded enhancements to the programmers’ interface are discussed. These include a mechanism to place advisory locks on files, extensions of the name space across file systems, the ability to use long file names, and provisions for administrati ve control of resource usage. Revised February 18, 1984 CR", "title": "" } ]
[ { "docid": "b0bae633eb8b54a8a0a174da8eb59b26", "text": " Advancement in payment technologies have an important impact on the quality of life. The emerging payment technologies create both opportunities and challenges for future. Being a quick and convenient process, contactless payment gained its momentum, especially in merchants, where throughput is the main important parameter. However, it poses risk to issuers as no robust verification method of customer is available. Thus giving rise to quests to evolve and sustain a wellorganized, efficient, reliable and secure unified payment system, which may contribute to the smooth functioning of the market by eliminating scratch in business. This article presents an approach and module by which one card can communicate with the other using Near Field Communication (NFC) technology to transfer money from payer’s bank to payee’s bank by digital means. This approach eliminates the need of physical cash and also serves all types of payment and identity needs. Embodiments of this approach furnish a medium for cashless card-to-card transaction. The module, which is called Swing-Pay, communicates with its concerned bank via GSM. The security of this module is intensified using biometric authentication. The article also presents an app on Android platform, which works as a scanner of the proposed module to read the identity details of concerned person, the owner of the card. We have also presented the prototype of a digital card. This card can also be used as virtual identity card (ID), accumulating the information of all ID cards including electronic Passport, Voter ID, and Driving License.", "title": "" }, { "docid": "9bfd06aea9cbbce811a448fb0b1a5534", "text": "Sarcasm detection refers to the identification of the use of sarcasma humorous literary device in text. The use of sarcasm can be misleading and often hinders accurate comprehension of sentences. This paper aims at presenting the challenges in detection and ideas inculcated in existing literature to overcome these challenges. This study reveals that sarcasm detection is an entity of ongoing research and that more novel techniques are required to improve its efficiency. INTRODUCTION According to Cambridge’s dictionary, the term “sarcasm” means the use of remarks that clearly mean the opposite of what they say, made in order to hurt someone's feelings or to criticize someone/something in a humorous way. Sarcasm detection is the process of identifying whether a given piece of text is to be comprehended using its literal meaning, or as its opposite meaning. If a piece of text is to be comprehended opposite to its literal meaning, then it contains sarcasm. CHALLENGES FACED Natural Language Processing systems such as Online customer review summarization systems, dialogue systems or monitoring systems for brands face difficulty in identifying sarcasm. Typical Sentimental Analysis Systems do not support sarcasm detection. The results obtained from Sentimental analysis may provide a result opposite to what was intended. This adversely affects businesses employing this technology. Datasets used to perform sarcasm detection are usually derived from online review sites, social media posts and comments and micro-blogging sites. The structure of these sentences isn’t clearly defined, and they are short in nature, thus leading to ambiguity. Apart from these issues, there exist certain specific challenges in sarcasm detection as follows. 
• The datasets used for the detection process pose a problem due to the ambiguity and unnecessary plethora of details mentioned in the text. It is quite tough to extract useful information from the huge volumes of crude data. • It is an easier task to detect whether a person is using sarcasm in speech. The use of sarcasm can be detected from the tone of speech, body language and facial expressions. However, identifying sarcasm from a piece of text poses certain difficulties for many Natural Language Processing systems. • Sarcasm detection demands certain prerequisites such as being well informed regarding current issues and trends, e.g., “Yet, another mind-boggling performance by the Australian cricket team in England.” • Sarcasm detection, at times, requires the use of common-sense knowledge. • Sometimes sarcasm uses hyperbole. Hyperbole is the use of exaggeration, i.e., the use of words belonging to the superlative degree, e.g., “They ran like greased lightning.”", "title": "" }, { "docid": "dbcae5be70fef927ccac30876b0a8bcf", "text": "Many operating system services require special privilege to execute their tasks. A programming error in a privileged service opens the door to system compromise in the form of unauthorized acquisition of privileges. In the worst case, a remote attacker may obtain superuser privileges. In this paper, we discuss the methodology and design of privilege separation, a generic approach that lets parts of an application run with different levels of privilege. Programming errors occurring in the unprivileged parts can no longer be abused to gain unauthorized privileges. Privilege separation is orthogonal to capability systems or application confinement and enhances the security of such systems even further. Privilege separation is especially useful for system services that authenticate users. These services execute privileged operations depending on internal state not known to an application confinement mechanism. As a concrete example, the concept of privilege separation has been implemented in OpenSSH. However, privilege separation is equally useful for other authenticating services. We illustrate how separation of privileges reduces the amount of OpenSSH code that is executed with special privilege. Privilege separation prevents known security vulnerabilities in prior OpenSSH versions including some that were unknown at the time of its implementation.", "title": "" }, { "docid": "e525a752409edc5165cfafed08ec6e57", "text": "In this paper, we propose a recurrent neural network architecture for early sequence classification, when the model is required to output a label as soon as possible with negligible decline in accuracy. Our model is capable of learning how many sequence tokens it needs to observe in order to make a prediction; moreover, the number of steps required differs for each sequence. Experiments on sequential MNIST show that the proposed architecture focuses on different sequence parts during inference, which correspond to contours of the handwritten digits. We also demonstrate the improvement in the prediction quality with a simultaneous reduction in the prefix size used, the extent of which depends on the distribution of distinct class features over time.", "title": "" }, { "docid": "c95894477d7279deb7ddbb365030c34e", "text": "Among mammals living in social groups, individuals form communication networks where they signal their identity and social status, facilitating social interaction. 
In spite of its importance for understanding of mammalian societies, the coding of individual-related information in the vocal signals of non-primate mammals has been relatively neglected. The present study focuses on the spotted hyena Crocuta crocuta, a social carnivore known for its complex female-dominated society. We investigate if and how the well-known hyena's laugh, also known as the giggle call, encodes information about the emitter. By analyzing acoustic structure in both temporal and frequency domains, we show that the hyena's laugh can encode information about age, individual identity and dominant/subordinate status, providing cues to receivers that could enable assessment of the social position of an emitting individual. The range of messages encoded in the hyena's laugh is likely to play a role during social interactions. This call, together with other vocalizations and other sensory channels, should ensure an array of communication signals that support the complex social system of the spotted hyena. Experimental studies are now needed to decipher precisely the communication network of this species.", "title": "" }, { "docid": "c3a3f4128d4268f174f278be4039f7b0", "text": "Suicide pacts are uncommon and mainly committed by male-female pairs in a consortial relationship. The victims frequently choose methods such as hanging, poisoning, using a firearm, etc; however, a case of a suicide pact by drowning is rare in forensic literature. We report a case where a male and a female, both young adults, in a relationship of adopted \"brother of convenience\" were found drowned in a river. The victims were bound together at their wrists which helped with our conclusion this was a suicide pact. The medico-legal importance of wrist binding in drowning cases is also discussed in this article.", "title": "" }, { "docid": "ace2fa767a14ee32f596256ebdf9554f", "text": "Computing systems have steadily evolved into more complex, interconnected, heterogeneous entities. Ad-hoc techniques are most often used in designing them. Furthermore, researchers and designers from both academia and industry have focused on vertical approaches to emphasizing the advantages of one specific feature such as fault tolerance, security or performance. Such approaches led to very specialized computing systems and applications. Autonomic systems, as an alternative approach, can control and manage themselves automatically with minimal intervention by users or system administrators. This paper presents an autonomic framework in developing and implementing autonomic computing services and applications. Firstly, it shows how to apply this framework to autonomically manage the security of networks. Then an approach is presented to develop autonomic components from existing legacy components such as software modules/applications or hardware resources (router, processor, server, etc.). Experimental evaluation of the prototype shows that the system can be programmed dynamically to enable the components to operate autonomously.", "title": "" }, { "docid": "affb15022a558f44e2117f08dd826bbe", "text": "We present a novel approach to free-viewpoint video. Our main contribution is the formulation of a hybrid approach between image morphing and depth-image based rendering. When rendering the scene from novel viewpoints, we use both dense pixel correspondences between image pairs as well as an underlying, view-dependent geometrical model. Our novel reconstruction scheme iteratively refines geometric and correspondence information. 
By combining the strengths of both depth and correspondence estimation, our approach enables free-viewpoint video also for challenging scenes as well as for recordings that may violate typical constraints in multiview reconstruction. For example, our method is robust against inaccurate camera calibration, asynchronous capture, and imprecise depth reconstruction. Rendering results for different scenes and applications demonstrate the versatility and robustness of our approach.", "title": "" }, { "docid": "b36f29d1d0f373a3aa209fc3185f5516", "text": "A natural generalization of the ARCH (Autoregressive Conditional Heteroskedastic) process introduced in Engle (1982) to allow for past conditional variances in the current conditional variance equation is proposed. Stationarity conditions and autocorrelation structure for this new class of parametric models are derived. Maximum likelihood estimation and testing are also considered. Finally, an empirical example relating to the uncertainty of the inflation rate is presented.", "title": "" }, { "docid": "256afadf1604bd8c5c1413555cb892a4", "text": "A continuous-time dynamic model of a network of N nonlinear elements interacting via random asymmetric couplings is studied. A self-consistent mean-field theory, exact in the N → ∞ limit, predicts a transition from a stationary phase to a chaotic phase occurring at a critical value of the gain parameter. The autocorrelations of the chaotic flow as well as the maximal Lyapunov exponent are calculated.", "title": "" }, { "docid": "20b6881a9faf4811b504fd1791babe68", "text": "When users post photos on Facebook, they have the option of allowing their friends, followers, or anyone at all to subsequently reshare the photo. A portion of the billions of photos posted to Facebook generates cascades of reshares, enabling many additional users to see, like, comment, and reshare the photos. In this paper we present characteristics of such cascades in aggregate, finding that a small fraction of photos account for a significant proportion of reshare activity and generate cascades of non-trivial size and depth. We also show that the true influence chains in such cascades can be much deeper than what is visible through direct attribution. To illuminate how large cascades can form, we study the diffusion trees of two widely distributed photos: one posted on President Barack Obama's page following his reelection victory, and another posted by an individual Facebook user hoping to garner enough likes for a cause. We show that the two cascades, despite achieving comparable total sizes, are markedly different in their time evolution, reshare depth distribution, predictability of subcascade sizes, and the demographics of users who propagate them. The findings suggest not only that cascades can achieve considerable size but that they can do so in distinct ways.", "title": "" }, { "docid": "ae8ad19049574cd52106e0df51cc4e68", "text": "In the domain of e-health, there are diverse and heterogeneous health care systems with different brands on various platforms. One of the most important challenges in this field is interoperability, which plays a key role in information exchange and sharing. Achieving interoperability is a difficult task because of the complexity and diversity of systems, standards, and kinds of information. The lack of interoperability would lead to increased costs and errors in medical operations in hospitals. 
The purpose of this article is to present a conceptual model for solving interoperability in health information systems. A Health Service Bus (HSB) as an integrated infrastructure is suggested to facilitate Service Oriented Architecture. A scenario-based evaluation on the proposed conceptual model shows that adopting web service technology is an effective way for this task.", "title": "" }, { "docid": "2d3adb98f6b1b4e161d84314958960e5", "text": "BACKGROUND\nBright light therapy was shown to be a promising treatment for depression during pregnancy in a recent open-label study. In an extension of this work, we report findings from a double-blind placebo-controlled pilot study.\n\n\nMETHOD\nTen pregnant women with DSM-IV major depressive disorder were randomly assigned from April 2000 to January 2002 to a 5-week clinical trial with either a 7000 lux (active) or 500 lux (placebo) light box. At the end of the randomized controlled trial, subjects had the option of continuing in a 5-week extension phase. The Structured Interview Guide for the Hamilton Depression Scale-Seasonal Affective Disorder Version was administered to assess changes in clinical status. Salivary melatonin was used to index circadian rhythm phase for comparison with antidepressant results.\n\n\nRESULTS\nAlthough there was a small mean group advantage of active treatment throughout the randomized controlled trial, it was not statistically significant. However, in the longer 10-week trial, the presence of active versus placebo light produced a clear treatment effect (p =.001) with an effect size (0.43) similar to that seen in antidepressant drug trials. Successful treatment with bright light was associated with phase advances of the melatonin rhythm.\n\n\nCONCLUSION\nThese findings provide additional evidence for an active effect of bright light therapy for antepartum depression and underscore the need for an expanded randomized clinical trial.", "title": "" }, { "docid": "cc8b634daad1088aa9f4c43222fab279", "text": "In this paper, a comparision between the conventional LSTM network and the one-dimensional grid LSTM network applied on single word speech recognition is conducted. The performance of the networks are measured in terms of accuracy and training time. The conventional LSTM model is the current state of the art method to model speech recognition. However, the grid LSTM architecture has proven to be successful in solving other emperical tasks such as translation and handwriting recognition. When implementing the two networks in the same training framework with the same training data of single word audio files, the conventional LSTM network yielded an accuracy rate of 64.8 % while the grid LSTM network yielded an accuracy rate of 65.2 %. Statistically, there was no difference in the accuracy rate between the models. In addition, the conventional LSTM network took 2 % longer to train. However, this difference in training time is considered to be of little significance when tralnslating it to absolute time. Thus, it can be concluded that the one-dimensional grid LSTM model performs just as well as the conventional one.", "title": "" }, { "docid": "f257b55e0cdffd6ab1129fa73a509e27", "text": "UNLABELLED\nA leak test performed according to ASTM F2338-09 Standard Test Method for Nondestructive Detection of Leaks in Packages by Vacuum Decay Method was developed and validated for container-closure integrity verification of a lyophilized product in a parenteral vial package system. 
This nondestructive leak test method is intended for use in manufacturing as an in-process package integrity check, and for testing product stored on stability in lieu of sterility tests. Method development and optimization challenge studies incorporated artificially defective packages representing a range of glass vial wall and sealing surface defects, as well as various elastomeric stopper defects. Method validation required 3 days of random-order replicate testing of a test sample population of negative-control, no-defect packages and positive-control, with-defect packages. Positive-control packages were prepared using vials each with a single hole laser-drilled through the glass vial wall. Hole creation and hole size certification was performed by Lenox Laser. Validation study results successfully demonstrated the vacuum decay leak test method's ability to accurately and reliably detect those packages with laser-drilled holes greater than or equal to approximately 5 μm in nominal diameter. All development and validation studies were performed at Whitehouse Analytical Laboratories in Whitehouse, NJ, under the direction of consultant Dana Guazzo of RxPax, LLC, using a VeriPac 455 Micro Leak Test System by Packaging Technologies & Inspection (Tuckahoe, NY). Bristol Myers Squibb (New Brunswick, NJ) fully subsidized all work.\n\n\nLAY ABSTRACT\nA leak test performed according to ASTM F2338-09 Standard Test Method for Nondestructive Detection of Leaks in Packages by Vacuum Decay Method was developed and validated to detect defects in stoppered vial packages containing lyophilized product for injection. This nondestructive leak test method is intended for use in manufacturing as an in-process package integrity check, and for testing product stored on stability in lieu of sterility tests. Test method validation study results proved the method capable of detecting holes laser-drilled through the glass vial wall greater than or equal to 5 μm in nominal diameter. Total test time is less than 1 min per package. All method development and validation studies were performed at Whitehouse Analytical Laboratories in Whitehouse, NJ, under the direction of consultant Dana Guazzo of RxPax, LLC, using a VeriPac 455 Micro Leak Test System by Packaging Technologies & Inspection (Tuckahoe, NY). Bristol Myers Squibb (New Brunswick, NJ) fully subsidized all work.", "title": "" }, { "docid": "6349e0444220d4a8ea3c34755954a58a", "text": "We present QuickNet, a fast and accurate network architecture that is both faster and significantly more accurate than other “fast” deep architectures like SqueezeNet. Furthermore, it uses less parameters than previous networks, making it more memory efficient. We do this by making two major modifications to the reference “Darknet” model (Redmon et al, 2015): 1) The use of depthwise separable convolutions and 2) The use of parametric rectified linear units. We make the observation that parametric rectified linear units are computationally equivalent to leaky rectified linear units at test time and the observation that separable convolutions can be interpreted as a compressed Inception network (Chollet, 2016). Using these observations, we derive a network architecture, which we call QuickNet, that is both faster and more accurate than previous models. 
Our architecture provides at least four major advantages: (1) A smaller model size, which is more tenable on memory constrained systems; (2) A significantly faster network which is more tenable on computationally constrained systems; (3) A high accuracy of 95.7% on the CIFAR-10 Dataset which outperforms all but one result published so far, although we note that our works are orthogonal approaches and can be combined (4) Orthogonality to previous model compression approaches allowing for further speed gains to be realized.", "title": "" }, { "docid": "77cfb72acbc2f077c3d9b909b0a79e76", "text": "In this paper, we analyze two general-purpose encoding types, trees and graphs systematically, focusing on trends over increasingly complex problems. Tree and graph encodings are similar in application but offer distinct advantages and disadvantages in genetic programming. We describe two implementations and discuss their evolvability. We then compare performance using symbolic regression on hundreds of random nonlinear target functions of both 1-dimensional and 8-dimensional cases. Results show the graph encoding has less bias for bloating solutions but is slower to converge and deleterious crossovers are more frequent. The graph encoding however is found to have computational benefits, suggesting it to be an advantageous trade-off between regression performance and computational effort.", "title": "" }, { "docid": "ea19c92701d9171c57ee380ac5346358", "text": "The motor skills of patients with spinal muscular atrophy, type I (SMA-I) are very limited. It is difficult to quantify the motor abilities of these patients and as a result there is currently no validated measure of motor function that can be utilized as an outcome measure in clinical trials of SMA-I. We have developed the Children's Hospital of Philadelphia Infant Test of Neuromuscular Disorders (\"CHOP INTEND\") to evaluate the motor skills of patients with SMA-I. The test was developed following the evaluation of 26 infants with SMA-I mean age 11.5 months (1.4-37.9 months) with the Test of Infant Motor Performance and The Children's Hospital of Philadelphia Test of Strength in SMA, a newly devised motor assessment for SMA. Items for the CHOP INTEND were selected by an expert panel based on item mean and standard deviation, item frequency distribution, and Chronbach's alpha. Intra-rater reliability of the resulting test was established by test-retest of 9 infants with SMA-I over a 2 month period; Intraclass correlation coefficient (ICC) (3,1)=0.96. Interrater reliability was by video analysis of a mixed group of infants with neuromuscular disease by 4 evaluators; ICC (3,4)=0.98 and in a group of 8 typically developing infants by 5 evaluators ICC (3,5)=0.93. The face validity of the CHOP INTEND is supported by the use of an expert panel in item selection; however, further validation is needed. The CHOP INTEND is a reliable measure of motor skills in patients with SMA-I and neuromuscular disorders presenting in infancy.", "title": "" }, { "docid": "357b798f0429a29bb3210cfc3f031c3a", "text": "The Facial Action Coding System (FACS) is a widely used protocol for recognizing and labelling facial expression by describing the movement of muscles of the face. FACS is used to objectively measure the frequency and intensity of facial expressions without assigning any emotional meaning to those muscle movements. Instead FACS breaks down facial expressions into their smallest discriminable movements called Action Units. 
Each Action Unit creates a distinct change in facial appearance, such as an eyebrow lift or nose wrinkle. FACS coders can identify the Action Units which are present on the face when viewing still images or videos. Psychological research has used FACS to examine a variety of research questions including social-emotional development, neuropsychiatric disorders, and deception. In the course of this report we provide an overview of FACS and the Action Units, its reliability as a measure, and how it has been applied in some key areas of psychological research.", "title": "" } ]
scidocsrr
6acd4886598cf47c4acabfb2c1cf0bdb
Coffee Ingestion Enhances 1-Mile Running Race Performance.
[ { "docid": "c9bfd3b31a8a95898d45819037341307", "text": "OBJECTIVE\nInvestigation of the effect of a green tea-caffeine mixture on weight maintenance after body weight loss in moderately obese subjects in relation to habitual caffeine intake.\n\n\nRESEARCH METHODS AND PROCEDURES\nA randomized placebo-controlled double blind parallel trial in 76 overweight and moderately obese subjects, (BMI, 27.5 +/- 2.7 kg/m2) matched for sex, age, BMI, height, body mass, and habitual caffeine intake was conducted. A very low energy diet intervention during 4 weeks was followed by 3 months of weight maintenance (WM); during the WM period, the subjects received a green tea-caffeine mixture (270 mg epigallocatechin gallate + 150 mg caffeine per day) or placebo.\n\n\nRESULTS\nSubjects lost 5.9 +/-1.8 (SD) kg (7.0 +/- 2.1%) of body weight (p < 0.001). At baseline, satiety was positively, and in women, leptin was inversely, related to subjects' habitual caffeine consumption (p < 0.01). High caffeine consumers reduced weight, fat mass, and waist circumference more than low caffeine consumers; resting energy expenditure was reduced less and respiratory quotient was reduced more during weight loss (p < 0.01). In the low caffeine consumers, during WM, green tea still reduced body weight, waist, respiratory quotient and body fat, whereas resting energy expenditure was increased compared with a restoration of these variables with placebo (p < 0.01). In the high caffeine consumers, no effects of the green tea-caffeine mixture were observed during WM.\n\n\nDISCUSSION\nHigh caffeine intake was associated with weight loss through thermogenesis and fat oxidation and with suppressed leptin in women. In habitual low caffeine consumers, the green tea-caffeine mixture improved WM, partly through thermogenesis and fat oxidation.", "title": "" } ]
[ { "docid": "bda892eb6cdcc818284f56b74c932072", "text": "In this paper, a low power and low jitter 12-bit CMOS digitally controlled oscillator (DCO) design is presented. The CMOS DCO is designed based on a ring oscillator implemented with Schmitt trigger based inverters. Simulations of the proposed DCO using 32 nm CMOS predictive transistor model (PTM) achieves controllable frequency range of 570 MHz~850 MHz with a wide linearity. Monte Carlo simulation demonstrates that the time-period jitter due to random power supply fluctuation is under 75 ps and the power consumption is 2.3 mW at 800 MHz with 0.9 V power supply.", "title": "" }, { "docid": "d4362c98e92d93d40f99dd483f4157fe", "text": "BACKGROUND\nBlastic plasmacytoid dendritic cell neoplasm (BPDC) is a rare hematologic neoplasm, which almost always involves the skin and shows poor prognosis.\n\n\nOBJECTIVE\nThe aim of our study was to enhance BPDC diagnosis and indications for prognosis.\n\n\nMETHODS\nThis study involved 26 patients with BPDC. To investigate the histogenesis of BPDC, we reviewed the clinical features and stained markers of various hematopoietic lineages, chemokines, and their receptors.\n\n\nRESULTS\nBone-marrow infiltration was detected in 13 of the 19 cases examined and leukemic changes in 18. Complete remission was achieved in 14 cases, but more than half of the patients showed recurrence within a short time, and 14 patients died of the disease after 1 to 25 months (mean 8.5 months). Positivity for CD123 was detected in 18 of 24 cases and for T-cell leukemia 1 in 18 of 22 cases. Of the chemokines and their receptors, 8 of 15 skin biopsy specimens proved to be positive for CXCL12. Leukemic change subsequent to skin lesions occurred in 7 of 8 CXCL12-positive cases (87.5%) and in 3 of 6 CXCL12-negative cases (50%). Seven of the 8 CXCL12-positive patients (87.5%) and two of the 6 CXCL12-negative patients (33.3%) have died, whereas one of 8 CXCL12-positive patients (12.5%) and 4 of 6 CXCL12-negative patients (66.7%) remain alive.\n\n\nLIMITATIONS\nThe number of patients was limited.\n\n\nCONCLUSIONS\nWe speculate that the presence of CXCL12-positive cells in the skin may be associated with leukemic change and a poor prognosis.", "title": "" }, { "docid": "3ad45560c2a375fac8881910a36355b1", "text": "Procedural methods for terrain synthesis are capable of creating realistic depictions of heightfield terrains with little user intervention. However, users often do wish to intervene in controlling the placement and shape of landforms, but without sacrificing realism. In this paper, we present a sketching interface to procedural terrain generation. This system enables users to draw the silhouette, spine and bounding curves of both extruding (hills and mountains) and embedding landforms (river courses and canyons).\n Terrain is interactively generated to match the sketched constraints using multiresolution surface deformation. In addition, the wavelet noise characteristics of silhouette strokes are propagated to the surrounding terrain. With terrain sketching users can interactively create or modify landscapes incorporating varied and complex land-forms.", "title": "" }, { "docid": "37c005b87b3ccdfad86c760ecba7b8de", "text": "Intelligent processing of complex signals such as images is often performed by a hierarchy of nonlinear processing layers, such as a deep net or an object recognition cascade. Joint estimation of the parameters of all the layers is a difficult nonconvex optimization. 
We describe a general strategy to learn the parameters and, to some extent, the architecture of nested systems, which we call the method of auxiliary coordinates (MAC). This replaces the original problem involving a deeply nested function with a constrained problem involving a different function in an augmented space without nesting. The constrained problem may be solved with penalty-based methods using alternating optimization over the parameters and the auxiliary coordinates. MAC has provable convergence, is easy to implement reusing existing algorithms for single layers, can be parallelized trivially and massively, applies even when parameter derivatives are not available or not desirable, can perform some model selection on the fly, and is competitive with state-of-the-art nonlinear optimizers even in the serial computation setting, often providing reasonable models within a few iterations. The continued increase in recent years in data availability and processing power has enabled the development and practical applicability of ever more powerful models in statistical machine learning, for example to recognize faces or speech, or to translate natural language. However, physical limitations in serial computation suggest that scalable processing will require algorithms that can be massively parallelized, so they can profit from the thousands of inexpensive processors available in cloud computing. We focus on hierarchical, or nested, processing architectures. As a particular but important example, consider deep neural nets (fig. 1), which were originally inspired by biological systems such as the visual and auditory cortex in the mammalian brain (Serre et al., 2007), and which have been proven very successful at learning sophisticated tasks, such as recognizing faces or speech, when trained on data.", "title": "" }, { "docid": "110742230132649f178d2fa99c8ffade", "text": "Recent approaches based on artificial neural networks (ANNs) have shown promising results for named-entity recognition (NER). In order to achieve high performances, ANNs need to be trained on a large labeled dataset. However, labels might be difficult to obtain for the dataset on which the user wants to perform NER: label scarcity is particularly pronounced for patient note de-identification, which is an instance of NER. In this work, we analyze to what extent transfer learning may address this issue. In particular, we demonstrate that transferring an ANN model trained on a large labeled dataset to another dataset with a limited number of labels improves upon the state-of-the-art results on two different datasets for patient note de-identification.", "title": "" }, { "docid": "95350d45a65cb6932f26be4c4d417a30", "text": "This paper presents a detailed performance comparison (including efficiency, EMC performance and component electrical stress) between boost and buck type PFC under critical conduction mode (CRM). In universal input (90–265Vac) applications, the CRM buck PFC has around 1% higher efficiency compared to its counterpart at low-line (90Vac) condition. Due to the low voltage swing of switch, buck PFC has a better CM EMI performance than boost PFC. It seems that the buck PFC is more attractive in low power applications which only need to meet the IEC61000-3-2 Class D standard based on the comparison. 
The experimental results from two 100-W prototypes are also presented for side by side comparison.", "title": "" }, { "docid": "4a8448ab4c1c9e0a1df5e2d1c1d20417", "text": "We present an empirical framework for testing game strategies in The Settlers of Catan, a complex win-lose game that lacks any analytic solution. This framework provides the means to change different components of an autonomous agent's strategy, and to test them in suitably controlled ways via performance metrics in game simulations and via comparisons of the agent's behaviours with those exhibited in a corpus of humans playing the game. We provide changes to the game strategy that not only improve the agent's strength, but corpus analysis shows that they also bring the agent closer to a model of human players.", "title": "" }, { "docid": "0b1af361042c7372955804c287ebb6a7", "text": "We propose a new vector encoding scheme (tree quantization) that obtains lossy compact codes for high-dimensional vectors via tree-based dynamic programming. Similarly to several previous schemes such as product quantization, these codes correspond to codeword numbers within multiple codebooks. We propose an integer programming-based optimization that jointly recovers the coding tree structure and the codebooks by minimizing the compression error on a training dataset. In the experiments with diverse visual descriptors (SIFT, neural codes, Fisher vectors), tree quantization is shown to combine fast encoding and state-of-the-art accuracy in terms of the compression error, the retrieval performance, and the image classification error.", "title": "" }, { "docid": "42366db7e9c27dd30b64557e2c413bec", "text": "This paper discusses plasma-assisted conversion of pyrolysis gas (pyrogas) fuel to synthesis gas (syngas, combination of hydrogen and carbon monoxide). Pyrogas is a product of biomass, municipal wastes, or coal-gasification process that usually contains hydrogen, carbon monoxide, carbon dioxide, water, unreacted light and heavy hydrocarbons, and tar. These hydrocarbons diminish the fuel value of pyrogas, thereby necessitating the need for the conversion of the hydrocarbons. Various conditions and reforming reactions were considered for the conversion of pyrogas into syngas. Nonequilibrium plasma reforming is an effective homogenous process which makes use of catalysts unnecessary for fuel reforming. The effectiveness of gliding arc plasma as a nonequilibrium plasma discharge is demonstrated in the fuel reforming reaction processes with the aid of a specially designed low current device also known as gliding arc plasma reformer. Experimental results obtained focus on yield, molar concentration, carbon balance, and enthalpy at different conditions.", "title": "" }, { "docid": "8aaaa2b1410522afe5dd604af1140ec2", "text": "This paper provides a pragmatic approach to analysing qualitative data, using actual data from a qualitative dental public health study for demonstration purposes. The paper also critically explores how computers can be used to facilitate this process, the debate about the verification (validation) of qualitative analyses and how to write up and present qualitative research studies.", "title": "" }, { "docid": "27775805c45a82cbd31fd9a5e93f3df1", "text": "In a dynamic world, mechanisms allowing prediction of future situations can provide a selective advantage. We suggest that memory systems differ in the degree of flexibility they offer for anticipatory behavior and put forward a corresponding taxonomy of prospection. 
The adaptive advantage of any memory system can only lie in what it contributes for future survival. The most flexible is episodic memory, which we suggest is part of a more general faculty of mental time travel that allows us not only to go back in time, but also to foresee, plan, and shape virtually any specific future event. We review comparative studies and find that, in spite of increased research in the area, there is as yet no convincing evidence for mental time travel in nonhuman animals. We submit that mental time travel is not an encapsulated cognitive system, but instead comprises several subsidiary mechanisms. A theater metaphor serves as an analogy for the kind of mechanisms required for effective mental time travel. We propose that future research should consider these mechanisms in addition to direct evidence of future-directed action. We maintain that the emergence of mental time travel in evolution was a crucial step towards our current success.", "title": "" }, { "docid": "282424d3a055bcc2d0d5c99c6f8e58e9", "text": "Over the last few years, neuroimaging techniques have contributed greatly to the identification of the structural and functional neuroanatomy of anxiety disorders. The amygdala seems to be a crucial structure for fear and anxiety, and has consistently been found to be activated in anxiety-provoking situations. Apart from the amygdala, the insula and anterior cingulate cortex seem to be critical, and all three have been referred to as the \"fear network.\" In the present article, we review the main findings from three major lines of research. First, we examine human models of anxiety disorders, including fear conditioning studies and investigations of experimentally induced panic attacks. Then we turn to research in patients with anxiety disorders and take a close look at post-traumatic stress disorder and obsessive-compulsive disorder. Finally, we review neuroimaging studies investigating neural correlates of successful treatment of anxiety, focusing on exposure-based therapy and several pharmacological treatment options, as well as combinations of both.", "title": "" }, { "docid": "a5d96c0cda59ad304d2f9b052611220f", "text": "Behavior-based systems (BBS) have been effective in a variety of applications, but due to their limited use of representation they have not been applied much to more complex problems, such as ones involving temporal sequences, or hierarchical task representations. This paper presents an approach to implementing these AI-level concepts into BBS, without compromising BBS' key properties. We describe a Hierarchical Abstract Behavior Architecture that allows for the representation and execution of complex, sequential, hierarchically structured tasks within a behavior-based framework. The architecture, obtained by introducing the notion of abstract behaviors into BBS, also enables reusability of behaviors across different tasks. The basis for task representation is the behavior network construct which encodes complex, hierarchical plan-like strategies. The approach is validated in experiments on a Pioneer 2DX mobile robot.", "title": "" }, { "docid": "e35994d3f2cb82666115a001dbd002d0", "text": "Filtering relevant documents with respect to entities is an essential task in the context of knowledge base construction and maintenance. It entails processing a time-ordered stream of documents that might be relevant to an entity in order to select only those that contain vital information. 
State-of-the-art approaches to document filtering for popular entities are entity-dependent: they rely on and are also trained on the specifics of differentiating features for each specific entity. Moreover, these approaches tend to use so-called extrinsic information such as Wikipedia page views and related entities which is typically available only for popular head entities. Entity-dependent approaches based on such signals are therefore ill-suited as filtering methods for long-tail entities. In this paper we propose a document filtering method for long-tail entities that is entity-independent and thus also generalizes to unseen or rarely seen entities. It is based on intrinsic features, i.e., features that are derived from the documents in which the entities are mentioned. We propose a set of features that capture informativeness, entity-saliency, and timeliness. In particular, we introduce features based on entity aspect similarities, relation patterns, and temporal expressions and combine these with standard features for document filtering. Experiments following the TREC KBA 2014 setup on a publicly available dataset show that our model is able to improve the filtering performance for long-tail entities over several baselines. Results of applying the model to unseen entities are promising, indicating that the model is able to learn the general characteristics of a vital document. The overall performance across all entities---i.e., not just long-tail entities---improves upon the state-of-the-art without depending on any entity-specific training data.", "title": "" }, { "docid": "47a484d75b1635139f899d2e1875d8f4", "text": "This work presents the concept and methodology as well as the architecture and physical implementation of an integrated node for smart-city applications. The presented integrated node relies on active RFID technology whereas the use case illustrated, with results from a small-scale verification of the presented node, refers to common-type waste-bins. The sensing units deployed for the use case are ultrasonic sensors that provide ranging information which is translated to fill-level estimations; however the use of a versatile active RFID tag within the node is able to afford multiple sensors for a variety of smart-city applications. The most important benefits of the presented node are power minimization, utilization of low-cost components and accurate fill-level estimation with a tiny data-load fingerprint, regarding the specific use case on waste-bins, whereas the node has to be deployed on public means of transportation or similar standard route vehicles within an urban or suburban context.", "title": "" }, { "docid": "ab62cea43aa3ddc848cf129f0ea391a4", "text": "In this work we propose a method for converting triangular meshes into LEGO bricks through a voxel representation of boundary meshes. We present a novel voxelization approach that uses points sampled from a surface model to define which cubes (voxels) and their associated colors will compose the model. All steps of the algorithm were implemented on the GPU and real-time performance was achieved with satisfactory volumetric resolutions. 
Rendering results are illustrated using realistic graphics techniques such as screen space ambient occlusion and irradiance maps.", "title": "" }, { "docid": "d4cd0dabcf4caa22ad92fab40844c786", "text": "NA", "title": "" }, { "docid": "0845902210ac0d4dfcb41902623845ad", "text": "Advances in data storage and image acquisition technologies have enabled the creation of large image datasets. In this scenario, it is necessary to develop appropriate information systems to efficiently manage these collections. The commonest approaches use the so-called Content-Based Image Retrieval (CBIR) systems. Basically, these systems try to retrieve images similar to a user-defined specification or pattern (e.g., shape sketch, image example). Their goal is to support image retrieval based on content properties (e.g., shape, color, texture), usually encoded into feature vectors. One of the main advantages of the CBIR approach is the possibility of an automatic retrieval process, instead of the traditional keyword-based approach, which usually requires very laborious and time-consuming previous annotation of database images. The CBIR technology has been used in several applications such as fingerprint identification, biodiversity information systems, digital libraries, crime prevention, medicine, historical research, among others. This paper aims to introduce the problems and challenges concerned with the creation of CBIR systems, to describe the existing solutions and applications, and to present the state of the art of the existing research in this area.", "title": "" }, { "docid": "aab6a2166b9d39a67ec9ebb127f0956a", "text": "A heuristic approximation algorithm that can optimise the order of firewall rules to minimise packet matching is presented. It has been noted that firewall operators tend to make use of the fact that some firewall rules match most of the traffic, and conversely that others match little of the traffic. Consequently, ordering the rules such that the highest matched rules are as high in the table as possible reduces the processing load in the firewall. Due to dependencies between rules in the rule set this problem, optimising the cost of the packet matching process, has been shown to be NP-hard. This paper proposes an algorithm that is designed to give good performance in terms of minimising the packet matching cost of the firewall. The performance of the algorithm is related to complexity of the firewall rule set and is compared to an alternative algorithm demonstrating that the algorithm here has improved the packet matching cost in all cases.", "title": "" }, { "docid": "1ed692fd2da9c4f6d75fe3c15c7a3492", "text": "The objective of this preliminary study is to investigate whether educational video games can be integrated into a classroom with positive effects for the teacher and students. The challenges faced when introducing a video game into a classroom are twofold: overcoming the notion that a \"toy\" does not belong in the school and developing software that has real educational value while stimulating the learner. We conducted an initial pilot study with 39 second grade students using our mathematic drill software Skills Arena. Early data from the pilot suggests that not only do teachers and students enjoy using Skills Arena, students have exceeded our expectations by doing three times more math problems in 19 days than they would have using traditional worksheets. 
Based on this encouraging qualitative study, future work that focuses on quantitative benefits should likely uncover additional positive results.", "title": "" } ]
scidocsrr
f744b78cbd3ea899e11fb1a037bb712a
An Information-Extraction System for Urdu - A Resource-Poor Language
[ { "docid": "9a6ce56536585e54d3e15613b2fa1197", "text": "This paper discusses the Urdu script characteristics, Urdu Nastaleeq and a simple but a novel and robust technique to recognize the printed Urdu script without a lexicon. Urdu being a family of Arabic script is cursive and complex script in its nature, the main complexity of Urdu compound/connected text is not its connections but the forms/shapes the characters change when it is placed at initial, middle or at the end of a word. The characters recognition technique presented here is using the inherited complexity of Urdu script to solve the problem. A word is scanned and analyzed for the level of its complexity, the point where the level of complexity changes is marked for a character, segmented and feeded to Neural Networks. A prototype of the system has been tested on Urdu text and currently achieves 93.4% accuracy on the average. Keywords— Cursive Script, OCR, Urdu.", "title": "" } ]
[ { "docid": "2b3e78940de9d9a924139e7ce3241e8c", "text": "In today’s world people are extensively using internet and thus are also vulnerable to its flaws. Cyber security is the main area where these flaws are exploited. Intrusion is one way to exploit the internet for search of valuable information that may cause devastating damage, which can be personal or on a large scale. Thus Intrusion detection systems are placed for timely detection of such intrusion and alert the user about the same. Intrusion Detection using hybrid classification technique consist of a hybrid model i.e. misuse detection model (AdTree based) and Anomaly model (svm based).NSL-KDD intrusion detection dataset plays a vital role in calibrating intrusion detection system and is extensively used by the researchers working in the field of intrusion detection. This paper presents Association rule mining technique for IDS.", "title": "" }, { "docid": "0ae0e78ac068d8bc27d575d90293c27b", "text": "Deep web refers to the hidden part of the Web that remains unavailable for standard Web crawlers. To obtain content of Deep Web is challenging and has been acknowledged as a significant gap in the coverage of search engines. To this end, the paper proposes a novel deep web crawling framework based on reinforcement learning, in which the crawler is regarded as an agent and deep web database as the environment. The agent perceives its current state and selects an action (query) to submit to the environment according to Q-value. The framework not only enables crawlers to learn a promising crawling strategy from its own experience, but also allows for utilizing diverse features of query keywords. Experimental results show that the method outperforms the state of art methods in terms of crawling capability and breaks through the assumption of full-text search implied by existing methods.", "title": "" }, { "docid": "a310039e0fd3f732805a6088ad3d1777", "text": "Unsupervised learning of visual similarities is of paramount importance to computer vision, particularly due to lacking training data for fine-grained similarities. Deep learning of similarities is often based on relationships between pairs or triplets of samples. Many of these relations are unreliable and mutually contradicting, implying inconsistencies when trained without supervision information that relates different tuples or triplets to each other. To overcome this problem, we use local estimates of reliable (dis-)similarities to initially group samples into compact surrogate classes and use local partial orders of samples to classes to link classes to each other. Similarity learning is then formulated as a partial ordering task with soft correspondences of all samples to classes. Adopting a strategy of self-supervision, a CNN is trained to optimally represent samples in a mutually consistent manner while updating the classes. The similarity learning and grouping procedure are integrated in a single model and optimized jointly. The proposed unsupervised approach shows competitive performance on detailed pose estimation and object classification.", "title": "" }, { "docid": "a9d948498c0ad0d99759636ea3ba4d1a", "text": "Recently, Real Time Location Systems (RTLS) have been designed to provide location information of positioning target. The kernel of RTLS is localization algorithm, range-base localization algorithm is concerned as high precision. 
This paper introduces real-time range-based indoor localization algorithms, including Time of Arrival, Time Difference of Arrival, Received Signal Strength Indication, Time of Flight, and Symmetrical Double Sided Two Way Ranging. Evaluation criteria are proposed for assessing these algorithms, namely positioning accuracy, scale, cost, energy efficiency, and security. We also introduce the latest solutions and compare their strengths and weaknesses. Finally, we give a recommendation on selecting an algorithm from the viewpoint of practical application needs.", "title": "" }, { "docid": "a30c2a8d3db81ae121e62af5994d3128", "text": "Recent advances in the fields of robotics, cyborg development, moral psychology, trust, multi agent-based systems and socionics have raised the need for a better understanding of ethics, moral reasoning, judgment and decision-making within the system of man and machines. Here we seek to understand key research questions concerning the interplay of ethical trust at the individual level and the social moral norms at the collective end. We review salient works in the fields of trust and machine ethics research, underscore the importance and the need for a deeper understanding of ethical trust at the individual level and the development of collective social moral norms. Drawing upon the recent findings from neural sciences on mirror-neuron system (MNS) and social cognition, we present a bio-inspired Computational Model of Ethical Trust (CMET) to allow investigations of the interplay of ethical trust and social moral norms.", "title": "" }, { "docid": "fb5c9e78960ab840e423741059cbf8b8", "text": "Text mining, also known as text data mining or knowledge discovery from textual databases, refers to the process of extracting interesting and non-trivial patterns or knowledge from text documents. Regarded by many as the next wave of knowledge discovery, text mining has very high commercial values. Last count reveals that there are more than ten high-tech companies offering products for text mining. Has text mining evolved so rapidly to become a mature field? This article attempts to shed some light on the question. We first present a text mining framework consisting of two components: Text refining that transforms unstructured text documents into an intermediate form; and knowledge distillation that deduces patterns or knowledge from the intermediate form. We then survey the state-of-the-art text mining products/applications and align them based on the text refining and knowledge distillation functions as well as the intermediate form that they adopt. In conclusion, we highlight the upcoming challenges of text mining and the opportunities it offers.", "title": "" }, { "docid": "ceb02ddf8b2085d67ccf27c3c5b57dfd", "text": "We present a novel latent embedding model for learning a compatibility function between image and class embeddings, in the context of zero-shot classification. The proposed method augments the state-of-the-art bilinear compatibility model by incorporating latent variables. Instead of learning a single bilinear map, it learns a collection of maps, with the selection of which map to use being a latent variable for the current image-class pair. We train the model with a ranking based objective function which penalizes incorrect rankings of the true class for a given image. 
We empirically demonstrate that our model improves the state-of-the-art for various class embeddings consistently on three challenging publicly available datasets for the zero-shot setting. Moreover, our method leads to visually highly interpretable results with clear clusters of different fine-grained object properties that correspond to different latent variable maps.", "title": "" }, { "docid": "d7ca5db3257c5aaf0524cd3a855ac2a7", "text": "This paper presented the clinical results of breast cancer detection using a radar-based UWB microwave system developed at the University of Bristol. Additionally, the system overview and some experimental laboratory results are presented as well. For the clinical result shown in this contribution, we compare images obtained using the standard X-ray mammography and the radar-based microwave system. The developed microwave system has apparently successfully detected the tumor in correct position, as confirmed on the X-ray image, although the compression suffered by the breast during X-ray makes a precise positional determination impossible.", "title": "" }, { "docid": "395859bbc6c78a8b19eda2ef422dc35b", "text": "Ann Saudi Med 2006;26(4):318-320 Amelia is the complete absence of a limb, which may occur in isolation or as part of multiple congenital malformations.1-3 The condition is uncommon and very little is known with certainty about the etiology. Whatever the cause, however, it results from an event which must have occurred between the fourth and eighth week of embryogenesis.1,3 The causal factors that have been proposed include amniotic band disruption,4 maternal diabetes,5 autosomal recessive mutation6 and drugs such as thalidomide,7 alcohol8 and cocaine.9 We report a case of a female baby with a complex combination of two rare limb abnormalities: left-sided humero-radial synostosis and amelia of the other limbs.", "title": "" }, { "docid": "3738d3c5d5bf4a3de55aa638adac07bb", "text": "The term malware stands for malicious software. It is a program installed on a system without the knowledge of owner of the system. It is basically installed by the third party with the intention to steal some private data from the system or simply just to play pranks. This in turn threatens the computer’s security, wherein computer are used by one’s in day-to-day life as to deal with various necessities like education, communication, hospitals, banking, entertainment etc. Different traditional techniques are used to detect and defend these malwares like Antivirus Scanner (AVS), firewalls, etc. But today malware writers are one step forward towards then Malware detectors. Day-by-day they write new malwares, which become a great challenge for malware detectors. This paper focuses on basis study of malwares and various detection techniques which can be used to detect malwares.", "title": "" }, { "docid": "05258afb93d6a509369860f93c8935ab", "text": "Nasal deformity associated with typical cleft lip can cause aesthetic and functional issues that are difficult to address. The degree of secondary nasal deformity is based on the extent of the original cleft deformity, growth over time, and any prior surgical correction to the nose or lip. Repair and reconstruction of these deformities require comprehensive understanding of embryologic growth, the cleft anatomy, as well as meticulous surgical technique and using a spectrum of structural grafting. 
This article reviews cleft lip nasal deformity, presurgical care, primary cleft rhinoplasty, and definitive cleft septorhinoplasty with a focus on aesthetics and function.", "title": "" }, { "docid": "c5dd31facf6d1f7709d58e7b0ddc0bab", "text": "Website fingerprinting attacks allow a local, passive eavesdropper to identify a web browsing client’s destination web page by extracting noticeable and unique features from her traffic. Such attacks magnify the gap between privacy and security — a client who encrypts her communication traffic may still have her browsing behaviour exposed to lowcost eavesdropping. Previous authors have shown that privacysensitive clients who use anonymity technologies such as Tor are susceptible to website fingerprinting attacks, and some attacks have been shown to outperform others in specific experimental conditions. However, as these attacks differ in data collection, feature extraction and experimental setup, they cannot be compared directly. On the other side of the coin, proposed website fingerprinting defenses (countermeasures) are generally designed and tested only against specific attacks. Some defenses have been shown to fail against more advanced attacks, and it is unclear which defenses would be effective against all attacks. In this paper, we propose a feature-based comparative methodology that allows us to systematize attacks and defenses in order to compare them. We analyze attacks for their sensitivity to different packet sequence features, and analyze the effect of proposed defenses on these features by measuring whether or not the features are hidden. If a defense fails to hide a feature that an attack is sensitive to, then the defense will not work against this attack. Using this methodology, we propose a new network layer defense that can more effectively hide all of the features we consider.", "title": "" }, { "docid": "c0c30c3b9539511e9079ec7894ad754f", "text": "Cardiovascular disease remains the world's leading cause of death. Yet, we have known for decades that the vast majority of atherosclerosis and its subsequent morbidity and mortality are influenced predominantly by diet. This paper will describe a health-promoting whole food, plant-based diet; delineate macro- and micro-nutrition, emphasizing specific geriatric concerns; and offer guidance to physicians and other healthcare practitioners to support patients in successfully utilizing nutrition to improve their health.", "title": "" }, { "docid": "e2af17b368fef36187c895ad5fd20a58", "text": "We study in this paper the problem of jointly clustering and learning representations. As several previous studies have shown, learning representations that are both faithful to the data to be clustered and adapted to the clustering algorithm can lead to better clustering performance, all the more so that the two tasks are performed jointly. We propose here such an approach for k-Means clustering based on a continuous reparametrization of the objective function that leads to a truly joint solution. The behavior of our approach is illustrated on various datasets showing its efficacy in learning representations for objects while clustering them.", "title": "" }, { "docid": "ee06f781207415db38de63f89ca198c4", "text": "State-of-the-art hearing prostheses are equipped with acoustic noise reduction algorithms to improve speech intelligibility. 
Currently, one of the major challenges is to perform acoustic noise reduction in so-called cocktail party scenarios with multiple speakers, in particular because it is difficult, if not impossible, for the algorithm to determine which are the target speaker(s) that should be enhanced, and which speaker(s) should be treated as interfering sources. Recently, it has been shown that electroencephalography (EEG) can be used to perform auditory attention detection, i.e., to detect to which speaker a subject is attending based on recordings of neural activity. In this paper, we combine such an EEG-based auditory attention detection (AAD) paradigm with an acoustic noise reduction algorithm based on the multi-channel Wiener filter (MWF), leading to a neuro-steered MWF. In particular, we analyze how the AAD accuracy affects the noise suppression performance of an adaptive MWF in a sliding-window implementation, where the user switches his attention between two speakers.", "title": "" }, { "docid": "f5b027fedefe929e9530f038c3fb219a", "text": "Outfits in online fashion data are composed of items of many different types (e.g., top, bottom, shoes) that share some stylistic relationship with one another. A representation for building outfits requires a method that can learn both notions of similarity (for example, when two tops are interchangeable) and compatibility (items of possibly different type that can go together in an outfit). This paper presents an approach to learning an image embedding that respects item type, and jointly learns notions of item similarity and compatibility in an end-to-end model. To evaluate the learned representation, we crawled 68,306 outfits created by users on the Polyvore website. Our approach obtains 3-5% improvement over the state-of-the-art on outfit compatibility prediction and fill-in-the-blank tasks using our dataset, as well as an established smaller dataset, while supporting a variety of useful queries.", "title": "" }, { "docid": "7e17c1842a70e416f0a90bdcade31a8e", "text": "A novel feeding system using substrate integrated waveguide (SIW) technique for antipodal linearly tapered slot array antenna (ALTSA) is presented in this paper. After making studies by simulations for a SIW fed ALTSA cell, a 1×8 ALTSA array fed by SIW feeding system at X-band is fabricated and measured, and the measured results show that this array antenna has a wide bandwidth and good performances.", "title": "" }, { "docid": "8296ce0143992c7513051c70758541be", "text": "This article introduces Adaptive Resonance Theory 2-A (ART 2-A), an efficient algorithm that emulates the self-organizing pattern recognition and hypothesis testing properties of the ART 2 neural network architecture, but at a speed two to three orders of magnitude faster. Analysis and simulations show how the ART 2-A systems correspond to ART 2 dynamics at both the fast-learn limit and at intermediate learning rates. Intermediate learning rates permit fast commitment of category nodes but slow recoding, analogous to properties of word frequency effects, encoding specificity effects, and episodic memory. Better noise tolerance is hereby achieved without a loss of learning stability. The ART 2 and ART 2-A systems are contrasted with the leader algorithm. The speed of ART 2-A makes practical the use of ART 2 modules in large scale neural computation. Keywords-Neural networks, Pattern recognition, Category formation, 
Fast learning, Adaptive resonance.", "title": "" }, { "docid": "0d945a9c0d17cb317c15cb9ec8595fe8", "text": "Executive dysfunction has been shown to be a promising endophenotype in neurodevelopmental disorders such as autism spectrum disorder (ASD) and attention-deficit/hyperactivity disorder (ADHD). This article reviewed 26 studies that examined executive function comparing ASD and/or ADHD children. In light of findings from this review, the ASD + ADHD group appears to share impairment in both flexibility and planning with the ASD group, while it shares the response inhibition deficit with the ADHD group. Conversely, deficit in attention, working memory, preparatory processes, fluency, and concept formation does not appear to be distinctive in discriminating from ASD, ADHD, or ASD + ADHD group. On the basis of neurocognitive endophenotype, the common co-occurrence of executive function deficits seems to reflect an additive comorbidity, rather than a separate condition with distinct impairments.", "title": "" }, { "docid": "e7ac73f581ae7799021374ddd3e4d3a2", "text": "Table: Coherence evaluation results on Discrimination and Insertion tasks. † indicates a neural model is significantly superior to its non-neural counterpart with p-value < 0.01.
Model | Discr. Acc | Discr. F1 | Ins.
Random | 50.00 | 50.00 | 12.60
Graph-based (G&S) | 64.23 | 65.01 | 11.93
Dist. sentence (L&H) | 77.54 | 77.54 | 19.32
Grid-all nouns (E&C) | 81.58 | 81.60 | 22.13
Extended Grid (E&C) | 84.95 | 84.95 | 23.28
Grid-CNN | 85.57† | 85.57† | 23.12
Extended Grid-CNN | 88.69† | 88.69† | 25.95†", "title": "" } ]
scidocsrr
682e5efd5090e567c9bf087df7f160bc
Yahoo! music recommendations: modeling music ratings with temporal dynamics and item taxonomy
[ { "docid": "13b887760a87bc1db53b16eb4fba2a01", "text": "Customer preferences for products are drifting over time. Product perception and popularity are constantly changing as new selection emerges. Similarly, customer inclinations are evolving, leading them to ever redefine their taste. Thus, modeling temporal dynamics should be a key when designing recommender systems or general customer preference models. However, this raises unique challenges. Within the eco-system intersecting multiple products and customers, many different characteristics are shifting simultaneously, while many of them influence each other and often those shifts are delicate and associated with a few data instances. This distinguishes the problem from concept drift explorations, where mostly a single concept is tracked. Classical time-window or instance-decay approaches cannot work, as they lose too much signal when discarding data instances. A more sensitive approach is required, which can make better distinctions between transient effects and long term patterns. The paradigm we offer is creating a model tracking the time changing behavior throughout the life span of the data. This allows us to exploit the relevant components of all data instances, while discarding only what is modeled as being irrelevant. Accordingly, we revamp two leading collaborative filtering recommendation approaches. Evaluation is made on a large movie rating dataset by Netflix. Results are encouraging and better than those previously reported on this dataset.", "title": "" }, { "docid": "b48d9053c70f51aa766a3f4706912654", "text": "Social tags are free text labels that are applied to items such as artists, albums and songs. Captured in these tags is a great deal of information that is highly relevant to Music Information Retrieval (MIR) researchers including information about genre, mood, instrumentation, and quality. Unfortunately there is also a great deal of irrelevant information and noise in the tags. Imperfect as they may be, social tags are a source of human-generated contextual knowledge about music that may become an essential part of the solution to many MIR problems. In this article, we describe the state of the art in commercial and research social tagging systems for music. We describe how tags are collected and used in current systems. We explore some of the issues that are encountered when using tags, and we suggest possible areas of exploration for future research.", "title": "" }, { "docid": "44791d65e5f5e4645a6f99c0b2cdac8f", "text": "Electronic Music Distribution (EMD) is in demand of robust, automatically extracted music descriptors. We introduce a timbral similarity measures for comparing music titles. This measure is based on a Gaussian model of cepstrum coefficients. We describe the timbre extractor and the corresponding timbral similarity relation. We describe experiments in assessing the quality of the similarity relation, and show that the measure is able to yield interesting similarity relations, in particular when used in conjunction with other similarity relations. We illustrate the use of the descriptor in several EMD applications developed in the context of the Cuidado European project.", "title": "" } ]
[ { "docid": "8a311f11dc99aab0fc1675b8ee16b776", "text": "Credit scoring has become very important issue due to the recent growth of the credit industry, so the credit department of the bank faces a large amount of credit data. Clearly it is impossible analyzing this huge amount of data both in economic and manpower terms, so data mining techniques were employed for this purpose. So far many data mining methods are proposed to handle credit scoring problems that each of them, has some prominences and limitations than the others, but there is no a comprehensive reference introducing most used data mining method in credit scoring problem. The aim of this study is providing a comprehensive literature survey related to applied data mining techniques in credit scoring context. Such reference can help the researchers to be aware of most common methods in credit scoring evaluation, find their limitations, improve them and suggest new method with better capabilities. At the end we notice the limitation of the most proposed methods and suggest the more applicable method than other proposed.", "title": "" }, { "docid": "23a77ef19b59649b50f168b1cb6cb1c5", "text": "A novel interleaved high step-up converter with voltage multiplier cell is proposed in this paper to avoid the extremely narrow turn-off period and to reduce the current ripple, which flows through the power devices compared with the conventional interleaved boost converter in high step-up applications. Interleaved structure is employed in the input side to distribute the input current, and the voltage multiplier cell is adopted in the output side to achieve a high step-up gain. The voltage multiplier cell is composed of the secondary windings of the coupled inductors, a series capacitor, and two diodes. Furthermore, the switch voltage stress is reduced due to the transformer function of the coupled inductors, which makes low-voltage-rated MOSFETs available to reduce the conduction losses. Moreover, zero-current-switching turn- on soft-switching performance is realized to reduce the switching losses. In addition, the output diode turn-off current falling rate is controlled by the leakage inductance of the coupled inductors, which alleviates the diode reverse recovery problem. Additional active device is not required in the proposed converter, which makes the presented circuit easy to design and control. Finally, a 1-kW 40-V-input 380-V-output prototype operating at 100 kHz switching frequency is built and tested to verify the effectiveness of the presented converter.", "title": "" }, { "docid": "bd2d864aa8c4871e883a2e1f199160de", "text": "This paper proposes a framework for describing, comparing and understanding visualization tools that provide awareness of human activities in software development. The framework has several purposes -- it can act as a formative evaluation mechanism for tool designers; as an assessment tool for potential tool users; and as a comparison tool so that tool researchers can compare and understand the differences between various tools and identify potential new research areas. We use this framework to structure a survey of visualization tools for activity awareness in software development. Based on this survey we suggest directions for future research.", "title": "" }, { "docid": "cb196f1bd373110cf7428a46f73f3a8f", "text": "We present Corona, a wearable device that allows constant high-voltage electrostatic charge to be continuously accumulated in the human body. 
We propose the usages of Corona for three basic functions; generating haptic sensations, generating electric power from body static charge and near-body electric field, and inducing physical force near the body. We describe detailed principle of operation, analysis of produced energy and force, discussion on safety issues, as well as demonstration of proof-of-concept applications for aforementioned basic functions. We conclude with discussion of our experiments using the prototype and applications, which also involves a study to gather user feedbacks. To the best of our knowledge, Corona is the first work to exploit continuous high-voltage static charge on the human body for Human-Computer Interaction purposes.", "title": "" }, { "docid": "52b481885dc7ad62dc4e8b3e31b9e71e", "text": "In this paper, we propose a novel deep learning based video sa li ncy prediction method, named DeepVS. Specifically, we establ i h a large-scale eye-tracking database of videos (LEDOV), which includes 32 ubjects’ fixations on 538 videos. We find from LEDOV that human attention is more likely to be attracted by objects, particularly the moving objects or the moving parts of objects. Hence, an object-to-motion convolutional neural network (OM-CNN) is developed to predict the intra-frame saliency for DeepVS, w hich is composed of the objectness and motion subnets. In OM-CNN, cross-net m ask and hierarchical feature normalization are proposed to combine the sp atial features of the objectness subnet and the temporal features of the motion su b et. We further find from our database that there exists a temporal correlati on of human attention with a smooth saliency transition across video frames. We th us propose saliencystructured convolutional long short-term memory (SS-Conv LSTM) network, using the extracted features from OM-CNN as the input. Consequ ently, the interframe saliency maps of a video can be generated, which consid er both structured output with center-bias and cross-frame transitions of hum an attention maps. Finally, the experimental results show that DeepVS advances t he tate-of-the-art in video saliency prediction.", "title": "" }, { "docid": "25e50a3e98b58f833e1dd47aec94db21", "text": "Sharing knowledge for multiple related machine learning tasks is an effective strategy to improve the generalization performance. In this paper, we investigate knowledge sharing across categories for action recognition in videos. The motivation is that many action categories are related, where common motion pattern are shared among them (e.g. diving and high jump share the jump motion). We propose a new multi-task learning method to learn latent tasks shared across categories, and reconstruct a classifier for each category from these latent tasks. Compared to previous methods, our approach has two advantages: (1) The learned latent tasks correspond to basic motion patterns instead of full actions, thus enhancing discrimination power of the classifiers. (2) Categories are selected to share information with a sparsity regularizer, avoiding falsely forcing all categories to share knowledge. Experimental results on multiple public data sets show that the proposed approach can effectively transfer knowledge between different action categories to improve the performance of conventional single task learning methods.", "title": "" }, { "docid": "07e2b3550183fd4d2a42591a9726f77c", "text": "Modern cryptocurrency systems, such as Ethereum, permit complex financial transactions through scripts called smart contracts. 
These smart contracts are executed many, many times, always without real concurrency. First, all smart contracts are serially executed by miners before appending them to the blockchain. Later, those contracts are serially re-executed by validators to verify that the smart contracts were executed correctly by miners. Serial execution limits system throughput and fails to exploit today's concurrent multicore and cluster architectures. Nevertheless, serial execution appears to be required: contracts share state, and contract programming languages have a serial semantics.\n This paper presents a novel way to permit miners and validators to execute smart contracts in parallel, based on techniques adapted from software transactional memory. Miners execute smart contracts speculatively in parallel, allowing non-conflicting contracts to proceed concurrently, and \"discovering\" a serializable concurrent schedule for a block's transactions, This schedule is captured and encoded as a deterministic fork-join program used by validators to re-execute the miner's parallel schedule deterministically but concurrently.\n Smart contract benchmarks run on a JVM with ScalaSTM show that a speedup of 1.33x can be obtained for miners and 1.69x for validators with just three concurrent threads.", "title": "" }, { "docid": "5d154a62b22415cbedd165002853315b", "text": "Unaccompanied immigrant children are a highly vulnerable population, but research into their mental health and psychosocial context remains limited. This study elicited lawyers’ perceptions of the mental health needs of unaccompanied children in U.S. deportation proceedings and their mental health referral practices with this population. A convenience sample of 26 lawyers who work with unaccompanied children completed a semi-structured, online survey. Lawyers surveyed frequently had mental health concerns about their unaccompanied child clients, used clinical and lay terminology to describe symptoms, referred for both expert testimony and treatment purposes, frequently encountered barriers to accessing appropriate services, and expressed interest in mental health training. The results of this study suggest a complex intersection between the legal and mental health needs of unaccompanied children, and the need for further research and improved service provision in support of their wellbeing.", "title": "" }, { "docid": "ef089e236b937e8410c70c251dfbe923", "text": "the fast development of Graphics Processing Unit (GPU) leads to the popularity of General-purpose usage of GPU (GPGPU). So far, most modern computers are CPU-GPGPU heterogeneous architecture and CPU is used as host processor. In this work, we promote a multithread file chunking prototype system, which is able to exploit the hardware organization of the CPU-GPGPU heterogeneous computer and determine which device should be used to chunk the file to accelerate the content based file chunking operation of deduplication. We built rules for the system to choose which device should be used to chunk file and also found the optimal choice of other related parameters of both CPU and GPGPU subsystem like segment size and block dimension. This prototype was implemented and tested. 
The result of using GTX460(336 cores) and Intel i5 (four cores) shows that this system can increase the chunking speed 63% compared to using GPGPU alone and 80% compared to using CPU alone.", "title": "" }, { "docid": "6b44bd202f964033a2a2433d6322f160", "text": "We apply convolutional neural networks (CNN) to the problem of image orientation detection in the context of determining the correct orientation (from 0, 90, 180, and 270 degrees) of a consumer photo. The problem is especially important for digitazing analog photographs. We substantially improve on the published state of the art in terms of the performance on one of the standard datasets, and test our system on a more difficult large dataset of consumer photos. We use Guided Backpropagation to obtain insights into how our CNN detects photo orientation, and to explain its mistakes.", "title": "" }, { "docid": "f0a22a060fe9df0c2ea46f8d9639a093", "text": "Discourse structure is the hidden link between surface features and document-level properties, such as sentiment polarity. We show that the discourse analyses produced by Rhetorical Structure Theory (RST) parsers can improve document-level sentiment analysis, via composition of local information up the discourse tree. First, we show that reweighting discourse units according to their position in a dependency representation of the rhetorical structure can yield substantial improvements on lexicon-based sentiment analysis. Next, we present a recursive neural network over the RST structure, which offers significant improvements over classificationbased methods.", "title": "" }, { "docid": "302f92267ae6120112f24e685a775a68", "text": "Network measurement remains a missing piece in today's software packet processing platforms. Sketches provide a promising building block for filling this void by monitoring every packet with fixed-size memory and bounded errors. However, our analysis shows that existing sketch-based measurement solutions suffer from severe performance drops under high traffic load. Although sketches are efficiently designed, applying them in network measurement inevitably incurs heavy computational overhead.\n We present SketchVisor, a robust network measurement framework for software packet processing. It augments sketch-based measurement in the data plane with a fast path, which is activated under high traffic load to provide high-performance local measurement with slight accuracy degradations. It further recovers accurate network-wide measurement results via compressive sensing. We have built a SketchVisor prototype on top of Open vSwitch. Extensive testbed experiments show that SketchVisor achieves high throughput and high accuracy for a wide range of network measurement tasks and microbenchmarks.", "title": "" }, { "docid": "a85c13406ddc3dc057f029ba96fdffe1", "text": "We apply statistical machine translation (SMT) tools to generate novel paraphrases of input sentences in the same language. The system is trained on large volumes of sentence pairs automatically extracted from clustered news articles available on the World Wide Web. Alignment Error Rate (AER) is measured to gauge the quality of the resulting corpus. A monotone phrasal decoder generates contextual replacements. 
Human evaluation shows that this system outperforms baseline paraphrase generation techniques and, in a departure from previous work, offers better coverage and scalability than the current best-of-breed paraphrasing approaches.", "title": "" }, { "docid": "046bcb0a39184bdf5a97dba120d8ba0f", "text": "Finishing 90-epoch ImageNet-1k training with ResNet-50 on a NVIDIA M40 GPU takes 14 days. This training requires 10 single precision operations in total. On the other hand, the world’s current fastest supercomputer can finish 2× 10 single precision operations per second (Dongarra et al. 2017). If we can make full use of the supercomputer for DNN training, we should be able to finish the 90-epoch ResNet-50 training in five seconds. However, the current bottleneck for fast DNN training is in the algorithm level. Specifically, the current batch size (e.g. 512) is too small to make efficient use of many processors For large-scale DNN training, we focus on using large-batch data-parallelism synchronous SGD without losing accuracy in the fixed epochs. The LARS algorithm (You, Gitman, and Ginsburg 2017) enables us to scale the batch size to extremely large case (e.g. 32K). We finish the 100-epoch ImageNet training with AlexNet in 24 minutes. Same as Facebook’s result (Goyal et al. 2017), we finish the 90-epoch ImageNet training with ResNet-50 in one hour by 512 Intel KNLs.", "title": "" }, { "docid": "ba4d30e7ea09d84f8f7d96c426e50f34", "text": "Submission instructions: These questions require thought but do not require long answers. Please be as concise as possible. You should submit your answers as a writeup in PDF format via GradeScope and code via the Snap submission site. Submitting writeup: Prepare answers to the homework questions into a single PDF file and submit it via http://gradescope.com. Make sure that the answer to each question is on a separate page. On top of each page write the number of the question you are answering. Please find the cover sheet and the recommended templates located here: Not including the cover sheet in your submission will result in a 2-point penalty. It is also important to tag your answers correctly on Gradescope. We will deduct 5/N points for each incorrectly tagged subproblem (where N is the number of subproblems). This means you can lose up to 5 points for incorrect tagging. Put all the code for a single question into a single file and upload it. Consider a user-item bipartite graph where each edge in the graph between user U to item I, indicates that user U likes item I. We also represent the ratings matrix for this set of users and items as R, where each row in R corresponds to a user and each column corresponds to an item. If user i likes item j, then R i,j = 1, otherwise R i,j = 0. Also assume we have m users and n items, so matrix R is m × n.", "title": "" }, { "docid": "0974cee877ff2fecfda81d48012c07d3", "text": "New method of blinking detection is proposed. The utmost important of blinking detection method is robust against different users, noise, and also change of eye shape. In this paper, we propose blinking detection method by measuring the distance between two arcs of eye (upper part and lower part). We detect eye arcs by apply Gabor filter onto eye image. As we know that Gabor filter has advantage on image processing application since it able to extract spatial localized spectral features such as line, arch, and other shapes. After two of eye arcs are detected, we measure the distance between arcs of eye by using connected labeling method. 
An open eye is marked when the distance between the two arcs is more than the threshold; otherwise, a closed eye is marked when the distance is less than the threshold. The experiment results show that our proposed method is robust enough against different users, noise, and eye shape changes with perfect accuracy.", "title": "" }, { "docid": "48119edba399efba09c1c8198ee78f0d", "text": "In the Markov decision process model, policies are usually evaluated by expected cumulative rewards. As this decision criterion is not always suitable, we propose in this paper an algorithm for computing a policy optimal for the quantile criterion. Both finite and infinite horizons are considered. Finally we experimentally evaluate our approach on random MDPs and on a data center control problem.", "title": "" }, { "docid": "52c8e39a4d6d11a36e46d655cc032a24", "text": "Hundreds of bacterial species make up the mammalian intestinal microbiota. Following perturbations by antibiotics, diet, immune deficiency or infection, this ecosystem can shift to a state of dysbiosis. This can involve overgrowth (blooming) of otherwise under-represented or potentially harmful bacteria (for example, pathobionts). Here, we present evidence suggesting that dysbiosis fuels horizontal gene transfer between members of this ecosystem, facilitating the transfer of virulence and antibiotic resistance genes and thereby promoting pathogen evolution.", "title": "" }, { "docid": "cb2e556dcd7ee57998dbc0c4746f59ff", "text": "Affective understanding of film plays an important role in sophisticated movie analysis, ranking and indexing. However, due to the seemingly inscrutable nature of emotions and the broad affective gap from low-level features, this problem is seldom addressed. In this paper, we develop a systematic approach grounded upon psychology and cinematography to address several important issues in affective understanding. An appropriate set of affective categories are identified and steps for their classification developed. A number of effective audiovisual cues are formulated to help bridge the affective gap. In particular, a holistic method of extracting affective information from the multifaceted audio stream has been introduced. Besides classifying every scene in Hollywood domain movies probabilistically into the affective categories, some exciting applications are demonstrated. The experimental results validate the proposed approach and the efficacy of the audiovisual cues.", "title": "" }, { "docid": "b0593843ce815016a003c60f8f154006", "text": "This paper introduces a method for acquiring forensic-grade evidence from Android smartphones using open source tools. We investigate in particular cases where the suspect has made use of the smartphone's Wi-Fi or Bluetooth interfaces. We discuss the forensic analysis of four case studies, which revealed traces that were left in the inner structure of three mobile Android devices and also indicated security vulnerabilities. Subsequently, we propose a detailed plan for forensic examiners to follow when dealing with investigations of potential crimes committed using the wireless facilities of a suspect Android smartphone. This method can be followed to perform physical acquisition of data without using commercial tools and then to examine them safely in order to discover any activity associated with wireless communications.
We evaluate our method using the Association of Chief Police Officers' (ACPO) guidelines of good practice for computer-based, electronic evidence and demonstrate that it is made up of an acceptable host of procedures for mobile forensic analysis, focused specifically on device Bluetooth and Wi-Fi facilities.", "title": "" } ]
scidocsrr
92008e43fc7926560d9ead316baac9d4
On the Two-Dimensional Simplification of Three-Dimensional Cementless Hip Stem Numerical Models.
[ { "docid": "ddaf11dd14952ca864d386a84a0b0f9d", "text": "Bone loss around femoral hip stems is one of the problems threatening the long-term fixation of uncemented stems. Many believe that this phenomenon is caused by reduced stresses in the bone (stress shielding). In the present study the mechanical consequences of different femoral stem materials were investigated using adaptive bone remodeling theory in combination with the finite element method. Bone-remodeling in the femur around the implant and interface stresses between bone and implant were investigated for fully bonded femoral stems. Cemented stems (cobalt-chrome or titanium alloy) caused less bone resorption and lower interface stresses than uncemented stems made from the same materials. The range of the bone resorption predicted in the simulation models was from 23% in the proximal medial cortex surrounding the cemented titanium alloy stem to 76% in the proximal medial cortex around the uncemented cobalt-chrome stem. Very little bone resorption was predicted around a flexible, uncemented \"iso-elastic\" stem, but the proximal interface stresses increased drastically relative to the stiffer uncemented stems composed of cobalt-chrome or titanium alloy. However, the proximal interface stress peak was reduced and shifted during the adaptive remodeling process. The latter was found particularly in the stiffer uncemented cobalt-chrome-molybdenum implant and less for the flexible iso-elastic implant.", "title": "" } ]
[ { "docid": "88804f285f4d608b81a1cd741dbf2b7e", "text": "Predicting ad click-through rates (CTR) is a massive-scale learning problem that is central to the multi-billion dollar online advertising industry. We present a selection of case studies and topics drawn from recent experiments in the setting of a deployed CTR prediction system. These include improvements in the context of traditional supervised learning based on an FTRL-Proximal online learning algorithm (which has excellent sparsity and convergence properties) and the use of per-coordinate learning rates.\n We also explore some of the challenges that arise in a real-world system that may appear at first to be outside the domain of traditional machine learning research. These include useful tricks for memory savings, methods for assessing and visualizing performance, practical methods for providing confidence estimates for predicted probabilities, calibration methods, and methods for automated management of features. Finally, we also detail several directions that did not turn out to be beneficial for us, despite promising results elsewhere in the literature. The goal of this paper is to highlight the close relationship between theoretical advances and practical engineering in this industrial setting, and to show the depth of challenges that appear when applying traditional machine learning methods in a complex dynamic system.", "title": "" }, { "docid": "45dfa7f6b1702942b5abfb8de920d1c2", "text": "Loneliness is a common condition in older adults and is associated with increased morbidity and mortality, decreased sleep quality, and increased risk of cognitive decline. Assessing loneliness in older adults is challenging due to the negative desirability biases associated with being lonely. Thus, it is necessary to develop more objective techniques to assess loneliness in older adults. In this paper, we describe a system to measure loneliness by assessing in-home behavior using wireless motion and contact sensors, phone monitors, and computer software as well as algorithms developed to assess key behaviors of interest. We then present results showing the accuracy of the system in detecting loneliness in a longitudinal study of 16 older adults who agreed to have the sensor platform installed in their own homes for up to 8 months. We show that loneliness is significantly associated with both time out-of-home (β = -0.88 andp <; 0.01) and number of computer sessions (β = 0.78 and p <; 0.05). R2 for the model was 0.35. We also show the model's ability to predict out-of-sample loneliness, demonstrating that the correlation between true loneliness and predicted out-of-sample loneliness is 0.48. When compared with the University of California at Los Angeles loneliness score, the normalized mean absolute error of the predicted loneliness scores was 0.81 and the normalized root mean squared error was 0.91. These results represent first steps toward an unobtrusive, objective method for the prediction of loneliness among older adults, and mark the first time multiple objective behavioral measures that have been related to this key health outcome.", "title": "" }, { "docid": "8b3f597acb5a5a1333176a13e7dbbe43", "text": "Generalization bounds for time series prediction and other non-i.i.d. learning scenarios that can be found in the machine learning and statistics literature assume that observations come from a (strictly) stationary distribution. The first bounds for completely non-stationary setting were proved in [6]. 
In this work we present an extension of these results and derive novel algorithms for forecasting nonstationary time series. Our experimental results show that our algorithms significantly outperform standard autoregressive models commonly used in practice.", "title": "" }, { "docid": "fb70de7ed3e42c37b130686bfa3aee47", "text": "Data from vehicles instrumented with GPS or other localization technologies are increasingly becoming widely available due to the investments in Connected and Automated Vehicles (CAVs) and the prevalence of personal mobile devices such as smartphones. Tracking or trajectory data from these probe vehicles are already being used in practice for travel time or speed estimation and for monitoring network conditions. However, there has been limited work on extracting other critical traffic flow variables, in particular density and flow, from probe data. This paper presents a microscopic approach (akin to car-following) for inferring the number of unobserved vehicles in between a set of probe vehicles in the traffic stream. In particular, we develop algorithms to extract and exploit the somewhat regular patterns in the trajectories when the probe vehicles travel through stop-and-go waves in congested traffic. Using certain critical points of trajectories as the input, the number of unobserved vehicles between consecutive probes are then estimated through a Naïve Bayes model. The parameters needed for the Naïve Bayes include means and standard deviations for the probability density functions (pdfs) for the distance headways between vehicles. These parameters are estimated through supervised as well as unsupervised learning methods. The proposed ideas are tested based on the trajectory data collected from US 101 and I-80 in California for the FHWA's NGSIM (next generation simulation) project. Under the dense traffic conditions analyzed, the results show that the number of unobserved vehicles between two probes can be predicted with an accuracy of ±1 vehicle almost always.", "title": "" }, { "docid": "e144a814723d205855a61cb52466ce96", "text": "In this article, we discuss the development of automatic artifact reconstruction systems capable of coping with the realities of real-world geometric puzzles that anthropologists and archaeologists face on a daily basis. Such systems must do more than find matching fragments and subsequently align these matched fragments; these systems must be capable of simultaneously solving an unknown number of multiple puzzles where all of the puzzle pieces are mixed together in an unorganized pile and each puzzle may be missing an unknown number of its pieces. Discussion has cast the puzzle reconstruction problem into a generic terminology that is formalized appropriately for the 2-D and 3-D artifact reconstruction problems. Two leading approaches for 2-D tablet reconstruction and four leading approaches for 3-D object reconstruction have been discussed in detail, including partial or complete descriptions for the numerous algorithms upon which these systems rely. Several extensions to the geometric matching problem that use patterns apparent on the fragment outer surface were also discussed that generalize the problem beyond that of matching strictly geometry. The models needed for solving these problems are new and challenging, and most involve 3-D that is largely unexplored by the signal processing community. 
This work is highly relevant to the new 3-D signal processing that is looming on the horizon for tele-immersion.", "title": "" }, { "docid": "d3ac14fd1ac21c4d67060ab914859247", "text": "Decision making in uncertain and risky environments is a prominent area of research. Standard economic theories fail to fully explain human behaviour, while a potentially promising alternative may lie in the direction of Reinforcement Learning (RL) theory. We analyse data for 46 players extracted from a financial market online game and test whether Reinforcement Learning (Q-Learning) could capture these players behaviour using a riskiness measure based on financial modeling. Moreover we test an earlier hypothesis that players are “naíve” (short-sighted). Our results indicate that Reinforcement Learning is a component of the decision-making process. We also find that there is a significant improvement of fitting for some of the players when using a full RL model against a reduced version (myopic), where only immediate reward is valued by the players, indicating that not all players are naíve.", "title": "" }, { "docid": "7f1625c0d1ed39245c77db9cd3ca2bd7", "text": "We address the computational problem of novel human pose synthesis. Given an image of a person and a desired pose, we produce a depiction of that person in that pose, retaining the appearance of both the person and background. We present a modular generative neural network that synthesizes unseen poses using training pairs of images and poses taken from human action videos. Our network separates a scene into different body part and background layers, moves body parts to new locations and refines their appearances, and composites the new foreground with a hole-filled background. These subtasks, implemented with separate modules, are trained jointly using only a single target image as a supervised label. We use an adversarial discriminator to force our network to synthesize realistic details conditioned on pose. We demonstrate image synthesis results on three action classes: golf, yoga/workouts and tennis, and show that our method produces accurate results within action classes as well as across action classes. Given a sequence of desired poses, we also produce coherent videos of actions.", "title": "" }, { "docid": "1743f93da0dfb4910022a4aaba961a4b", "text": "The ability to recognize and predict temporal sequences of sensory inputs is vital for survival in natural environments. Based on many known properties of cortical neurons, hierarchical temporal memory (HTM) sequence memory recently has been proposed as a theoretical framework for sequence learning in the cortex. In this letter, we analyze properties of HTM sequence memory and apply it to sequence learning and prediction problems with streaming data. We show the model is able to continuously learn a large number of variable order temporal sequences using an unsupervised Hebbian-like learning rule. The sparse temporal codes formed by the model can robustly handle branching temporal sequences by maintaining multiple predictions until there is sufficient disambiguating evidence. We compare the HTM sequence memory with other sequence learning algorithms, including statistical methods—autoregressive integrated moving average; feedforward neural networks—time delay neural network and online sequential extreme learning machine; and recurrent neural networks—long short-term memory and echo-state networks on sequence prediction problems with both artificial and real-world data. 
The HTM model achieves comparable accuracy to other state-of-the-art algorithms. The model also exhibits properties that are critical for sequence learning, including continuous online learning, the ability to handle multiple predictions and branching sequences with high-order statistics, robustness to sensor noise and fault tolerance, and good performance without task-specific hyperparameter tuning. Therefore, the HTM sequence memory not only advances our understanding of how the brain may solve the sequence learning problem but is also applicable to real-world sequence learning problems from continuous data streams.", "title": "" }, { "docid": "606bc892776616ffd4f9f9dc44565019", "text": "Despite the various attractive features that Cloud has to offer, the rate of Cloud migration is rather slow, primarily due to the serious security and privacy issues that exist in the paradigm. One of the main problems in this regard is that of authorization in the Cloud environment, which is the focus of our research. In this paper, we present a systematic analysis of the existing authorization solutions in Cloud and evaluate their effectiveness against well-established industrial standards that conform to the unique access control requirements in the domain. Our analysis can benefit organizations by helping them decide the best authorization technique for deployment in Cloud; a case study along with simulation results is also presented to illustrate the procedure of using our qualitative analysis for the selection of an appropriate technique, as per Cloud consumer requirements. From the results of this evaluation, we derive the general shortcomings of the extant access control techniques that are keeping them from providing successful authorization and, therefore, widely adopted by the Cloud community. To that end, we enumerate the features an ideal access control mechanisms for the Cloud should have, and combine them to suggest the ultimate solution to this major security challenge — access control as a service (ACaaS) for the software as a service (SaaS) layer. We conclude that a meticulous research is needed to incorporate the identified authorization features into a generic ACaaS framework that should be adequate for providing high level of extensibility and security by integrating multiple access control models.", "title": "" }, { "docid": "609b1df5196de8809b6293a481868c93", "text": "In this paper, a new localization system utilizing afocal optical flow sensor (AOFS) based sensor fusion for indoor service robots in low luminance and slippery environment is proposed, where conventional localization systems do not perform well. To accurately estimate the moving distance of a robot in a slippery environment, the robot was equipped with an AOFS along with two conventional wheel encoders. To estimate the orientation of the robot, we adopted a forward-viewing mono-camera and a gyroscope. In a very low luminance environment, it is hard to conduct conventional feature extraction and matching for localization. Instead, the interior space structure from an image and robot orientation was assessed. To enhance the appearance of image boundary, rolling guidance filter was applied after the histogram equalization. The proposed system was developed to be operable on a low-cost processor and implemented on a consumer robot. Experiments were conducted in low illumination condition of 0.1 lx and carpeted environment. The robot moved for 20 times in a 1.5 × 2.0 m square trajectory. 
When only wheel encoders and a gyroscope were used for robot localization, the maximum position error was 10.3 m and the maximum orientation error was 15.4°. Using the proposed system, the maximum position error and orientation error were found as 0.8 m and within 1.0°, respectively.", "title": "" }, { "docid": "a4ae0d8042316362380b1976f8278743", "text": "We are able to recognise familiar faces easily across large variations in image quality, though our ability to match unfamiliar faces is strikingly poor. Here we ask how the representation of a face changes as we become familiar with it. We use a simple image-averaging technique to derive abstract representations of known faces. Using Principal Components Analysis, we show that computational systems based on these averages consistently outperform systems based on collections of instances. Furthermore, the quality of the average improves as more images are used to derive it. These simulations are carried out with famous faces, over which we had no control of superficial image characteristics. We then present data from three experiments demonstrating that image averaging can also improve recognition by human observers. Finally, we describe how PCA on image averages appears to preserve identity-specific face information, while eliminating non-diagnostic pictorial information. We therefore suggest that this is a good candidate for a robust face representation.", "title": "" }, { "docid": "0f85d732e3b964b43b7fd613f960756d", "text": "Recent advances in mobile devices and their sensing capabilities have enabled the collection of rich contextual information and mobile device usage records through the device logs. These context-rich logs open a venue for mining the personal preferences of mobile users under varying contexts and thus enabling the development of personalized context-aware recommendation and other related services, such as mobile online advertising. In this article, we illustrate how to extract personal context-aware preferences from the context-rich device logs, or context logs for short, and exploit these identified preferences for building personalized context-aware recommender systems. A critical challenge along this line is that the context log of each individual user may not contain sufficient data for mining his or her context-aware preferences. Therefore, we propose to first learn common context-aware preferences from the context logs of many users. Then, the preference of each user can be represented as a distribution of these common context-aware preferences. Specifically, we develop two approaches for mining common context-aware preferences based on two different assumptions, namely, context-independent and context-dependent assumptions, which can fit into different application scenarios. Finally, extensive experiments on a real-world dataset show that both approaches are effective and outperform baselines with respect to mining personal context-aware preferences for mobile users.", "title": "" }, { "docid": "ad14a9f120aedc84abc99f1715e6769b", "text": "We introduce an interactive tool which enables a user to quickly assemble an architectural model directly over a 3D point cloud acquired from large-scale scanning of an urban scene. The user loosely defines and manipulates simple building blocks, which we call SmartBoxes, over the point samples. These boxes quickly snap to their proper locations to conform to common architectural structures. 
The key idea is that the building blocks are smart in the sense that their locations and sizes are automatically adjusted on-the-fly to fit well to the point data, while at the same time respecting contextual relations with nearby similar blocks. SmartBoxes are assembled through a discrete optimization to balance between two snapping forces defined respectively by a data-fitting term and a contextual term, which together assist the user in reconstructing the architectural model from a sparse and noisy point cloud. We show that a combination of the user's interactive guidance and high-level knowledge about the semantics of the underlying model, together with the snapping forces, allows the reconstruction of structures which are partially or even completely missing from the input.", "title": "" }, { "docid": "d8caa26f9f52802e9b7da57a0447ece4", "text": "Design equations for satisfying the off-nominal operating condition [i.e., only the zero-voltage switching (ZVS) condition] of the Class-E amplifier with a linear shunt capacitance at a duty ratio D=0.5 are derived. A new parameter s (V/s), called the slope of switch voltage when the switch turns on is introduced to obtain an image of the distance from the nominal conditions. By examining off-nominal Class-E operation degree of the design freedom of the Class-E amplifier increases by one. In addition various amplifier parameters such as operating frequency, output power, and load resistance range can be set as design specifications. For example, the peak switch voltage and switch current can be taken into account in the design procedure. Examples of a design procedure of the Class-E amplifier for off-nominal operation are given. The theoretical results were verified with PSpice simulation and experiments.", "title": "" }, { "docid": "45a24862022bbc1cf3e33aea1e4f8b12", "text": "Biohybrid consists of a living organism or cell and at least one engineered component. Designing robot-plant biohybrids is a great challenge: it requires interdisciplinary reconsideration of capabilities intimate specific to the biology of plants. Envisioned advances should improve agricultural/horticultural/social practice and could open new directions in utilization of plants by humans. Proper biohybrid cooperation depends upon effective communication. During evolution, plants developed many ways to communicate with each other, with animals, and with microorganisms. The most notable examples are: the use of phytohormones, rapid long-distance signaling, gravity, and light perception. These processes can now be intentionally re-shaped to establish plant-robot communication. In this article, we focus on plants physiological and molecular processes that could be used in bio-hybrids. We show phototropism and biomechanics as promising ways of effective communication, resulting in an alteration in plant architecture, and discuss the specifics of plants anatomy, physiology and development with regards to the bio-hybrids. Moreover, we discuss ways how robots could influence plants growth and development and present aims, ideas, and realized projects of plant-robot biohybrids.", "title": "" }, { "docid": "86f25f09b801d28ce32f1257a39ddd44", "text": "Modern mobile devices have access to a wealth of data suitable for learning models, which in turn can greatly improve the user experience on the device. For example, language models can improve speech recognition and text entry, and image models can automatically select good photos. 
However, this rich data is often privacy sensitive, large in quantity, or both, which may preclude logging to the data-center and training there using conventional approaches. We advocate an alternative that leaves the training data distributed on the mobile devices, and learns a shared model by aggregating locally-computed updates. We term this decentralized approach Federated Learning. We present a practical method for the federated learning of deep networks that proves robust to the unbalanced and non-IID data distributions that naturally arise. This method allows high-quality models to be trained in relatively few rounds of communication, the principal constraint for federated learning. The key insight is that despite the non-convex loss functions we optimize, parameter averaging over updates from multiple clients produces surprisingly good results, for example decreasing the communication needed to train an LSTM language model by two orders of magnitude.", "title": "" }, { "docid": "1ed9f257129a45388fcf976b87e37364", "text": "Mobile cloud computing is an extension of cloud computing that allow the users to access the cloud service via their mobile devices. Although mobile cloud computing is convenient and easy to use, the security challenges are increasing significantly. One of the major issues is unauthorized access. Identity Management enables to tackle this issue by protecting the identity of users and controlling access to resources. Although there are several IDM frameworks in place, they are vulnerable to attacks like timing attacks in OAuth, malicious code attack in OpenID and huge amount of information leakage when user’s identity is compromised in Single Sign-On. Our proposed framework implicitly authenticates a user based on user’s typing behavior. The authentication information is encrypted into homomorphic signature before being sent to IDM server and tokens are used to authorize users to access the cloud resources. Advantages of our proposed framework are: user’s identity protection and prevention from unauthorized access.", "title": "" }, { "docid": "a2a633c972cb84d9b7d27e347bb59cfa", "text": "This study investigated three-dimensional (3D) texture as a possible diagnostic marker of Alzheimer’s disease (AD). T1-weighted magnetic resonance (MR) images were obtained from 17 AD patients and 17 age and gender-matched healthy controls. 3D texture features were extracted from the circular 3D ROIs placed using a semi-automated technique in the hippocampus and entorhinal cortex. We found that classification accuracies based on texture analysis of the ROIs varied from 64.3% to 96.4% due to different ROI selection, feature extraction and selection options, and that most 3D texture features selected were correlated with the mini-mental state examination (MMSE) scores. The results indicated that 3D texture could detect the subtle texture differences between tissues in AD patients and normal controls, and texture features of MR images in the hippocampus and entorhinal cortex might be related to the severity of AD cognitive impairment. These results suggest that 3D texture might be a useful aid in AD diagnosis.", "title": "" }, { "docid": "2809e4b07123e5d594481e423c001821", "text": "In the current driving environment, the top priority is the safety of person. There are two methods proposed to solve safety problems. One is active sensors method and another is passive sensor method. 
Though with high accuracy, the active sensor method has many disadvantages such as high cost, failure to adapt to complex changes of environments, and problems relating to laws. Thus there is no way to popularize it. In contrast, the passive sensor method is more suitable for current assist systems by virtue of its low cost and ability to acquire lots of information. In this paper, the passive sensor method is applied to a front and rear vision-based collision warning application. Meanwhile, time-to-contact is used for collision judgment analysis and dedicated short-range communications is used to give alert information to nearby vehicles.", "title": "" } ]
scidocsrr
b713b49d4da4b3c3367b9b14b5eb566c
IoT and Cloud Computing in Automation of Assembly Modeling Systems
[ { "docid": "e33dd9c497488747f93cfcc1aa6fee36", "text": "The phrase Internet of Things (IoT) heralds a vision of the future Internet where connecting physical things, from banknotes to bicycles, through a network will let them take an active part in the Internet, exchanging information about themselves and their surroundings. This will give immediate access to information about the physical world and the objects in it leading to innovative services and increase in efficiency and productivity. This paper studies the state-of-the-art of IoT and presents the key technological drivers, potential applications, challenges and future research areas in the domain of IoT. IoT definitions from different perspective in academic and industry communities are also discussed and compared. Finally some major issues of future research in IoT are identified and discussed briefly.", "title": "" } ]
[ { "docid": "a722bc4688ec23f0547d192e5a41fc05", "text": "This study investigated the aggressive components of the dream content of 120 Spanish children and adolescents of 4 different age groups. The C. S. Hall and R. L. Van de Castle (1966) coding system was used to rate the number of dream characters and aggressions, and the content findings were analyzed via the indicators presented by G. W. Domhoff (1993, 1996, 2003). Results confirm the findings of previous studies of gender and age differences in dream content: Boys tend to have more aggressive dream content, which tends to decrease with age until reaching a pattern similar to the normative group; younger children, especially boys, tend to be victims of aggression more frequently than do older children. In addition, a data analysis procedure involving cumulative scoring of the aggression scale as well as nonparametric statistics yielded significant differences between boys and girls of the youngest group for severity of aggression.", "title": "" }, { "docid": "a1bff389a9a95926a052ded84c625a9e", "text": "Automatically assessing the subjective quality of a photo is a challenging area in visual computing. Previous works study the aesthetic quality assessment on a general set of photos regardless of the photo's content and mainly use features extracted from the entire image. In this work, we focus on a specific genre of photos: consumer photos with faces. This group of photos constitutes an important part of consumer photo collections. We first conduct an online study on Mechanical Turk to collect ground-truth and subjective opinions for a database of consumer photos with faces. We then extract technical features, perceptual features, and social relationship features to represent the aesthetic quality of a photo, by focusing on face-related regions. Experiments show that our features perform well for categorizing or predicting the aesthetic quality.", "title": "" }, { "docid": "27465b2c8ce92ccfbbda6c802c76838f", "text": "Nonlinear hyperelastic energies play a key role in capturing the fleshy appearance of virtual characters. Real-world, volume-preserving biological tissues have Poisson’s ratios near 1/2, but numerical simulation within this regime is notoriously challenging. In order to robustly capture these visual characteristics, we present a novel version of Neo-Hookean elasticity. Our model maintains the fleshy appearance of the Neo-Hookean model, exhibits superior volume preservation, and is robust to extreme kinematic rotations and inversions. We obtain closed-form expressions for the eigenvalues and eigenvectors of all of the system’s components, which allows us to directly project the Hessian to semipositive definiteness, and also leads to insights into the numerical behavior of the material. These findings also inform the design of more sophisticated hyperelastic models, which we explore by applying our analysis to Fung and Arruda-Boyce elasticity. We provide extensive comparisons against existing material models.", "title": "" }, { "docid": "aa61fb7a263fc3e27914f3763c9ad464", "text": "Analog computation architectures such as artificial neural network models have received phenomenal attention lately. Massive parallel processing is natural in neural networks, which also meets the development trend of computer science. Because parallelism is a straight way to speed up computation. Thus, considering parallel algorithms on neural networks is quite reasonable. 
Neural network models have been shown to be able to find “good” solutions for some optimization problems in a short time [5]. But, it cannot guarantee finding the real optimal solution except by techniques similar to simulated annealing. Under such circumstances, there is no systematic way to determine the annealing schedule and the time needed for convergence. Vergis et al. [11] showed that analog computation can be simulated efficiently by a Turing machine in polynomial time. Unless P = NP, NP-complete problems cannot be solved by analog computation in polynomial time. But, from the view of time complexity, analog computation is better than digital computation in one perspective. The basic operation", "title": "" }, { "docid": "6b881473c6d4425c26b9de053c30b703", "text": "Current content-based video copy detection approaches mostly concentrate on the visual cues and neglect the audio information. In this paper, we attempt to tackle the video copy detection task resorting to audio information, which is equivalently important as well as visual information in multimedia processing. Firstly, inspired by the bag-of-visual-words model, a bag-of-audio-words (BoA) representation is proposed to characterize each audio frame. Different from naive single-based modeling audio retrieval approaches, BoA is a high-level model due to its perceptual and semantical property. Within the BoA model, a coherency vocabulary indexing structure is adopted to achieve more efficient and effective indexing than the single vocabulary of the standard BoW model. The coherency vocabulary takes advantage of multiple audio features by computing co-occurrence of them across different feature spaces. By enforcing the tight coherency constraint across feature spaces, the coherency vocabulary makes the BoA model more discriminative and robust to various audio transforms. A 2D Hough transform is then applied to aggregate scores from matched audio segments. The segments that fall into the peak bin are identified as the copy segments in the reference video. In addition, we also accomplish video copy detection from both audio and visual cues by performing four late fusion strategies to demonstrate complementarity of audio and visual information in video copy detection. Intensive experiments are conducted on the large-scale dataset of TRECVID 2009 and competitive results are achieved.", "title": "" }, { "docid": "4d3468bb14b7ad933baac5c50feec496", "text": "Conventional material removal techniques, like CNC milling, have been proven to be able to tackle nearly any machining challenge. On the other hand, the major drawback of using conventional CNC machines is the restricted working area and their produced shape limitations. From a conceptual point of view, industrial robot technology could provide an excellent base for machining being both flexible and cost efficient. However, industrial machining robots lack absolute positioning accuracy, are unable to reject/absorb disturbances in terms of process forces and lack reliable programming and simulation tools to ensure right first time machining at production startups. This paper reviews the penetration of industrial robots in the challenging field of machining.", "title": "" }, { "docid": "e1222c5c6c4134b9b90ff2feea6efae2", "text": "Character recognition is one of the challenging tasks of the pattern recognition and machine learning arena. Though a level of saturation has been obtained in machine printed character recognition, there still remains a void while recognizing handwritten scripts.
We, in this paper, have summarized all the existing research efforts on the recognition of printed as well as handwritten Odia alphanumeric characters. Odia is a classical and popular language in the Indian subcontinent used by more than 50 million people. In spite of its rich history, popularity and usefulness, not much research efforts have been made to achieve human level accuracy in case of Odia OCR. This review is expected to serve a benchmark reference for research on Odia character recognition and inspire OCR research communities to make tangible impact on its growth. Here several preprocessing methodologies, segmentation approaches, feature extraction techniques and classifier models with their respective accuracies so far reported are critically reviewed, evaluated and compared. The shortcomings and deficiencies in the current state-of-the-art are discussed in detail for each stage of character recognition. A new handwritten alphanumeric character database for Odia is created and reported in this paper in order to address the paucity of benchmark Odia database. From the existing research work, future research paradigms on Odia character recognition are suggested. We hope that such a comprehensive survey on Odia character recognition will serve its purpose of being a solid reference and help creating high accuracy Odia character recognition systems.", "title": "" }, { "docid": "a3ebadf449537b5df8de3c5ab96c74cb", "text": "Do conglomerate firms have the ability to allocate resources efficiently across business segments? We address this question by comparing the performance of firms that follow passive benchmark strategies in their capital allocation process to those that actively deviate from those benchmarks. Using three measures of capital allocation style to capture various aspects of activeness, we show that active firms have a lower average industry-adjusted profitability than passive firms. This result is robust to controlling for potential endogeneity using matching analysis and regression analysis with firm fixed effects. Moreover, active firms obtain lower valuation and lower excess stock returns in subsequent periods. Our findings suggest that, on average, conglomerate firms that actively allocate resources across their business segments do not do so efficiently and that the stock market does not fully incorporate information revealed in the internal capital allocation process. Guedj and Huang are from the McCombs School of Business, University of Texas at Austin. Guedj: guedj@mail.utexas.edu and (512) 471-5781. Huang: jennifer.huang@mccombs.utexas.edu and (512) 232-9375. Sulaeman is from the Cox School of Business, Southern Methodist University, sulaeman@smu.edu and (214) 768-8284. The authors thank Alexander Butler, Amar Gande, Mark Leary, Darius Miller, Maureen O’Hara, Owen Lamont, Gordon Phillips, Mike Roberts, Oleg Rytchkov, Gideon Saar, Zacharias Sautner, Clemens Sialm, Rex Thompson, Sheridan Titman, Yuhai Xuan, participants at the Financial Research Association meeting and seminars at Cornell University, Southern Methodist University, the University of Texas at Austin, and the University of Texas at Dallas for their helpful comments.", "title": "" }, { "docid": "ab3fb8980fa8d88e348f431da3d21ed4", "text": "PIECE (Plant Intron Exon Comparison and Evolution) is a web-accessible database that houses intron and exon information of plant genes. 
PIECE serves as a resource for biologists interested in comparing intron-exon organization and provides valuable insights into the evolution of gene structure in plant genomes. Recently, we updated PIECE to a new version, PIECE 2.0 (http://probes.pw.usda.gov/piece or http://aegilops.wheat.ucdavis.edu/piece). PIECE 2.0 contains annotated genes from 49 sequenced plant species as compared to 25 species in the previous version. In the current version, we also added several new features: (i) a new viewer was developed to show phylogenetic trees displayed along with the structure of individual genes; (ii) genes in the phylogenetic tree can now be also grouped according to KOG (The annotation of Eukaryotic Orthologous Groups) and KO (KEGG Orthology) in addition to Pfam domains; (iii) information on intronless genes is now included in the database; (iv) a statistical summary of global gene structure information for each species and its comparison with other species was added; and (v) an improved GSDraw tool was implemented in the web server to enhance the analysis and display of gene structure. The updated PIECE 2.0 database will be a valuable resource for the plant research community for the study of gene structure and evolution.", "title": "" }, { "docid": "44101197b4db055c667da3d86a820fd3", "text": "A simple approach for obstacle detection and collision avoidance of an autonomous flying quadcopter using low-cost ultrasonic sensors and simple data fusion is presented here. The approach has been implemented and tested in a self-developed quadcopter and its evaluation shows the general realizability as well as the drawbacks of this approach. In this paper, we propose a complete micro unmanned aerial vehicle (MUAV) platform—including hardware setup and processing pipeline—that is able to perceive obstacles in (almost) all directions in its surrounding. The quadcopter is equipped with ultrasonic sensors. All signals from the sensors are processed by an Arduino microcontroller board [5]. Output from the Arduino microcontroller board is used to control the quadcopter propellers [5]. Keywords— Obstacle Detection, Collision Avoidance, PID controller programming, components.", "title": "" }, { "docid": "c0cec61d37c4e0fe1fa82f8c182c5fc7", "text": "PURPOSE OF REVIEW\nCompassion has been recognized as a key aspect of high-quality healthcare, particularly in palliative care. This article provides a general review of the current understanding of compassion in palliative care and summarizes emergent compassionate initiatives in palliative care at three interdependent levels: compassion for patients, compassion in healthcare professionals, and compassionate communities at the end of life.\n\n\nRECENT FINDINGS\nCompassion is a constructive response to suffering that enhances treatment outcomes, fosters the dignity of the recipient, and provides self-care for the giver. Patients and healthcare professionals value compassion and perceive a general lack of compassion in healthcare systems. Compassion for patients and for professionals' self-care can be trained and implemented top-down (institutional policies) and bottom-up (compassion training).
'Compassionate communities' is an important emerging movement that complements regular healthcare and social services with a community-level approach to offer compassionate care for people at the end of life.\n\n\nSUMMARY\nCompassion can be enhanced through diverse methodologies at the organizational, professional, and community levels. This enhancement of compassion has the potential to improve quality of palliative care treatments, enhance healthcare providers' satisfaction, and reduce healthcare costs.", "title": "" }, { "docid": "07b2355844efc85862fb5b8122be6edf", "text": "As with other types of evidence, the courts make no presumption that digital evidence is reliable without some evidence of empirical testing in relation to the theories and techniques associated with its production. The issue of reliability means that courts pay close attention to the manner in which electronic evidence has been obtained and in particular the process in which the data is captured and stored. Previous process models have tended to focus on one particular area of digital forensic practice, such as law enforcement, and have not incorporated a formal description. We contend that this approach has prevented the establishment of generally-accepted standards and processes that are urgently needed in the domain of digital forensics. This paper presents a generic process model as a step towards developing such a generally-accepted standard for a fundamental digital forensic activity–the acquisition of digital evidence.", "title": "" }, { "docid": "26439bd538c8f0b5d6fba3140e609aab", "text": "A planar antenna with a broadband feeding structure is presented and analyzed for ultrawideband applications. The proposed antenna consists of a suspended radiator fed by an n-shape microstrip feed. Study shows that this antenna achieves an impedance bandwidth from 3.1-5.1 GHz (48%) for a reflection coefficient of |S11| < -10 dB, and an average gain of 7.7 dBi. Stable boresight radiation patterns are achieved across the entire operating frequency band, by suppressing the high order mode resonances. This design exhibits good mechanical tolerance and manufacturability.", "title": "" }, { "docid": "b5a349b6d805c2b5afac86bfe22050df", "text": "By setting apart the two functions of a support vector machine: separation of points by a nonlinear surface in the original space of patterns, and maximizing the distance between separating planes in a higher dimensional space, we are able to define indefinite, possibly discontinuous, kernels, not necessarily inner product ones, that generate highly nonlinear separating surfaces. Maximizing the distance between the separating planes in the higher dimensional space is surrogated by support vector suppression, which is achieved by minimizing any desired norm of support vector multipliers. The norm may be one induced by the separation kernel if it happens to be positive definite, or a Euclidean or a polyhedral norm. The latter norm leads to a linear program whereas the former norms lead to convex quadratic programs, all with an arbitrary separation kernel. A standard support vector machine can be recovered by using the same kernel for separation and support vector suppression. On a simple test example, all models perform equally well when a positive definite kernel is used.
When a negative definite kernel is used, we are unable to solve the nonconvex quadratic program associated with a conventional support vector machine, while all other proposed models remain convex and easily generate a surface that separates all given points.", "title": "" }, { "docid": "71cac5680dafbc3c56dbfffa4472b67a", "text": "Three-dimensional printing has significant potential as a fabrication method in creating scaffolds for tissue engineering. The applications of 3D printing in the field of regenerative medicine and tissue engineering are limited by the variety of biomaterials that can be used in this technology. Many researchers have developed novel biomaterials and compositions to enable their use in 3D printing methods. The advantages of fabricating scaffolds using 3D printing are numerous, including the ability to create complex geometries, porosities, co-culture of multiple cells, and incorporate growth factors. In this review, recently-developed biomaterials for different tissues are discussed. Biomaterials used in 3D printing are categorized into ceramics, polymers, and composites. Due to the nature of 3D printing methods, most of the ceramics are combined with polymers to enhance their printability. Polymer-based biomaterials are 3D printed mostly using extrusion-based printing and have a broader range of applications in regenerative medicine. The goal of tissue engineering is to fabricate functional and viable organs and, to achieve this, multiple biomaterials and fabrication methods need to be researched.", "title": "" }, { "docid": "0b0e9d5bedcb24a65a9a43b6b0875860", "text": "Purpose – This paper summarizes and discusses the results from the LIVING LAB design study, a project within the 7th Framework Programme of the European Union. The aim of this project was to develop the conceptual design of the LIVING LAB Research Infrastructure that will be used to research human interaction with, and stimulate the adoption of, sustainable, smart and healthy innovations around the home. Design/methodology/approach – A LIVING LAB is a combined lab-/household system, analysing existing product-service-systems as well as technical and socioeconomic influences focused on the social needs of people, aiming at the development of integrated technical and social innovations and simultaneously promoting the conditions of sustainable development (highest resource efficiency, highest user orientation, etc.). This approach allows the development and testing of sustainable domestic technologies, while putting the user on centre stage. Findings – As this paper discusses the design study, no actual findings can be presented here but the focus is on presenting the research approach. Originality/value – The two elements (real homes and living laboratories) of this approach are what make the LIVING LAB research infrastructure unique. The research conducted in LIVING LAB will be innovative in several respects. First, it will contribute to market innovation by producing breakthroughs in sustainable domestic technologies that will be easy to install, user friendly and that meet environmental performance standards in real life.
Second, research from LIVING LAB will contribute to innovation in practice by pioneering new forms of in-context, user-centred research, including long-term and cross-cultural research.", "title": "" }, { "docid": "5e0cff7f2b8e5aa8d112eacf2f149d60", "text": "Theories in AI fall into two broad categories: mechanism theories and content theories. Ontologies are content theories about the sorts of objects, properties of objects, and relations between objects that are possible in a specified domain of knowledge. They provide potential terms for describing our knowledge about the domain. In this article, we survey the recent development of the field of ontologies in AI. We point to the somewhat different roles ontologies play in information systems, natural-language understanding, and knowledge-based systems. Most research on ontologies focuses on what one might characterize as domain factual knowledge, because knowledge of that type is particularly useful in natural-language understanding. There is another class of ontologies that are important in KBS—one that helps in sharing knowledge about reasoning strategies or problem-solving methods. In a follow-up article, we will focus on method ontologies.", "title": "" }, { "docid": "60bdd255a19784ed2d19550222e61b69", "text": "Haptic feedback on touch-sensitive displays provides significant benefits in terms of reducing error rates, increasing interaction speed and minimizing visual distraction. This particularly holds true for multitasking situations such as the interaction with mobile devices or touch-based in-vehicle systems. In this paper, we explore how the interaction with tactile touchscreens can be modeled and enriched using a 2+1 state transition model. The model expands an approach presented by Buxton. We present HapTouch -- a force-sensitive touchscreen device with haptic feedback that allows the user to explore and manipulate interactive elements using the sense of touch. We describe the results of a preliminary quantitative study to investigate the effects of tactile feedback on the driver's visual attention, driving performance and operating error rate. In particular, we focus on how active tactile feedback allows the accurate interaction with small on-screen elements during driving. Our results show significantly reduced error rates and input time when haptic feedback is given.", "title": "" }, { "docid": "7256d6c5bebac110734275d2f985ab31", "text": "The location-based social networks (LBSN) enable users to check in their current location and share it with other users. The accumulated check-in data can be employed for the benefit of users by providing personalized recommendations. In this paper, we propose a context-aware location recommendation system for LBSNs using a random walk approach. Our proposed approach considers the current context (i.e., current social relations, personal preferences and current location) of the user to provide personalized recommendations. We build a graph model of LBSNs for performing a random walk approach with restart. Random walk is performed to calculate the recommendation probabilities of the nodes. A list of locations are recommended to users after ordering the nodes according to the estimated probabilities. We compare our algorithm, CLoRW, with popularity-based, friend-based and expert-based baselines, user-based collaborative filtering approach and a similar work in the literature.
According to experimental results, our algorithm outperforms these approaches in all of the test cases.", "title": "" }, { "docid": "d91e3f1a92052d48b8033e3d9c3dd695", "text": "This work investigates the impact of syntactic features in a completely unsupervised semantic relation extraction experiment. Automated relation extraction deals with identifying semantic relation instances in a text and classifying them according to the type of relation. This task is essential in information and knowledge extraction and in knowledge base population. Supervised relation extraction systems rely on annotated examples [ , – , ] and extract di erent kinds of features from the training data, and eventually from external knowledge sources. The types of extracted relations are necessarily limited to a pre-defined list. In Open Information Extraction (OpenIE) [ , ] relation types are inferred directly from the data: concept pairs representing the same relation are grouped together and relation labels can be generated from context segments or through labeling by domain experts [ , , ]. A commonly used method [ , ] is to represent entity couples by a pair-pattern matrix, and cluster relation instances according to the similarity of their distribution over patterns. Pattern-based approaches [ , , , , ] typically use lexical context patterns, assuming that the semantic relation between two entities is explicitly mentioned in the text. Patterns can be defined manually [ ], obtained by Latent Relational Analysis [ ], or from a corpus by sequential pattern mining [ , , ]. Previous works, especially in the biomedical domain, have shown that not only lexical patterns, but also syntactic dependency trees can be beneficial in supervised and semi-supervised relation extraction [ , , – ]. Early experiments on combining lexical patterns with di erent types of distributional information in unsupervised relation clustering did not bring significant improvement [ ]. The underlying di culty is that while supervised classifiers can learn to weight attributes from di erent sources, it is not trivial to combine di erent types of features in a single clustering feature space. In our experiments, we propose to combine syntactic features with sequential lexical patterns for unsupervised clustering of semantic relation instances in the context of (NLP-related) scientific texts. We replicate the experiments of [ ] and augment them with dependency-based syntactic features. We adopt a pairpattern matrix for clustering relation instances. The task can be described as follows: if a1, a2, b1, b2 are pre-annotated domain concepts extracted from a corpus, we would like to classify concept pairs a = (a1, a2) and b = (b1, b2) in homogeneous groups according to their semantic relation. We need an e cient", "title": "" } ]
scidocsrr
e8b5b712825b890324dc3b5e1fdf2bca
Automatic Number Plate Recognition (ANPR) system for Indian conditions
[ { "docid": "083f43f1cc8fe2ad186567f243ee04de", "text": "We consider the task of recognition of Australian vehicle number plates (also called license plates or registration plates in other countries). A system for Australian number plate recognition must cope with wide variations in the appearance of the plates. Each state uses its own range of designs with font variations between the designs. There are special designs issued for significant events such as the Sydney 2000 Olympic Games. Also, vehicle owners may place the plates inside glass covered frames or use plates made of non-standard materials. These issues compound the complexity of automatic number plate recognition, making existing approaches inadequate. We have developed a system that incorporates a novel combination of image processing and artificial neural network technologies to successfully locate and read Australian vehicle number plates in digital images. Commercial application of the system is envisaged.", "title": "" } ]
[ { "docid": "5d6cb50477423bf9fc1ea6c27ad0f1b9", "text": "We propose a framework for general probabilistic multi-step time series regression. Specifically, we exploit the expressiveness and temporal nature of Sequence-to-Sequence Neural Networks (e.g. recurrent and convolutional structures), the nonparametric nature of Quantile Regression and the efficiency of Direct Multi-Horizon Forecasting. A new training scheme, forking-sequences, is designed for sequential nets to boost stability and performance. We show that the approach accommodates both temporal and static covariates, learning across multiple related series, shifting seasonality, future planned event spikes and coldstarts in real life large-scale forecasting. The performance of the framework is demonstrated in an application to predict the future demand of items sold on Amazon.com, and in a public probabilistic forecasting competition to predict electricity price and load.", "title": "" }, { "docid": "55d7db89621dc57befa330c6dea823bf", "text": "In this paper we propose CUDA-based implementations of two 3D point sets registration algorithms: Soft assign and EM-ICP. Both algorithms are known for being time demanding, even on modern multi-core CPUs. Our GPUbased implementations vastly outperform CPU ones. For instance, our CUDA EM-ICP aligns 5000 points in less than 7 seconds on a GeForce 8800GT, while the same implementation in OpenMP on an Intel Core 2 Quad would take 7 minutes.", "title": "" }, { "docid": "566e2777b6a17d333c3cf2e5438cde5c", "text": "The eventual goal of a language model is to accurately predict the value of a missing word given its context. We present an approach to word prediction that is based on learning a representation for each word as a function of words and linguistics predicates in its context. This approach raises a few new questions that we address. First, in order to learn good word representations it is necessary to use an expressive representation of the context. We present a way that uses external knowledge to generate expressive context representations, along with a learning method capable of handling the large number of features generated this way that can, potentially, contribute to each prediction. Second, since the number of words “competing” for each prediction is large, there is a need to “focus the attention” on a smaller subset of these. We exhibit the contribution of a “focus of attention” mechanism to the performance of the word predictor. Finally, we describe a large scale experimental study in which the approach presented is shown to yield significant improvements in word prediction tasks.", "title": "" }, { "docid": "acf390e07ab773d3f82ba4f8e807669a", "text": "The increasing popularity of server usage has brought a plenty of anomaly log events, which have threatened a vast collection of machines. Recognizing and categorizing the anomalous events thereby is a much salient work for our systems, especially the ones generate the massive amount of data and harness it for technology value creation and business development. To assist in focusing on the classification and the prediction of anomaly events, and gaining critical insights from system event records, we propose a novel log preprocessing method which is very effective to filter abundant information and retain critical characteristics. 
Additionally, a competitive approach for automated classification of anomalous events detected from the distributed system logs with the state-ofthe-art deep (Convolutional Neural Network) architectures is proposed in this paper. We measure a series of deep CNN algorithms with varied hyper-parameter combinations by using standard evaluation metrics, the results of our study reveals the advantages and potential capabilities of the proposed deep CNN models for anomaly event classification tasks on real-world systems. The optimal classification precision of our approach is 98.14%, which surpasses the popular traditional machine learning methods. Keywords-anomaly event classification; deep learning; convolutional neural network; log preprocessing; distributed system", "title": "" }, { "docid": "43c0a08c4acebbc764fe708728026bf7", "text": "We present a fast, fully automatic morphing algorithm for creating simulatable flesh and muscle models for human and humanoid faces. Current techniques for creating such models require a significant amount of time and effort, making them infeasible or impractical. In fact, the vast majority of research papers use only a floating mask with no inner lips, teeth, tongue, eyelids, eyes, head, ears, etc.---and even those that build the full visual model would typically still lack the cranium, jaw, muscles, and other internal anatomy. Our method requires only the target surface mesh as input and can create a variety of models in only a few hours with no user interaction. We start with a symmetric, high resolution, anatomically accurate template model that includes auxiliary information such as feature points and curves. Then given a target mesh, we automatically orient it to the template, detect feature points, and use these to bootstrap the detection of corresponding feature curves. These curve correspondences are used to deform the surface mesh of the template model to match the target mesh. Then, the calculated displacements of the template surface mesh are used to drive a three-dimensional morph of the full template model including all interior anatomy. The resulting target model can be simulated to generate a large range of expressions that are consistent across characters using the same muscle activations. Full automation of this entire process makes it readily available to a wide range of users.", "title": "" }, { "docid": "cc63fa999bed5abf05a465ae7313c053", "text": "In this paper, we consider the development of a rotorcraft micro aerial vehicle (MAV) system capable of vision-based state estimation in complex environments. We pursue a systems solution for the hardware and software to enable autonomous flight with a small rotorcraft in complex indoor and outdoor environments using only onboard vision and inertial sensors. As rotorcrafts frequently operate in hover or nearhover conditions, we propose a vision-based state estimation approach that does not drift when the vehicle remains stationary. The vision-based estimation approach combines the advantages of monocular vision (range, faster processing) with that of stereo vision (availability of scale and depth information), while overcoming several disadvantages of both. Specifically, our system relies on fisheye camera images at 25 Hz and imagery from a second camera at a much lower frequency for metric scale initialization and failure recovery. This estimate is fused with IMU information to yield state estimates at 100 Hz for feedback control. 
We show indoor experimental results with performance benchmarking and illustrate the autonomous operation of the system in challenging indoor and outdoor environments.", "title": "" }, { "docid": "00f106ff157e515ed8fde53fdaf1491e", "text": "In this paper we present a technique to train neural network models on small amounts of data. Current methods for training neural networks on small amounts of rich data typically rely on strategies such as fine-tuning a pre-trained neural network or the use of domain-specific hand-engineered features. Here we take the approach of treating network layers, or entire networks, as modules and combine pre-trained modules with untrained modules, to learn the shift in distributions between data sets. The central impact of using a modular approach comes from adding new representations to a network, as opposed to replacing representations via fine-tuning. Using this technique, we are able surpass results using standard fine-tuning transfer learning approaches, and we are also able to significantly increase performance over such approaches when using smaller amounts of data.", "title": "" }, { "docid": "98a700116ba846945927a0dd8e27586b", "text": "Automatic recording of user behavior within a system (instrumentation) to develop and test theories has a rich history in psychology and system design. Often, researchers analyze instrumented behavior in isolation from other data. The problem with collecting instrumented behaviors without attitudinal, demographic, and contextual data is that researchers have no way to answer the 'why' behind the 'what'. We have combined the collection and analysis of behavioral instrumentation with other HCI methods to develop a system for Tracking Real-Time User Experience (TRUE). Using two case studies as examples, we demonstrate how we have evolved instrumentation methodology and analysis to extensively improve the design of video games. It is our hope that TRUE is adopted and adapted by the broader HCI community, becoming a useful tool for gaining deep insights into user behavior and improvement of design for other complex systems.", "title": "" }, { "docid": "8093219e7e2b4a7067f8d96118a5ea93", "text": "We model knowledge graphs for their completion by encoding each entity and relation into a numerical space. All previous work including Trans(E, H, R, and D) ignore the heterogeneity (some relations link many entity pairs and others do not) and the imbalance (the number of head entities and that of tail entities in a relation could be different) of knowledge graphs. In this paper, we propose a novel approach TranSparse to deal with the two issues. In TranSparse, transfer matrices are replaced by adaptive sparse matrices, whose sparse degrees are determined by the number of entities (or entity pairs) linked by relations. In experiments, we design structured and unstructured sparse patterns for transfer matrices and analyze their advantages and disadvantages. We evaluate our approach on triplet classification and link prediction tasks. Experimental results show that TranSparse outperforms Trans(E, H, R, and D) significantly, and achieves state-ofthe-art performance.", "title": "" }, { "docid": "36fd1784579212b1df6248bfee7cc18a", "text": "The 'invisible hand' is a term originally coined by Adam Smith in The Theory of Moral Sentiments to describe the forces of self-interest, competition and supply and demand that regulate the resources in society. 
This metaphor continues to be used by economists to describe the self-regulating nature of a market economy. The same metaphor can be used to describe the RHO-specific guanine nucleotide dissociation inhibitor (RHOGDI) family, which operates in the background, as an invisible hand, using similar forces to regulate the RHO GTPase cycle.", "title": "" }, { "docid": "ee141b7fd5c372fb65d355fe75ad47af", "text": "As 100-Gb/s coherent systems based on polarization- division multiplexed quadrature phase shift keying (PDM-QPSK), with aggregate wavelength-division multiplexed (WDM) capacities close to 10 Tb/s, are getting widely deployed, the use of high-spectral-efficiency quadrature amplitude modulation (QAM) to increase both per-channel interface rates and aggregate WDM capacities is the next evolutionary step. In this paper we review high-spectral-efficiency optical modulation formats for use in digital coherent systems. We look at fundamental as well as at technological scaling trends and highlight important trade-offs pertaining to the design and performance of coherent higher-order QAM transponders.", "title": "" }, { "docid": "271ef45008e7fa92673b1c0076ed29e9", "text": "Multidocument summarization has gained popularity in many real world applications because vital information can be extracted within a short time. Extractive summarization aims to generate a summary of a document or a set of documents by ranking sentences and the ranking results rely heavily on the quality of sentence features. However, almost all previous algorithms require hand-crafted features for sentence representation. In this paper, we leverage on word embedding to represent sentences so as to avoid the intensive labor in feature engineering. An enhanced convolutional neural networks (CNNs) termed multiview CNNs is successfully developed to obtain the features of sentences and rank sentences jointly. Multiview learning is incorporated into the model to greatly enhance the learning capability of original CNN. We evaluate the generic summarization performance of our proposed method on five Document Understanding Conference datasets. The proposed system outperforms the state-of-the-art approaches and the improvement is statistically significant shown by paired $t$ -test.", "title": "" }, { "docid": "35f439b86c07f426fd127823a45ffacf", "text": "The paper concentrates on the fundamental coordination problem that requires a network of agents to achieve a specific but arbitrary formation shape. A new technique based on complex Laplacian is introduced to address the problems of which formation shapes specified by inter-agent relative positions can be formed and how they can be achieved with distributed control ensuring global stability. Concerning the first question, we show that all similar formations subject to only shape constraints are those that lie in the null space of a complex Laplacian satisfying certain rank condition and that a formation shape can be realized almost surely if and only if the graph modeling the inter-agent specification of the formation shape is 2-rooted. Concerning the second question, a distributed and linear control law is developed based on the complex Laplacian specifying the target formation shape, and provable existence conditions of stabilizing gains to assign the eigenvalues of the closed-loop system at desired locations are given. 
Moreover, we show how the formation shape control law is extended to achieve a rigid formation if a subset of knowledgable agents knowing the desired formation size scales the formation while the rest agents do not need to re-design and change their control laws.", "title": "" }, { "docid": "b6376259827dfc04f7c7c037631443f3", "text": "In this brief, a low-power flip-flop (FF) design featuring an explicit type pulse-triggered structure and a modified true single phase clock latch based on a signal feed-through scheme is presented. The proposed design successfully solves the long discharging path problem in conventional explicit type pulse-triggered FF (P-FF) designs and achieves better speed and power performance. Based on post-layout simulation results using TSMC CMOS 90-nm technology, the proposed design outperforms the conventional P-FF design data-close-to-output (ep-DCO) by 8.2% in data-to-Q delay. In the mean time, the performance edges on power and power- delay-product metrics are 22.7% and 29.7%, respectively.", "title": "" }, { "docid": "a39828a79f276d94a4edf69c0317d801", "text": "The demand for high bit-rate service transmission is increasing for digital terrestrial broadcasting just like other transmission technologies. Multiple-input multiple-output (MIMO) transmission is one of the most promising techniques to fulfill this demand for high transmission rates. This paper introduces enhanced spatial multiplexing (eSM) scheme adopted as technical baseline for the DVB next generation handheld (DVB-NGH) system. Most technical challenges for MIMO transmission in broadcasting comes from the rate-2 spatial multiplexing performance degradation in high correlation channel condition, which frequently happens in broadcasting owing to the line-of-sight (LOS) channel conditions. The proposed eSM exploits a pre-coding matrix optimized to overcome this condition. When combined with phase hopping to avoid specific harmful condition, eSM minimizes the performance loss under high correlation channels whilst keeping maximum multiplexing gain over rich scattering channels.", "title": "" }, { "docid": "fb15c88052883b11a34d0911979c30a1", "text": "What explains patterns of compliance with and resistance to autocratic rule? This paper provides a theoretical framework for understanding how individuals living under dictatorship calibrate their political behaviors. I argue that the types of non-compliance observed in autocratic contexts differ depending on the intensity of expected punishment and the extent to which sanctions are directed at individuals, families or larger communities. Using data from documents captured by US forces during the 2003 invasion of Iraq, I use unanticipated political shocks to examine over-time discontinuities in citizen behavior in Iraq under Saddam Hussein during two distinct periods — before and after the First Gulf War and the associated Kurdish and Shi‘a anti-regime uprisings. Prior to 1991 and the establishment of a Kurdish autonomous zone in northern Iraq, severe repression and widespread use of collective punishment created the conditions for Iraqi Kurds to engage in a widespread anti-regime rebellion. Before 1991, Shi‘a Iraqis were able to express limited forms of political discontent; after 1991, however, Shi‘a were forced to publicly signal compliance while shifting to more private forms of anti-regime activity. 
While Iraqis living in and around Saddam Hussein’s hometown of Tikrit almost universally self-identified as Ba‘thists and enjoyed privileges as a result of close ties to the regime, Sunnis living in areas distant from Tikrit became increasingly estranged from the regime as international sanctions closed off economic opportunities. ∗Many thanks to the staff at the Library and Archives of the Hoover Institution and the W. Glenn Campbell and Rita Ricardo-Campbell National Fellows Program at the Hoover Institution.", "title": "" }, { "docid": "fde2aefec80624ff4bc21d055ffbe27b", "text": "Object detector with region proposal networks such as Fast/Faster R-CNN [1, 2] have shown the state-of-the art performance on several benchmarks. However, they have limited success for detecting small objects. We argue the limitation is related to insufficient performance of Fast R-CNN block in Faster R-CNN. In this paper, we propose a refining block for Fast R-CNN. We further merge the block and Faster R-CNN into a single network (RF-RCNN). The RF-RCNN was applied on plate and human detection in RoadView image that consists of high resolution street images (over 30M pixels). As a result, the RF-RCNN showed great improvement over the Faster-RCNN.", "title": "" }, { "docid": "81fc9abd3e2ad86feff7bd713cff5915", "text": "With the advance of the Internet, e-commerce systems have become extremely important and convenient to human being. More and more products are sold on the web, and more and more people are purchasing products online. As a result, an increasing number of customers post product reviews at merchant websites and express their opinions and experiences in any network space such as Internet forums, discussion groups, and blogs. So there is a large amount of data records related to products on the Web, which are useful for both manufacturers and customers. Mining product reviews becomes a hot research topic, and prior researches mostly base on product features to analyze the opinions. So mining product features is the first step to further reviews processing. In this paper, we present how to mine product features. The proposed extraction approach is different from the previous methods because we only mine the features of the product in opinion sentences which the customers have expressed their positive or negative experiences on. In order to find opinion sentence, a SentiWordNet-based algorithm is proposed. There are three steps to perform our task: (1) identifying opinion sentences in each review which is positive or negative via SentiWordNet; (2) mining product features that have been commented on by customers from opinion sentences; (3) pruning feature to remove those incorrect features. Compared to previous work, our experimental result achieves higher precision and recall.", "title": "" }, { "docid": "9b4c240bd55523360e92dbed26cb5dc2", "text": "CBT has been seen as an alternative to the unmanageable population of undergraduate students in Nigerian universities. This notwithstanding, the peculiar nature of some courses hinders its total implementation. This study was conducted to investigate the students’ perception of CBT for undergraduate chemistry courses in University of Ilorin. To this end, it examined the potential for using student feedback in the validation of assessment. A convenience sample of 48 students who had taken test on CBT in chemistry was surveyed and questionnaire was used for data collection. 
Data analysis demonstrated an auspicious characteristics of the target context for the CBT implementation as majority (95.8%) of students said they were competent with the use of computers and 75% saying their computer anxiety was only mild or low but notwithstanding they have not fully accepted the testing mode with only 29.2% in favour of it, due to the impaired validity of the test administration which they reported as being many erroneous chemical formulas, equations and structures in the test items even though they have nonetheless identified the achieved success the testing has made such as immediate scoring, fastness and transparency in marking. As quality of designed items improves and sufficient time is allotted according to the test difficulty, the test experience will become favourable for students and subsequently CBT will gain its validation in this particular context.", "title": "" }, { "docid": "1c8ac344f85ff4d4a711536841168b6a", "text": "Internet Protocol Television (IPTV) is an increasingly popular multimedia service which is used to deliver television, video, audio and other interactive content over proprietary IP-based networks. Video on Demand (VoD) is one of the most popular IPTV services, and is very important for IPTV providers since it represents the second most important revenue stream after monthly subscriptions. In addition to high-quality VoD content, profitable VoD service provisioning requires an enhanced content accessibility to greatly improve end-user experience. Moreover, it is imperative to offer innovative features to attract new customers and retain existing ones. To achieve this goal, IPTV systems typically employ VoD recommendation engines to offer personalized lists of VoD items that are potentially interesting to a user from a large amount of available titles. In practice, a good recommendation engine does not offer popular and well-known titles, but is rather able to identify interesting among less popular items which would otherwise be hard to find. In this paper we report our experience in building a VoD recommendation system. The presented evaluation shows that our recommendation system is able to recommend less popular items while operating under a high load of end-user requests.", "title": "" } ]
scidocsrr
47df39433a117fd8ae68e3f0787f5c47
Time prediction based on process mining
[ { "docid": "c36f2fd7bf8ef65bf443954e6be7107a", "text": "Process mining is a tool to extract non-trivial and useful information from process execution logs. These so-called event logs (also called audit trails, or transaction logs) are the starting point for various discovery and analysis techniques that help to gain insight into certain characteristics of the process. In this paper we use a combination of process mining techniques to discover multiple perspectives (namely, the control-flow, data, performance, and resource perspective) of the process from historic data, and we integrate them into a comprehensive simulation model. This simulation model is represented as a Coloured Petri net (CPN) and can be used to analyze the process, e.g., evaluate the performance of different alternative designs. The discovery of simulation models is explained using a running example. Moreover, the approach has been applied in two case studies; the workflows in two different municipalities in the Netherlands have been analyzed using a combination of process mining and simulation. Furthermore, the quality of the CPN models generated for the running example and the two case studies has been evaluated by comparing the original logs with the logs of the generated models.", "title": "" }, { "docid": "2c92948916257d9b164e7d65aa232d3e", "text": "Contemporary workflow management systems are driven by explicit process models, i.e., a completely specified workflow design is required in order to enact a given workflow process. Creating a workflow design is a complicated time-consuming process and typically, there are discrepancies between the actual workflow processes and the processes as perceived by the management. Therefore, we propose a technique for rediscovering workflow models. This technique uses workflow logs to discover the workflow process as it is actually being executed. The workflow log contains information about events taking place. We assume that these events are totally ordered and each event refers to one task being executed for a single case. This information can easily be extracted from transactional information systems (e.g., Enterprise Resource Planning systems such as SAP and Baan). The rediscovering technique proposed in this paper can deal with noise and can also be used to validate workflow processes by uncovering and measuring the discrepancies between prescriptive models and actual process executions.", "title": "" } ]
[ { "docid": "7250d1ea22aac1690799089d2ba1acd5", "text": "Music plays an important part in people’s lives to regulate their emotions throughout the day. We conducted an online user study to investigate how the emotional state relates to the use of emotionally laden music. We found among 359 participants that they in general prefer emotionally laden music that correspond with their emotional state. However, when looking at personality traits, different patterns emerged. We found that when in a negative emotional state, those who scored high on openness, extraversion, and agreeableness tend to cheer themselves up with happy music, while those who scored high on neuroticism tend to increase their worry with sad music. With our results we show general patterns of music usage, but also individual differences. Our results contribute to the improvement of applications such as recommender systems in order to provide tailored recommendations based on users’ personality and emotional state.", "title": "" }, { "docid": "76dc2077f52886ef7c16a9dd28084e6b", "text": "On the Internet, electronic tribes structured around consumer interests have been growing rapidly. To be effective in this new environment, managers must consider the strategic implications of the existence of different types of both virtual community and community participation. Contrasted with database-driven relationship marketing, marketers seeking success with consumers in virtual communities should consider that they: (1) are more active and discerning; (2) are less accessible to one-on-one processes, and (3) provide a wealth of valuable cultural information. Strategies for effectively targeting more desirable types of virtual communities and types of community members include: interaction-based segmentation, fragmentation-based segmentation, co-opting communities, paying-for-attention, and building networks by giving product away. Purchase Export Previous article Next article Check if you have access through your login credentials or your institution.", "title": "" }, { "docid": "4bc6533ecd463ae204823ec18e814039", "text": "Malaria is a major public health problem in many tropical and subtropical countries and the burden of this disease is getting worse, mainly due to the increasing resistance of Plasmodium falciparum against the widely available antimalarial drugs. There is an urgent need for discovery of new antimalarial agents. Herbal medicines for the treatment of various diseases including malaria are an important part of the cultural diversity and traditions of which Kenya's biodiversity has been an integral part. Two major antimalarial drugs widely used today came originally from indigenous medical systems, that is quinine and artemisinin, from Peruvian and Chinese ancestral treatments, respectively. Thus ethnopharmacology is a very important resource in which new therapies may be discovered. The present review is an analysis of ethnopharmacological publications on antimalarial therapies from some Kenyan medicinal plants.", "title": "" }, { "docid": "b5e66fbded6c7be46a8d7c724fd18be9", "text": "In augmented reality (AR), virtual objects and information are overlaid onto the user’s view of the physical world and can appear to become part of the real-world. Accurate registration of virtual objects is a key requirement for an effective and natural AR system, but misregistration can break the illusion of virtual objects being part of the real-world and disrupt immersion. 
End-to-end system latency severely impacts the quality of AR registration. In this research, we present a controlled study that aims at a deeper understanding of the effects of latency on virtual and real-world imagery and its influences on task performance in an AR training task. We utilize an AR simulation approach, in which an outdoor AR training task is simulated in a high-fidelity virtual reality (VR) system. The real and augmented portions of the AR training scenarios are simulated in VR, affording us detailed control over a variety of immersion parameters and the ability to explore the effects of different types of simulated latency. We utilized a representative task inspired by outdoor AR military training systems to compare various AR system configurations, including optical see-through and video see-through setups with both matched and unmatched levels of real and virtual objects latency. Our findings indicate that users are able to perform significantly better when virtual and real-world latencies are matched (as in the case of simulated video see-through AR with perfect augmentation-to-real-world registration). Unequal levels of latency led to reduction in performance, even when overall latency levels were lower compared to the matched case. The relative results hold up with increased overall latency.", "title": "" }, { "docid": "b40e5cf2b979c51f87c0e517f8578fae", "text": "The osteopathic treatment of the fascia involves several techniques, each aimed at allowing the various layers of the connective system to slide over each other, improving the responses of the afferents in case of dysfunction. However, before becoming acquainted with a method, one must be aware of the structure and function of the tissue that needs treating, in order to not only better understand the manual approach, but also make a more conscious choice of the therapeutic technique to employ, in order to adjust the treatment to the specific needs of the patient. This paper examines the current literature regarding the function and structure of the fascial system and its foundation, that is, the fibroblasts. These connective cells have many properties, including the ability to contract and to communicate with one another. They play a key role in the transmission of the tension produced by the muscles and in the management of the interstitial fluids. They are a source of nociceptive and proprioceptive information as well, which is useful for proper functioning of the body system. Therefore, the fibroblasts are an invaluable instrument, essential to the understanding of the therapeutic effects of osteopathic treatment. Scientific research should make greater efforts to better understand their functioning and relationships.", "title": "" }, { "docid": "c2da932aec6f3d8c6fddc9aaa994c9cd", "text": "As more companies embrace the concepts of sustainable development, there is a need to bring the ideas inherent in eco-efficiency and the \" triple-bottom line \" thinking down to a practical implementation level. Putting this concept into operation requires an understanding of the key indicators of sustainability and how they can be measured to determine if, in fact, progress is being made. Sustainability metrics are intended as simple yardsticks that are applicable across industry. The primary objective of this approach is to improve internal management decision-making with respect to the sustainability of processes, products and services. 
This approach can be used to make better decisions at any stage of the stage-gate process: from identification of an innovation to design to manufacturing and ultimately to exiting a business. More specifically, sustainability metrics can assist decision makers in setting goals, benchmarking, and comparing alternatives such as different suppliers, raw materials, and improvement options from the sustainability perspective. This paper provides a review on the early efforts and recent progress in the development of sustainability metrics. The experience of BRIDGES to Sustainability™, a not-for-profit organization, in testing, adapting, and refining the sustainability metrics are summarized. Basic and complementary metrics under six impact categories: material, energy, water, solid wastes, toxic release, and pollutant effects, are discussed. The development of BRIDGESworks™ Metrics, a metrics management software tool, is also presented. The software was designed to be both easy to use and flexible. It incorporates a base set of metrics and their heuristics for calculation, as well as a robust set of impact assessment data for use in identifying pollutant effects. While providing a metrics management starting point, the user has the option of creating other metrics defined by the user. The sustainability metrics work at BRIDGES to Sustainability™ was funded partially by the U.S. Department of Energy through a subcontract with the American Institute of Chemical Engineers and through corporate pilots.", "title": "" }, { "docid": "a2ea1e604b484758ec2316aeb6b93338", "text": "Virtual customer communities enable firms to establish distributed innovation models that involve varied customer roles in new product development. In this article I use a multitheorotic lens to examine the design of such virtual customer environments, focusing on four underlying theoretical themes (interaction pattern, knowledge creation, customer motivation, and virtual customer community-new product development team integration) and deriving their implications for virtual customer environment design. I offer propositions that relate specific virtual customer environment design elements to successful customer value creation, and thereby to new product development success.", "title": "" }, { "docid": "d08f6780b44c9272035a3263a0553101", "text": "A novel two-stage Doherty power amplifier (PA) was designed and fully integrated on a 0.25-μm GaN on SiC monolithic microwave integrated circuit die with a dimension of 3.3 × 2.6 mm2 to build small-cell base stations. An asymmetric Doherty configuration was adopted for the power stage with the reversed uneven input power splitting network for better performance. To improve linearity, the third-order intermodulation distortion (IMD3) was minimized by cancelling IMD3s between the carrier and peaking amplifiers. The two-section quarterwave transformer was used for more uniform in-band frequency responses. The fabricated PA showed a power-added efficiency of 46.8% and a power gain of 30.9 dB at an average power of 35.1 dBm for a 2.655-GHz long-term evolution signal with a 7.1-dB peak-to-average power ratio. The adjacent channel leakage ratio was -40.2 dBc without any linearization, and it was lowered to -49.3 dBc by a digital pre-distortion linearization.", "title": "" }, { "docid": "33ee29c4ccab435b8b64058b584e13cd", "text": "In this paper, we present a music recommendation system, which provides a personalized service of music recommendation. 
The polyphonic music objects of MIDI format are first analyzed for deriving information for music grouping. For this purpose, the representative track of each polyphonic music object is first determined, and then six features are extracted from this track for proper music grouping. Moreover, the user access histories are analyzed to derive the profiles of user interests and behaviors for user grouping. The content-based, collaborative, and statistics-based recommendation methods are proposed based on the favorite degrees of the users to the music groups, and the user groups they belong to. A series of experiments are carried out to show that our approach performs well.", "title": "" }, { "docid": "9f5998ebc2457c330c29a10772d8ee87", "text": "Fuzzy hashing is a known technique that has been adopted to speed up malware analysis processes. However, Hashing has not been fully implemented for malware detection because it can easily be evaded by applying a simple obfuscation technique such as packing. This challenge has limited the usage of hashing to triaging of the samples based on the percentage of similarity between the known and unknown. In this paper, we explore the different ways fuzzy hashing can be used to detect similarities in a file by investigating particular hashes of interest. Each hashing method produces independent but related interesting results which are presented herein. We further investigate combination techniques that can be used to improve the detection rates in hashing methods. Two such evidence combination theory based methods are applied in this work in order propose a novel way of combining the results achieved from different hashing algorithms. This study focuses on file and section Ssdeep hashing, PeHash and Imphash techniques to calculate the similarity of the Portable Executable files. Our results show that the detection rates are improved when evidence combination techniques are used.", "title": "" }, { "docid": "bebd8b3ff0430258291de91d756eeb1b", "text": "Infection of cells by microorganisms activates the inflammatory response. The initial sensing of infection is mediated by innate pattern recognition receptors (PRRs), which include Toll-like receptors, RIG-I-like receptors, NOD-like receptors, and C-type lectin receptors. The intracellular signaling cascades triggered by these PRRs lead to transcriptional expression of inflammatory mediators that coordinate the elimination of pathogens and infected cells. However, aberrant activation of this system leads to immunodeficiency, septic shock, or induction of autoimmunity. In this Review, we discuss the role of PRRs, their signaling pathways, and how they control inflammatory responses.", "title": "" }, { "docid": "f3f2184b1fd6a62540f8547df3014b44", "text": "Social Media Analytics is an emerging interdisciplinary research field that aims on combining, extending, and adapting methods for analysis of social media data. On the one hand it can support IS and other research disciplines to answer their research questions and on the other hand it helps to provide architectural designs as well as solution frameworks for new social media-based applications and information systems. 
The authors suggest that IS should contribute to this field and help to develop and process an interdisciplinary research agenda.", "title": "" }, { "docid": "f8f36ef5822446478b154c9d98847070", "text": "The objective of this research is to improve traffic safety through collecting and distributing up-to-date road surface condition information using mobile phones. Road surface condition information is seen useful for both travellers and for the road network maintenance. The problem we consider is to detect road surface anomalies that, when left unreported, can cause wear of vehicles, lesser driving comfort and vehicle controllability, or an accident. In this work we developed a pattern recognition system for detecting road condition from accelerometer and GPS readings. We present experimental results from real urban driving data that demonstrate the usefulness of the system. Our contributions are: 1) Performing a throughout spectral analysis of tri-axis acceleration signals in order to get reliable road surface anomaly labels. 2) Comprehensive preprocessing of GPS and acceleration signals. 3) Proposing a speed dependence removal approach for feature extraction and demonstrating its positive effect in multiple feature sets for the road surface anomaly detection task. 4) A framework for visually analyzing the classifier predictions over the validation data and labels.", "title": "" }, { "docid": "0cc25de8ea70fe1fd85824e8f3155bf7", "text": "When integrating information from multiple websites, the same data objects can exist in inconsistent text formats across sites, making it difficult to identify matching objects using exact text match. We have developed an object identification system called Active Atlas, which compares the objects’ shared attributes in order to identify matching objects. Certain attributes are more important for deciding if a mapping should exist between two objects. Previous methods of object identification have required manual construction of object identification rules or mapping rules for determining the mappings between objects. This manual process is time consuming and error-prone. In our approach, Active Atlas learns to tailor mapping rules, through limited user input, to a specific application domain. The experimental results demonstrate that we achieve higher accuracy and require less user involvement than previous methods across various application domains.", "title": "" }, { "docid": "461ec14463eb20962ef168de781ac2a2", "text": "Local descriptors based on the image noise residual have proven extremely effective for a number of forensic applications, like forgery detection and localization. Nonetheless, motivated by promising results in computer vision, the focus of the research community is now shifting on deep learning. In this paper we show that a class of residual-based descriptors can be actually regarded as a simple constrained convolutional neural network (CNN). Then, by relaxing the constraints, and fine-tuning the net on a relatively small training set, we obtain a significant performance improvement with respect to the conventional detector.", "title": "" }, { "docid": "201f576423ed88ee97d1505b6d5a4d3f", "text": "The effectiveness of the treatment of breast cancer depends on its timely detection. An early step in the diagnosis is the cytological examination of breast material obtained directly from the tumor. 
This work reports on advances in computer-aided breast cancer diagnosis based on the analysis of cytological images of fine needle biopsies to characterize these biopsies as either benign or malignant. Instead of relying on the accurate segmentation of cell nuclei, the nuclei are estimated by circles using the circular Hough transform. The resulting circles are then filtered to keep only high-quality estimations for further analysis by a support vector machine which classifies detected circles as correct or incorrect on the basis of texture features and the percentage of nuclei pixels according to a nuclei mask obtained using Otsu's thresholding method. A set of 25 features of the nuclei is used in the classification of the biopsies by four different classifiers. The complete diagnostic procedure was tested on 737 microscopic images of fine needle biopsies obtained from patients and achieved 98.51% effectiveness. The results presented in this paper demonstrate that a computerized medical diagnosis system based on our method would be effective, providing valuable, accurate diagnostic information.", "title": "" }, { "docid": "1c796b924f558ad6b883e93ad2575b72", "text": "Researchers have obtained conflicting results about the role of prosocial motivation in persistence, performance, and productivity. To resolve this discrepancy, I draw on self-determination theory, proposing that prosocial motivation is most likely to predict these outcomes when it is accompanied by intrinsic motivation. Two field studies support the hypothesis that intrinsic motivation moderates the association between prosocial motivation and persistence, performance, and productivity. In Study 1, intrinsic motivation strengthened the relationship between prosocial motivation and the overtime hour persistence of 58 firefighters. In Study 2, intrinsic motivation strengthened the relationship between prosocial motivation and the performance and productivity of 140 fundraising callers. Callers who reported high levels of both prosocial and intrinsic motivations raised more money 1 month later, and this moderated association was mediated by a larger number of calls made. I discuss implications for theory and research on work motivation.", "title": "" }, { "docid": "006793685095c0772a1fe795d3ddbd76", "text": "Legislators, designers of legal information systems, as well as citizens face often problems due to the interdependence of the laws and the growing number of references needed to interpret them. In this paper, we introduce the ”Legislation Network” as a novel approach to address several quite challenging issues for identifying and quantifying the complexity inside the Legal Domain. We have collected an extensive data set of a more than 60-year old legislation corpus, as published in the Official Journal of the European Union, and we further analysed it as a complex network, thus gaining insight into its topological structure. Among other issues, we have performed a temporal analysis of the evolution of the Legislation Network, as well as a robust resilience test to assess its vulnerability under specific cases that may lead to possible breakdowns. Results are quite promising, showing that our approach can lead towards an enhanced explanation in respect to the structure and evolution of legislation properties.", "title": "" }, { "docid": "9081cb169f74b90672f84afa526f40b3", "text": "The paper presents an analysis of the main mechanisms of decryption of SSL/TLS traffic. 
Methods and technologies for detecting malicious activity in encrypted traffic that are used by leading companies are also considered. Also, the approach for intercepting and decrypting traffic transmitted over SSL/TLS is developed, tested and proposed. The developed approach has been automated and can be used for remote listening of the network, which will allow to decrypt transmitted data in a mode close to real time.", "title": "" }, { "docid": "f0846b4e74110ed469704c4a24407cc6", "text": "Presently, a very large number of public and private data sets are available from local governments. In most cases, they are not semantically interoperable and a huge human effort would be needed to create integrated ontologies and knowledge base for smart city. Smart City ontology is not yet standardized, and a lot of research work is needed to identify models that can easily support the data reconciliation, the management of the complexity, to allow the data reasoning. In this paper, a system for data ingestion and reconciliation of smart cities related aspects as road graph, services available on the roads, traffic sensors etc., is proposed. The system allows managing a big data volume of data coming from a variety of sources considering both static and dynamic data. These data are mapped to a smart-city ontology, called KM4City (Knowledge Model for City), and stored into an RDF-Store where they are available for applications via SPARQL queries to provide new services to the users via specific applications of public administration and enterprises. The paper presents the process adopted to produce the ontology and the big data architecture for the knowledge base feeding on the basis of open and private data, and the mechanisms adopted for the data verification, reconciliation and validation. Some examples about the possible usage of the coherent big data knowledge base produced are also offered and are accessible from the RDF-store and related services. The article also presented the work performed about reconciliation algorithms and their comparative assessment and selection. & 2014 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).", "title": "" } ]
scidocsrr
873889c026f3162a4c07a6dfda0fb96f
3D cell nuclei segmentation based on gradient flow tracking
[ { "docid": "f8071cfa96286882defc85c46b7ab866", "text": "A novel method for finding active contours, or snakes as developed by Xu and Prince [1] is presented in this paper. The approach uses a regularization based technique and calculus of variations to find what the authors call a Gradient Vector Field or GVF in binary-values or grayscale images. The GVF is in turn applied to ’pull’ the snake towards the required feature. The approach presented here differs from other snake algorithms in its ability to extend into object concavities and its robust initialization technique. Although their algorithm works better than existing active contour algorithms, it suffers from computational complexity and associated costs in execution, resulting in slow execution time.", "title": "" } ]
[ { "docid": "00ff2d5e2ca1d913cbed769fe59793d4", "text": "In recent work, we showed that putatively adaptive emotion regulation strategies, such as reappraisal and acceptance, have a weaker association with psychopathology than putatively maladaptive strategies, such as rumination, suppression, and avoidance (e.g., Aldao & Nolen-Hoeksema, 2010; Aldao, Nolen-Hoeksema, & Schweizer, 2010). In this investigation, we examined the interaction between adaptive and maladaptive emotion regulation strategies in the prediction of psychopathology symptoms (depression, anxiety, and alcohol problems) concurrently and prospectively. We assessed trait emotion regulation and psychopathology symptoms in a sample of community residents at Time 1 (N = 1,317) and then reassessed psychopathology at Time 2 (N = 1,132). Cross-sectionally, we found that the relationship between adaptive strategies and psychopathology symptoms was moderated by levels of maladaptive strategies: adaptive strategies had a negative association with psychopathology symptoms only at high levels of maladaptive strategies. In contrast, adaptive strategies showed no prospective relationship to psychopathology symptoms either alone or in interaction with maladaptive strategies. We discuss the implications of this investigation for future work on the contextual factors surrounding the deployment of emotion regulation strategies.", "title": "" }, { "docid": "c97eb53dcf3c1a1ecf6455f6489fa93e", "text": "Emotions form a very important and basic aspect of our lives. Whatever we do, whatever we say, somehow does reflect some of our emotions, though may not be directly. To understand the very fundamental behavior of a human, we need to analyze these emotions through some emotional data, also called, the affect data. This data can be text, voice, facial expressions etc. Using this emotional data for analyzing the emotions also forms an interdisciplinary field, called Affective Computing. Computation of emotions is a very challenging task, much work has been done but many more increments are also possible. With the advent of social networking sites, many people tend to get attracted towards analyzing this very text available on these various sites. Analyzing this data over the Internet means we are spanning across the whole continent, going through all the cultures and communities across. This paper summarizes the previous works done in the field of textual emotion analysis based on various emotional models and computational approaches used.", "title": "" }, { "docid": "00ef155ef1cabf2eb1771f3e4e51e8d2", "text": "Clustered regularly interspaced short palindromic repeats (CRISPR) and CRISPR-associated (Cas) proteins constitute an adaptive immune system in prokaryotes. The system preserves memories of prior infections by integrating short segments of foreign DNA, termed spacers, into the CRISPR array in a process termed adaptation. During the past 3 years, significant progress has been made on the genetic requirements and molecular mechanisms of adaptation. Here we review these recent advances, with a focus on the experimental approaches that have been developed, the insights they generated, and a proposed mechanism for self- versus non-self-discrimination during the process of spacer selection. We further describe the regulation of adaptation and the protein players involved in this fascinating process that allows bacteria and archaea to harbor adaptive immunity.", "title": "" }, { "docid": "f7a9c40a3b91b95695395d6af6647cea", "text": "U.S. 
Air Force special tactics operators at times use small wearable computers (SWCs) for mission objectives. The primary pointing device of a SWC is either a touchpad or trackpoint, which is embedded into the chassis of the SWC. In situations where the user cannot directly interact with these pointing devices, the utility of the SWC is decreased. We developed a pointing device called the G3 that can be used for SWCs used by operators. The device utilizes gyroscopic sensors attached to the user’s index finger to move the computer cursor according to the angular velocity of his finger. We showed that, as measured by Fitts’ law, the overall performance and accuracy of the G3 was better than that of the touchpad and trackpoint. These findings suggest that the G3 can adequately be used with SWCs. Additionally, we investigated the G3 ’s utility as a control device for operating micro remotely piloted aircrafts", "title": "" }, { "docid": "d449155f5cbf60e942f0206fea151308", "text": "This paper presents, for the first time, the 3D Glass Photonics (3DGP) technology being developed by Georgia Tech, based on ultra-thin 3D glass interposer [1]. The 3DGP system integrates both optical and electrical interconnects in the same glass substrate using photo-sensitive polymer core, and polymer cladding within an ultra-thin glass substrate. The 3DGP processes are demonstrated using 180 & 100 um thick glass substrates with 30 um diameter via and 8 um wide waveguide structures. The optical vias are used as mode transformer and high-tolerance coupler between fibers and chips. Finite-difference analysis is performed to determine the alignment tolerances of such vias.", "title": "" }, { "docid": "b068cd17374110aab59e2b6a4ae2877d", "text": "For an autonomous mobile robot, an important task to accomplish while maneuvering in outdoor rugged environments is terrain traversability analyzing. Due to the large variety of terrain, a general representation cannot be obtained a priori. Thus, the ability to determine the traversability based on the vehicle motion information and its environments is necessary, and more likely to enable access to interesting sites while insuring the soundness and stability of the mobile robot. We introduce a novel method which can predict motion information based on extracted image features from outdoor university campus environments, to finally estimate the traversability of terrains. A wheeled mobile robot equipped with an optical sensor and an acceleration sensor was used to conduct experiments.", "title": "" }, { "docid": "90494c890c7f9625fa69ea3d8aa3f6ae", "text": "Mobile phones' increasing ubiquity has created many opportunities for personal context sensing. Personal activity is an important part of a user's context, and automatically recognizing it is vital for health and fitness monitoring applications. Recording a stream of activity data enables monitoring patients with chronic conditions affecting ambulation and motion, as well as those undergoing rehabilitation treatments. Modern mobile phones are powerful enough to perform activity classification in real time, but they typically use a static classifier that is trained in advance or require the user to manually add training data after the application is on his/her device. This paper investigates ways of automatically augmenting activity classifiers after they are deployed in an application. 
It compares active learning and three different semi-supervised learning methods, self-learning, En-Co-Training, and democratic co-learning, to determine which show promise for this purpose. The results show that active learning, En-Co-Training, and democratic co-learning perform well when the initial classifier's accuracy is low (75–80%). When the initial accuracy is already high (90%), these methods are no longer effective, but they do not hurt the accuracy either. Overall, active learning gave the highest improvement, but democratic co-learning was almost as good and does not require user interaction. Thus, democratic co-learning would be the best choice for most applications, since it would significantly increase the accuracy for initial classifiers that performed poorly.", "title": "" }, { "docid": "f89e22fc5849415e7a4a2f4f7ee6ea33", "text": "Recently-proposed processor microarchitectures for high Memory Level Parallelism (MLP) promise substantial performance gains. Unfortunately, current cache hierarchies have Miss-Handling Architectures (MHAs) that are too limited to support the required MLPthey need to be redesigned to support 1-2 orders of magnitude more outstanding misses. Yet, designing scalable MHAs is challenging: designs must minimize cache lock-up time and deliver high bandwidth while keeping the area consumption reasonable. This paper presents a novel scalable MHA design for high-MLP processors. Our design introduces two main innovations. First, it is hierarchical, with a small MSHR file per cache bank, and a larger MSHR file shared by all banks. Second, it uses a Bloom filter to reduce searches in the larger MSHR file. The result is a highperformance, area-efficient design. Compared to a state-of-the-art MHA on a high-MLP processor, our design speeds-up some SPECint, SPECfp, and multiprogrammed workloads by a geometric mean of 32%, 50%, and 95%, respectively. Moreover, compared to two extrapolations of current MHA designs, namely a large monolithic MSHR file and a large banked MSHR file, all consuming the same area, our design speeds-up the workloads by a geometric mean of 1-18% and 10-21%, respectively. Finally, our design performs very close to an unlimited-size, ideal MHA.", "title": "" }, { "docid": "363cdcc34c855e712707b5b920fbd113", "text": "This paper presents the design and experimental validation of an anthropomorphic underactuated robotic hand with 15 degrees of freedom and a single actuator. First, the force transmission design of underactuated fingers is revisited. An optimal geometry of the tendon-driven fingers is then obtained. Then, underactuation between the fingers is addressed using differential mechanisms. Tendon routings are proposed and verified experimentally. Finally, a prototype of a 15-degree-of-freedom hand is built and tested. The results demonstrate the feasibility of a humanoid hand with many degrees of freedom and one single degree of actuation.", "title": "" }, { "docid": "c1006d8f8f5f398f171502716b2d07ac", "text": "Performance of instrumental actions in rats is initially sensitive to postconditioning changes in reward value, but after more extended training, behavior comes to be controlled by stimulus-response (S-R) habits that are no longer goal directed. To examine whether sensitization of dopaminergic systems leads to a more rapid transition from action-outcome processes to S-R habits, we examined performance of amphetamine-sensitized rats in an instrumental devaluation task. 
Animals were either sensitized (7 d, 2 mg/kg/d) before training (experiment 1) or sensitized between training and testing (experiment 2). Rats were trained to press a lever for a reward (three sessions) and were then given a test of goal sensitivity by devaluation of the instrumental outcome before testing in extinction. Control animals showed selective sensitivity to devaluation of the instrumental outcome. However, amphetamine sensitization administered before training caused the animals' responding to persist despite the changed value of the reinforcer. This deficit resulted from an inability to use representations of the outcome to guide behavior, because a reacquisition test confirmed that all of the animals had acquired an aversion to the reinforcer. In experiment 2, post-training sensitization did not disrupt normal goal-directed behavior. These findings indicate that amphetamine sensitization leads to a rapid progression from goal-directed to habit-based responding but does not affect the performance of established goal-directed actions.", "title": "" }, { "docid": "490dc6ee9efd084ecf2496b72893a39a", "text": "The rise of blockchain-based cryptocurrencies has led to an explosion of services using distributed ledgers as their underlying infrastructure. However, due to inherently single-service oriented blockchain protocols, such services can bloat the existing ledgers, fail to provide sufficient security, or completely forego the property of trustless auditability. Security concerns, trust restrictions, and scalability limits regarding the resource requirements of users hamper the sustainable development of loosely-coupled services on blockchains. This paper introduces Aspen, a sharded blockchain protocol designed to securely scale with increasing number of services. Aspen shares the same trust model as Bitcoin in a peer-to-peer network that is prone to extreme churn containing Byzantine participants. It enables introduction of new services without compromising the security, leveraging the trust assumptions, or flooding users with irrelevant messages.", "title": "" }, { "docid": "d199e473a8a22618c9040fd345254b10", "text": "One of the first questions a researcher or designer of wearable technology has to answer in the design process is where on the body the device should be worn. It has been almost 20 years since Gemperle et al. wrote \"Design for Wearability\" [17], and although much of her initial guidelines on humans factors surrounding wearability still stand, devices and use cases have changed over time. This paper is a collection of literature and updated guidelines and reasons for on-body location depending on the use of the wearable technology and the affordances provided by different locations on the body.", "title": "" }, { "docid": "f18c9cecdd3b7697af7c160906d6d501", "text": "A new data structure for efficient similarity search in very large dataseis of high-dimensional vectors is introduced. This structure called the inverted multi-index generalizes the inverted index idea by replacing the standard quantization within inverted indices with product quantization. For very similar retrieval complexity and preprocessing time, inverted multi-indices achieve a much denser subdivision of the search space compared to inverted indices, while retaining their memory efficiency. Our experiments with large dataseis of SIFT and GIST vectors demonstrate that because of the denser subdivision, inverted multi-indices are able to return much shorter candidate lists with higher recall. 
Augmented with a suitable reranking procedure, multi-indices were able to improve the speed of approximate nearest neighbor search on the dataset of 1 billion SIFT vectors by an order of magnitude compared to the best previously published systems, while achieving better recall and incurring only few percent of memory overhead.", "title": "" }, { "docid": "ec14996dd3ce3701db628348dfeb63f2", "text": "Eye gaze interaction can provide a convenient and natural addition to user-computer dialogues. We have previously reported on our interaction techniques using eye gaze [10]. While our techniques seemed useful in demonstration, we now investigate their strengths and weaknesses in a controlled setting. In this paper, we present two experiments that compare an interaction technique we developed for object selection based on a where a person is looking with the most commonly used selection method using a mouse. We find that our eye gaze interaction technique is faster than selection with a mouse. The results show that our algorithm, which makes use of knowledge about how the eyes behave, preserves the natural quickness of the eye. Eye gaze interaction is a reasonable addition to computer interaction and is convenient in situations where it is important to use the hands for other tasks. It is particularly beneficial for the larger screen workspaces and virtual environments of the future, and it will become increasingly practical as eye tracker technology matures.", "title": "" }, { "docid": "e3546095a5d0bb39755355c7a3acc875", "text": "We propose to achieve explainable neural machine translation (NMT) by changing the output representation to explain itself. We present a novel approach to NMT which generates the target sentence by monotonically walking through the source sentence. Word reordering is modeled by operations which allow setting markers in the target sentence and move a target-side write head between those markers. In contrast to many modern neural models, our system emits explicit word alignment information which is often crucial to practical machine translation as it improves explainability. Our technique can outperform a plain text system in terms of BLEU score under the recent Transformer architecture on JapaneseEnglish and Portuguese-English, and is within 0.5 BLEU difference on Spanish-English.", "title": "" }, { "docid": "b3d1780cb8187e5993c5adbb7959b7a6", "text": "We present impacto, a device designed to render the haptic sensation of hitting or being hit in virtual reality. The key idea that allows the small and light impacto device to simulate a strong hit is that it decomposes the stimulus: it renders the tactile aspect of being hit by tapping the skin using a solenoid; it adds impact to the hit by thrusting the user's arm backwards using electrical muscle stimulation. The device is self-contained, wireless, and small enough for wearable use, thus leaves the user unencumbered and able to walk around freely in a virtual environment. The device is of generic shape, allowing it to also be worn on legs, so as to enhance the experience of kicking, or merged into props, such as a baseball bat. We demonstrate how to assemble multiple impacto units into a simple haptic suit. 
Participants of our study rated impact simulated using impacto's combination of solenoid hit and electrical muscle stimulation as more realistic than either technique in isolation.", "title": "" }, { "docid": "11b857de21829051b55aa8318c4c97f7", "text": "An optimized split-gate-enhanced UMOSFET (SGE-UMOS) layout design is proposed, and its mechanism is investigated by 2-D and 3-D simulations. The layout features trench surrounding mesa (TSM): First, it optimizes the distribution of electric field density in the outer active mesa, reduces the electric-field crowding effect, and improves the breakdown voltage of the SGE-UMOS device. Second, it is unnecessary to design the layout corner with a large diameter in the termination region for the TSM structure as the conventional mesa surrounding trench (MST) structure, which is more efficient in terms of silicon usage. Rsp.on is reduced when compared with the MST structure within the same rectangular chip area. The BV of SGE-UMOS is increased from 72 to 115 V, and Rsp.on is reduced by approximately 3.5% as compared with the MST structure, due to the application of the TSM. Finally, it needs five masks in the process, and the trenches in active and termination regions are formed with the same processing steps; hence, the manufacturing process is simplified, and the cost is reduced as well.", "title": "" }, { "docid": "80e0a6c270bb146a1a45994d27340639", "text": "BACKGROUND\nThe promotion of active and healthy ageing is becoming increasingly important as the population ages. Physical activity (PA) significantly reduces all-cause mortality and contributes to the prevention of many chronic illnesses. However, the proportion of people globally who are active enough to gain these health benefits is low and decreases with age. Social support (SS) is a social determinant of health that may improve PA in older adults, but the association has not been systematically reviewed. This review had three aims: 1) Systematically review and summarise studies examining the association between SS, or loneliness, and PA in older adults; 2) clarify if specific types of SS are positively associated with PA; and 3) investigate whether the association between SS and PA differs between PA domains.\n\n\nMETHODS\nQuantitative studies examining a relationship between SS, or loneliness, and PA levels in healthy, older adults over 60 were identified using MEDLINE, PSYCInfo, SportDiscus, CINAHL and PubMed, and through reference lists of included studies. Quality of these studies was rated.\n\n\nRESULTS\nThis review included 27 papers, of which 22 were cross sectional studies, three were prospective/longitudinal and two were intervention studies. Overall, the study quality was moderate. Four articles examined the relation of PA with general SS, 17 with SS specific to PA (SSPA), and six with loneliness. The results suggest that there is a positive association between SSPA and PA levels in older adults, especially when it comes from family members. No clear associations were identified between general SS, SSPA from friends, or loneliness and PA levels. When measured separately, leisure time PA (LTPA) was associated with SS in a greater percentage of studies than when a number of PA domains were measured together.\n\n\nCONCLUSIONS\nThe evidence surrounding the relationship between SS, or loneliness, and PA in older adults suggests that people with greater SS for PA are more likely to do LTPA, especially when the SS comes from family members. 
However, high variability in measurement methods used to assess both SS and PA in included studies made it difficult to compare studies.", "title": "" }, { "docid": "71f7ce3b6e4a20a112f6a1ae9c22e8e1", "text": "The neural correlates of many emotional states have been studied, most recently through the technique of fMRI. However, nothing is known about the neural substrates involved in evoking one of the most overwhelming of all affective states, that of romantic love, about which we report here. The activity in the brains of 17 subjects who were deeply in love was scanned using fMRI, while they viewed pictures of their partners, and compared with the activity produced by viewing pictures of three friends of similar age, sex and duration of friendship as their partners. The activity was restricted to foci in the medial insula and the anterior cingulate cortex and, subcortically, in the caudate nucleus and the putamen, all bilaterally. Deactivations were observed in the posterior cingulate gyrus and in the amygdala and were right-lateralized in the prefrontal, parietal and middle temporal cortices. The combination of these sites differs from those in previous studies of emotion, suggesting that a unique network of areas is responsible for evoking this affective state. This leads us to postulate that the principle of functional specialization in the cortex applies to affective states as well.", "title": "" } ]
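The activity-recognition passage above (docid 90494c890c7f9625fa69ea3d8aa3f6ae) compares active learning with self-learning, En-Co-Training, and democratic co-learning for augmenting a deployed classifier. As a rough illustration only, and not the implementation evaluated in that paper, the sketch below shows the simplest of those ideas, a self-learning (self-training) loop, using scikit-learn. The classifier choice, the 0.9 confidence threshold, and the round limit are arbitrary assumptions; in the paper's setting the unlabeled pool would be feature vectors collected after the application is deployed.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def self_train(X_lab, y_lab, X_unlab, conf_threshold=0.9, max_rounds=5):
    """Minimal self-learning loop: repeatedly pseudo-label the most confident
    unlabeled samples, add them to the training set, and refit.
    All inputs are assumed to be NumPy arrays."""
    X_train, y_train = X_lab.copy(), y_lab.copy()
    pool = X_unlab.copy()
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    for _ in range(max_rounds):
        if len(pool) == 0:
            break
        clf.fit(X_train, y_train)
        proba = clf.predict_proba(pool)
        conf = proba.max(axis=1)
        keep = conf >= conf_threshold
        if not keep.any():
            break
        # Pseudo-labels for the confident samples only.
        pseudo_y = clf.classes_[proba[keep].argmax(axis=1)]
        X_train = np.vstack([X_train, pool[keep]])
        y_train = np.concatenate([y_train, pseudo_y])
        pool = pool[~keep]
    clf.fit(X_train, y_train)
    return clf

# Illustrative call: clf = self_train(X_labeled, y_labeled, X_unlabeled)
```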
scidocsrr
25b203d9db914434347acf348782f69c
Secured Data Storage Scheme Based on Block Chain for Agricultural Products Tracking
[ { "docid": "45d3e3e34b3a6217c59e5196d09774ef", "text": "While showing great promise, Bitcoin requires users to wait tens of minutes for transactions to commit, and even then, offering only probabilistic guarantees. This paper introduces ByzCoin, a novel Byzantine consensus protocol that leverages scalable collective signing to commit Bitcoin transactions irreversibly within seconds. ByzCoin achieves Byzantine consensus while preserving Bitcoin’s open membership by dynamically forming hash power-proportionate consensus groups that represent recently-successful block miners. ByzCoin employs communication trees to optimize transaction commitment and verification under normal operation while guaranteeing safety and liveness under Byzantine faults, up to a near-optimal tolerance of f faulty group members among 3 f + 2 total. ByzCoin mitigates double spending and selfish mining attacks by producing collectively signed transaction blocks within one minute of transaction submission. Tree-structured communication further reduces this latency to less than 30 seconds. Due to these optimizations, ByzCoin achieves a throughput higher than Paypal currently handles, with a confirmation latency of 15-20 seconds.", "title": "" } ]
[ { "docid": "f2f53f1bdf451c945053bb8f2b8ca9a1", "text": "In this paper we investigated cybercrime and examined the relevant laws available to combat this crime in Nigeria. Therefore, we had a critical review of criminal laws in Nigeria and also computer network and internet security. The internet as an instrument to aid crime ranges from business espionage, to banking fraud, obtaining un-authorized and sabotaging data in computer networks of some key organizations. We investigated these crimes and noted some useful observations. From our observations, we profound solution to the inadequacies of existing enabling laws. Prevention of cybercrime requires the co-operation of all the citizens and not necessarily the police alone who presently lack specialists in its investigating units to deal with cybercrime. The eradication of this crime is crucial in view of the devastating effect on the image of Nigeria and the attendant consequence on the economy. Out of over 140 million Nigerians less than 5x10-4% are involved in cybercrime across Nigeria.", "title": "" }, { "docid": "4d4f7352f87476ab6cc1528c9c7a3cea", "text": "We consider topic detection without any prior knowledge of category structure or possible categories. Keywords are extracted and clustered based on different similarity measures using the induced k-bisecting clustering algorithm. Evaluation on Wikipedia articles shows that clusters of keywords correlate strongly with the Wikipedia categories of the articles. In addition, we find that a distance measure based on the Jensen-Shannon divergence of probability distributions outperforms the cosine similarity. In particular, a newly proposed term distribution taking co-occurrence of terms into account gives best results.", "title": "" }, { "docid": "64c93db7f7a756a4cbd6ed710cf793ca", "text": "Automated lung cancer detection using computer aided diagnosis (CAD) is an important area in clinical applications. As the manual nodule detection is very time consuming and costly so computerized systems can be helpful for this purpose. In this paper, we propose a computerized system for lung nodule detection in CT scan images. The automated system consists of two stages i.e. lung segmentation and enhancement, feature extraction and classification. The segmentation process will result in separating lung tissue from rest of the image, and only the lung tissues under examination are considered as candidate regions for detecting malignant nodules in lung portion. A feature vector for possible abnormal regions is calculated and regions are classified using neuro fuzzy classifier. It is a fully automatic system that does not require any manual intervention and experimental results show the validity of our system.", "title": "" }, { "docid": "81e88dbd2f01ddddb2b8245e9d9626c9", "text": "The remarkable properties of some recent computer algorithms for neural networks seemed to promise a fresh approach to understanding the computational properties of the brain. Unfortunately most of these neural nets are unrealistic in important respects.", "title": "" }, { "docid": "04953f3a55a77b9a35e7cea663c6387e", "text": "-This paper presents a calibration procedure for a fish-eye lens (a high-distortion lens) mounted on a CCD TV camera. The method is designed to account for the differences in images acquired via a distortion-free lens camera setup and the images obtained by a fish-eye lens camera. 
The calibration procedure essentially defines a mapping between points in the world coordinate system and their corresponding point locations in the image plane. This step is important for applications in computer vision which involve quantitative measurements. The objective of this mapping is to estimate the internal parameters of the camera, including the effective focal length, one-pixel width on the image plane, image distortion center, and distortion coefficients. The number of parameters to be calibrated is reduced by using a calibration pattern with equally spaced dots and assuming a pin-hole model camera behavior for the image center, thus assuming negligible distortion at the image distortion center. Our method employs a non-linear transformation between points in the world coordinate system and their corresponding location on the image plane. A Lagrangian minimization method is used to determine the coefficients of the transformation. The validity and effectiveness of our calibration and distortion correction procedure are confirmed by application of this procedure on real images. Copyright © 1996 Pattern Recognition Society. Published by Elsevier Science Ltd. Camera calibration Lens distortion Intrinsic camera parameters Fish-eye lens Optimization", "title": "" }, { "docid": "10b9516ef7302db13dcf46e038b3f744", "text": "A new fake iris detection method based on 3D feature of iris pattern is proposed. In previous researches, they did not consider 3D structure of iris pattern, but only used 2D features of iris image. However, in our method, by using four near infra-red (NIR) illuminators attached on the left and right sides of iris camera, we could obtain the iris image in which the 3D structure of iris pattern could be shown distinctively. Based on that, we could determine the live or fake iris by wavelet analysis of the 3D feature of iris pattern. Experimental result showed that the Equal Error Rate (EER) of determining the live or fake iris was 0.33%. © 2010 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 20, 162–166, 2010; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.20227", "title": "" }, { "docid": "f8435db6c6ea75944d1c6b521e0f3dd3", "text": "We present the design, fabrication process, and characterization of a multimodal tactile sensor made of polymer materials and metal thin film sensors. The multimodal sensor can detect the hardness, thermal conductivity, temperature, and surface contour of a contact object for comprehensive evaluation of contact objects and events. Polymer materials reduce the cost and the fabrication complexity for the sensor skin, while increasing mechanical flexibility and robustness. 
Experimental tests show the skin is able to differentiate between objects using measured properties. © 2004 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "5ef958d4033ef9e6b2834aa3667252c3", "text": "Intrinsic decomposition from a single image is a highly challenging task, due to its inherent ambiguity and the scarcity of training data. In contrast to traditional fully supervised learning approaches, in this paper we propose learning intrinsic image decomposition by explaining the input image. Our model, the Rendered Intrinsics Network (RIN), joins together an image decomposition pipeline, which predicts reflectance, shape, and lighting conditions given a single image, with a recombination function, a learned shading model used to recompose the original input based off of intrinsic image predictions. Our network can then use unsupervised reconstruction error as an additional signal to improve its intermediate representations. This allows large-scale unlabeled data to be useful during training, and also enables transferring learned knowledge to images of unseen object categories, lighting conditions, and shapes. Extensive experiments demonstrate that our method performs well on both intrinsic image decomposition and knowledge transfer.", "title": "" }, { "docid": "777998d8d239124de463ed28b0a1c27f", "text": "The Scrum software development framework was designed for the hyperproductive state where productivity increases by 5-10 times over waterfall teams and many co-located teams have achieved this effect. In 2006, Xebia (The Netherlands) started localized projects with half Dutch and half Indian team members. After establishing a localized velocity of five times their waterfall competitors on the same project, they moved the Indian members of the team to India and showed stable velocity with fully distributed teams. The ability to achieve hyperproductivity with distributed, outsourced teams was shown to be a repeatable process and a fully distributed model is now the recommended standard when organizations have disciplined Scrum teams with full implementation of XP engineering practices inside the Scrum. Previous studies used overlapping time zones to ease communication and create a single distributed team. The goal of this report is to go one step further and show the same results with team members separated by the 12.5 hour time difference between India and San Francisco. If Scrum works without overlapping time zones then applying it to the mainstream offshoring practice in North America will be possible. In 2008, Xebia India started engagements with partners like TBD.com, a social networking site in San Francisco. TBD has an existing core team of developers doing Scrum with an established local velocity. Adding Xebia India developers to the San Francisco team with a Fully Distributed Scrum model achieved linear scalability with a globally distributed outsourced team.", "title": "" }, { "docid": "39492127ee68a86b33a8a120c8c79f5d", "text": "The Alternating Direction Method of Multipliers (ADMM) has received lots of attention recently due to the tremendous demand from large-scale and data-distributed machine learning applications. In this paper, we present a stochastic setting for optimization problems with non-smooth composite objective functions. To solve this problem, we propose a stochastic ADMM algorithm. Our algorithm applies to a more general class of convex and nonsmooth objective functions, beyond the smooth and separable least squares loss used in lasso. 
We also demonstrate the rates of convergence for our algorithm under various structural assumptions of the stochastic function: O(1/ √ t) for convex functions and O(log t/t) for strongly convex functions. Compared to previous literature, we establish the convergence rate of ADMM for convex problems in terms of both the objective value and the feasibility violation. A novel application named GraphGuided SVM is proposed to demonstrate the usefulness of our algorithm.", "title": "" }, { "docid": "c77c6ea404d9d834ef1be5a1d7222e66", "text": "We introduce the concepts of regular and totally regular bipolar fuzzy graphs. We prove necessary and sufficient condition under which regular bipolar fuzzy graph and totally bipolar fuzzy graph are equivalent. We introduce the notion of bipolar fuzzy line graphs and present some of their properties. We state a necessary and sufficient condition for a bipolar fuzzy graph to be isomorphic to its corresponding bipolar fuzzy line graph. We examine when an isomorphism between two bipolar fuzzy graphs follows from an isomorphism of their corresponding bipolar fuzzy line graphs.", "title": "" }, { "docid": "b3db73c0398e6c0e6a90eac45bb5821f", "text": "The task of video grounding, which temporally localizes a natural language description in a video, plays an important role in understanding videos. Existing studies have adopted strategies of sliding window over the entire video or exhaustively ranking all possible clip-sentence pairs in a presegmented video, which inevitably suffer from exhaustively enumerated candidates. To alleviate this problem, we formulate this task as a problem of sequential decision making by learning an agent which regulates the temporal grounding boundaries progressively based on its policy. Specifically, we propose a reinforcement learning based framework improved by multi-task learning and it shows steady performance gains by considering additional supervised boundary information during training. Our proposed framework achieves state-ofthe-art performance on ActivityNet’18 DenseCaption dataset (Krishna et al. 2017) and Charades-STA dataset (Sigurdsson et al. 2016; Gao et al. 2017) while observing only 10 or less clips per video.", "title": "" }, { "docid": "b4ab47d8ec52d7a8e989bfc9d6c0d173", "text": "In this paper, we consider the problem of recovering the spatial layout of indoor scenes from monocular images. The presence of clutter is a major problem for existing single-view 3D reconstruction algorithms, most of which rely on finding the ground-wall boundary. In most rooms, this boundary is partially or entirely occluded. We gain robustness to clutter by modeling the global room space with a parameteric 3D “box” and by iteratively localizing clutter and refitting the box. To fit the box, we introduce a structured learning algorithm that chooses the set of parameters to minimize error, based on global perspective cues. On a dataset of 308 images, we demonstrate the ability of our algorithm to recover spatial layout in cluttered rooms and show several examples of estimated free space.", "title": "" }, { "docid": "c43b60a681ba0dfe2fa06a2b60ee3d31", "text": "Gamification gradually gains more attention. However, gamification and its successful application is still unclear. There is a lack of insights and theory on the relationships between game design elements, motivation, domain context and user behavior. We want to discover the potentials of data-driven optimization of gamification design, e.g. 
by the application of machine learning techniques on user interaction data. Therefore, we propose data-driven gamification design (DDGD) and conducted a questionnaire with 17 gamification experts. Our results show that respondents regard DDGD as a promising method to improve gamification design and lead to a general definition for DDGD.", "title": "" }, { "docid": "d8e32dfbe629d374e7fd5e9571c20cd4", "text": "Neural machine translation (NMT) offers a novel alternative formulation of translation that is potentially simpler than statistical approaches. However to reach competitive performance, NMT models need to be exceedingly large. In this paper we consider applying knowledge distillation approaches (Bucila et al., 2006; Hinton et al., 2015) that have proven successful for reducing the size of neural models in other domains to the problem of NMT. We demonstrate that standard knowledge distillation applied to word-level prediction can be effective for NMT, and also introduce two novel sequence-level versions of knowledge distillation that further improve performance, and somewhat surprisingly, seem to eliminate the need for beam search (even when applied on the original teacher model). Our best student model runs 10 times faster than its state-of-the-art teacher with little loss in performance. It is also significantly better than a baseline model trained without knowledge distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight pruning on top of knowledge distillation results in a student model that has 13× fewer parameters than the original teacher model, with a decrease of 0.4 BLEU.", "title": "" }, { "docid": "bbc936a3b4cd942ba3f2e1905d237b82", "text": "Silkworm silk is among the most widely used natural fibers for textile and biomedical applications due to its extraordinary mechanical properties and superior biocompatibility. A number of physical and chemical processes have also been developed to reconstruct silk into various forms or to artificially produce silk-like materials. In addition to the direct use and the delicate replication of silk's natural structure and properties, there is a growing interest to introduce more new functionalities into silk while maintaining its advantageous intrinsic properties. In this review we assess various methods and their merits to produce functional silk, specifically those with color and luminescence, through post-processing steps as well as biological approaches. There is a highlight on intrinsically colored and luminescent silk produced directly from silkworms for a wide range of applications, and a discussion on the suitable molecular properties for being incorporated effectively into silk while it is being produced in the silk gland. With these understanding, a new generation of silk containing various functional materials (e.g., drugs, antibiotics and stimuli-sensitive dyes) would be produced for novel applications such as cancer therapy with controlled release feature, wound dressing with monitoring/sensing feature, tissue engineering scaffolds with antibacterial, anticoagulant or anti-inflammatory feature, and many others.", "title": "" }, { "docid": "4f287c788c7e95bf350a998650ff6221", "text": "Wireless sensor network has become an emerging technology due its wide range of applications in object tracking and monitoring, military commands, smart homes, forest fire control, surveillance, etc. 
Wireless sensor network consists of thousands of miniature devices which are called sensors but as it uses wireless media for communication, so security is the major issue. There are number of attacks on wireless of which selective forwarding attack is one of the harmful attacks. This paper describes selective forwarding attack and detection techniques against selective forwarding attacks which have been proposed by different researchers. In selective forwarding attacks, malicious nodes act like normal nodes and selectively drop packets. The selective forwarding attack is a serious threat in WSN. Identifying such attacks is very difficult and sometimes impossible. This paper also presents qualitative analysis of detection techniques in tabular form. Keywordswireless sensor network, attacks, selective forwarding attacks, malicious nodes.", "title": "" }, { "docid": "401f93b2405bd54882fe876365195425", "text": "Previous approaches to training syntaxbased sentiment classification models required phrase-level annotated corpora, which are not readily available in many languages other than English. Thus, we propose the use of tree-structured Long Short-Term Memory with an attention mechanism that pays attention to each subtree of the parse tree. Experimental results indicate that our model achieves the stateof-the-art performance in a Japanese sentiment classification task.", "title": "" }, { "docid": "b85ad4f280359fec469dbb766d3f7bd8", "text": "As we write this chapter, the field of industrial– organizational psychology in the United States has survived its third attempt at a name change. To provide a little perspective, the moniker industrial psychology became popular after World War I, and described a field that was characterized by ability testing and vocational assessment (Koppes, 2003). The current label, industrial– organizational (I-O) psychology, was made official in 1973. The addition of organizational reflected the growing influence of social psychologists and organizational development consultants, as well as the intellectual and social milieu of the period (see Highhouse, 2007). The change to I-O psychology was more of a compromise than a solution—which may have succeeded only to the extent that everyone was equally dissatisfied. The first attempt to change this clunky label, therefore, occurred in 1976. Popular alternatives at the time were personnel psychology , business psychology , and psychology of work . The leading contender, however, was organizational psychology because, according to then-future APA Division 14 president Arthur MacKinney, “all of the Division’s work is grounded in organizational contexts” (MacKinney 1976, p. 2). The issue stalled before ever making it", "title": "" } ]
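One of the passages above (docid d8e32dfbe629d374e7fd5e9571c20cd4, on sequence-level knowledge distillation for NMT) builds on the standard word-level distillation objective, in which the student is trained toward the teacher's per-position output distribution as well as the gold tokens. The NumPy sketch below is only an illustration of that word-level loss under stated assumptions (a fixed interpolation weight alpha and logits already computed per target position); it is not the paper's sequence-level method and not its code.

```python
import numpy as np

def log_softmax(logits):
    # Numerically stable log-softmax over the vocabulary axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

def word_level_distillation_loss(student_logits, teacher_probs, gold_ids, alpha=0.5):
    """Interpolated word-level distillation objective.

    student_logits : (T, V) unnormalized student scores per target position
    teacher_probs  : (T, V) teacher softmax distribution per position
    gold_ids       : (T,)   gold token indices
    alpha          : weight on the distillation term (illustrative default)
    """
    log_p = log_softmax(student_logits)
    # Cross-entropy against the teacher's soft targets, averaged over positions.
    soft_loss = -(teacher_probs * log_p).sum(axis=-1).mean()
    # Standard negative log-likelihood on the gold reference tokens.
    hard_loss = -log_p[np.arange(len(gold_ids)), gold_ids].mean()
    return alpha * soft_loss + (1.0 - alpha) * hard_loss
```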
scidocsrr
488b2f4a33e40d0ab2ea24103587ee1f
Analysis of the reputation system and user contributions on a question answering website: StackOverflow
[ { "docid": "c3325bcfa1b1a9c9012c50fe0bd11161", "text": "We consider the problem of identifying authoritative users in Yahoo! Answers. A common approach is to use link analysis techniques in order to provide a ranked list of users based on their degree of authority. A major problem for such an approach is determining how many users should be chosen as authoritative from a ranked list. To address this problem, we propose a method for automatic identification of authoritative actors. In our approach, we propose to model the authority scores of users as a mixture of gamma distributions. The number of components in the mixture is estimated by the Bayesian Information Criterion (BIC) while the parameters of each component are estimated using the Expectation-Maximization (EM) algorithm. This method allows us to automatically discriminate between authoritative and non-authoritative users. The suitability of our proposal is demonstrated in an empirical study using datasets from Yahoo! Answers.", "title": "" } ]
[ { "docid": "73f9c6fc5dfb00cc9b05bdcd54845965", "text": "The convolutional neural network (CNN), which is one of the deep learning models, has seen much success in a variety of computer vision tasks. However, designing CNN architectures still requires expert knowledge and a lot of trial and error. In this paper, we attempt to automatically construct CNN architectures for an image classification task based on Cartesian genetic programming (CGP). In our method, we adopt highly functional modules, such as convolutional blocks and tensor concatenation, as the node functions in CGP. The CNN structure and connectivity represented by the CGP encoding method are optimized to maximize the validation accuracy. To evaluate the proposed method, we constructed a CNN architecture for the image classification task with the CIFAR-10 dataset. The experimental result shows that the proposed method can be used to automatically find the competitive CNN architecture compared with state-of-the-art models.", "title": "" }, { "docid": "b4622c9a168cd6e6f852bcc640afb4b3", "text": "New developments in osteotomy techniques and methods of fixation have caused a revival of interest of osteotomies around the knee. The current consensus on the indications, patient selection and the factors influencing the outcome after high tibial osteotomy is presented. This paper highlights recent research aimed at joint pressure redistribution, fixation stability and bone healing that has led to improved surgical techniques and a decrease of post-operative time to full weight-bearing.", "title": "" }, { "docid": "d6fbe041eb639e18c3bb9c1ed59d4194", "text": "Based on discrete event-triggered communication scheme (DETCS), this paper is concerned with the satisfactory H ! / H 2 event-triggered fault-tolerant control problem for networked control system (NCS) with α -safety degree and actuator saturation constraint from the perspective of improving satisfaction of fault-tolerant control and saving network resource. Firstly, the closed-loop NCS model with actuator failures and actuator saturation is built based on DETCS; Secondly, based on Lyapunov-Krasovskii function and the definition of α -safety degree given in the paper, a sufficient condition is presented for NCS with the generalized H2 and H! performance, which is the contractively invariant set of fault-tolerance with α -safety degree, and the co-design method for event-triggered parameter and satisfactory faulttolerant controller is also given in this paper. Moreover, the simulation example verifies the feasibility of improving system satisfaction and the effectiveness of saving network resource for the method. Finally, the compatibility analysis of the related indexes is also discussed and analyzed.", "title": "" }, { "docid": "0eba5306a558f2a4018f135ff6e4d29d", "text": "The impact of Automated Trading Systems (ATS) on financial markets is growing every year and the trades generated by an algorithm now account for the majority of orders that arrive at stock exchanges. In this paper we explore how to find a trading strategy via Reinforcement Learning (RL), a branch of Machine Learning (ML) that allows to find an optimal strategy for a sequential decision problem by directly interacting with the environment. We show that the the long-short strategy learned for a synthetic asset, whose price follows a stochastic process with some exploitable patterns, consistently outperforms the market. 
RL thus shows the potential to deal with many financial problems, that can be often formulated as sequential decision problems.", "title": "" }, { "docid": "e6dcae244f91dc2d7e843d9860ac1cfd", "text": "After Disney's Michael Eisner, Miramax's Harvey Weinstein, and Hewlett-Packard's Carly Fiorina fell from their heights of power, the business media quickly proclaimed thatthe reign of abrasive, intimidating leaders was over. However, it's premature to proclaim their extinction. Many great intimidators have done fine for a long time and continue to thrive. Their modus operandi runs counter to a lot of preconceptions about what it takes to be a good leader. They're rough, loud, and in your face. Their tactics include invading others' personal space, staging tantrums, keeping people guessing, and possessing an indisputable command of facts. But make no mistake--great intimidators are not your typical bullies. They're driven by vision, not by sheer ego or malice. Beneath their tough exteriors and sharp edges are some genuine, deep insights into human motivation and organizational behavior. Indeed, these leaders possess political intelligence, which can make the difference between paralysis and successful--if sometimes wrenching--organizational change. Like socially intelligent leaders, politically intelligent leaders are adept at sizing up others, but they notice different things. Those with social intelligence assess people's strengths and figure out how to leverage them; those with political intelligence exploit people's weaknesses and insecurities. Despite all the obvious drawbacks of working under them, great intimidators often attract the best and brightest. And their appeal goes beyond their ability to inspire high performance. Many accomplished professionals who gravitate toward these leaders want to cultivate a little \"inner intimidator\" of their own. In the author's research, quite a few individuals reported having positive relationships with intimidating leaders. In fact, some described these relationships as profoundly educational and even transformational. So before we throw out all the great intimidators, the author argues, we should stop to consider what we would lose.", "title": "" }, { "docid": "d3a6be631dcf65791b4443589acb6880", "text": "We present a deep generative model for Zero-Shot Learning (ZSL). Unlike most existing methods for this problem, that represent each class as a point (via a semantic embedding), we represent each seen/unseen class using a classspecific latent-space distribution, conditioned on class attributes. We use these latent-space distributions as a prior for a supervised variational autoencoder (VAE), which also facilitates learning highly discriminative feature representations for the inputs. The entire framework is learned end-to-end using only the seen-class training data. At test time, the label for an unseen-class test input is the class that maximizes the VAE lower bound. We further extend the model to a (i) semi-supervised/transductive setting by leveraging unlabeled unseen-class data via an unsupervised learning module, and (ii) few-shot learning where we also have a small number of labeled inputs from the unseen classes. We compare our model with several state-of-the-art methods through a comprehensive set of experiments on a variety of benchmark data sets.", "title": "" }, { "docid": "1566c80c4624533292c7442c61f3be15", "text": "Modern software often relies on the combination of several software modules that are developed independently. 
There are use cases where different software libraries from different programming languages are used, e.g., embedding DLL files in JAVA applications. Even more complex is the case when different programming paradigms are combined like within applications with database connections, for instance PHP and SQL. Such a diversification of programming languages and modules in just one software application is becoming more and more important, as this leads to a combination of the strengths of different programming paradigms. But not always, the developers are experts in the different programming languages or even in different programming paradigms. So, it is desirable to provide easy to use interfaces that enable the integration of programs from different programming languages and offer access to different programming paradigms. In this paper we introduce a connector architecture for two programming languages of different paradigms: JAVA as a representative of object oriented programming languages and PROLOG for logic programming. Our approach provides a fast, portable and easy to use communication layer between JAVA and PROLOG. The exchange of information is done via a textual term representation which can be used independently from a deployed PROLOG engine. The proposed connector architecture allows for Object Unification on the JAVA side. We provide an exemplary connector for JAVA and SWI-PROLOG, a well-known PROLOG implementation.", "title": "" }, { "docid": "b15f185258caa9d355fae140a41ae03c", "text": "The current approaches in terms of information security awareness and education are descriptive (i.e. they are not accomplishment-oriented nor do they recognize the factual/normative dualism); and current research has not explored the possibilities offered by motivation/behavioural theories. The first situation, level of descriptiveness, is deemed to be questionable because it may prove eventually that end-users fail to internalize target goals and do not follow security guidelines, for example ± which is inadequate. Moreover, the role of motivation in the area of information security is not considered seriously enough, even though its role has been widely recognised. To tackle such weaknesses, this paper constructs a conceptual foundation for information systems/organizational security awareness. The normative and prescriptive nature of end-user guidelines will be considered. In order to understand human behaviour, the behavioural science framework, consisting in intrinsic motivation, a theory of planned behaviour and a technology acceptance model, will be depicted and applied. Current approaches (such as the campaign) in the area of information security awareness and education will be analysed from the viewpoint of the theoretical framework, resulting in information on their strengths and weaknesses. Finally, a novel persuasion strategy aimed at increasing users' commitment to security guidelines is presented. spite of its significant role, seems to lack adequate foundations. To begin with, current approaches (e.g. McLean, 1992; NIST, 1995, 1998; Perry, 1985; Morwood, 1998), are descriptive in nature. Their inadequacy with respect to point of departure is partly recognized by McLean (1992), who points out that the approaches presented hitherto do not ensure learning. Learning can also be descriptive, however, which makes it an improper objective for security awareness. 
Learning and other concepts or approaches are not irrelevant in the case of security awareness, education or training, but these and other approaches need a reasoned contextual foundation as a point of departure in order to be relevant. For instance, if learning does not reflect the idea of prescriptiveness, the objective of the learning approach includes the fact that users may learn guidelines, but nevertheless fails to comply with them in the end. This state of affairs (level of descriptiveness[6]), is an inadequate objective for a security activity (the idea of prescriptiveness will be thoroughly considered in section 3). Also with regard to the content facet, the important role of motivation (and behavioural theories) with respect to the uses of security systems has been recognised (e.g. by NIST, 1998; Parker, 1998; Baskerville, 1989; Spruit, 1998; SSE-CMM, 1998a; 1998b; Straub, 1990; Straub et al., 1992; Thomson and von Solms, 1998; Warman, 1992) ± but only on an abstract level (as seen in Table I, the issue islevel (as seen in Table I, the issue is not considered from the viewpoint of any particular behavioural theory as yet). Motivation, however, is an issue where a deeper understanding may be of crucial relevance with respect to the effectiveness of approaches based on it. The role, possibilities and constraints of motivation and attitude in the effort to achieve positive results with respect to information security activities will be addressed at a conceptual level from the viewpoints of different theories. The scope of this paper is limited to the content aspects of awareness (Table I) and further end-users, thus resulting in a research contribution that is: a conceptual foundation and a framework for IS security awareness. This is achieved by addressing the following research questions: . What are the premises, nature and point of departure of awareness? . What is the role of attitude, and particularly motivation: the possibilities and requirements for achieving motivation/user acceptance and commitment with respect to information security tasks? . What approaches can be used as a framework to reach the stage of internalization and end-user", "title": "" }, { "docid": "e7bedfa690b456a7a93e5bdae8fff79c", "text": "During the past several years, there have been a significant number of researches conducted in the area of semiconductor final test scheduling problems (SFTSP). As specific example of simultaneous multiple resources scheduling problem (SMRSP), intelligent manufacturing planning and scheduling based on meta-heuristic methods, such as Genetic Algorithm (GA), Simulated Annealing (SA), and Particle Swarm Optimization (PSO), have become the common tools for finding satisfactory solutions within reasonable computational times in real settings. However, limited researches were aiming at analyze the effects of interdependent relations during group decision-making activities. Moreover for complex and large problems, local constraints and objectives from each managerial entity, and their contributions towards the global objectives cannot be effectively represented in a single model. In this paper, we propose a novel Cooperative Estimation of Distribution Algorithm (CEDA) to overcome the challenges mentioned before. The CEDA is established based on divide-and-conquer strategy and a co-evolutionary framework. 
Considerable experiments have been conducted and the results confirmed that CEDA outperforms recent research results for scheduling problems in FMS (Flexible Manufacturing Systems).", "title": "" }, { "docid": "b36cc742445db810d40c884a90e2cf42", "text": "Telecommunication sector generates a huge amount of data due to increasing number of subscribers, rapidly renewable technologies; data based applications and other value added service. This data can be usefully mined for churn analysis and prediction. Significant research had been undertaken by researchers worldwide to understand the data mining practices that can be used for predicting customer churn. This paper provides a review of around 100 recent journal articles starting from year 2000 to present the various data mining techniques used in multiple customer based churn models. It then summarizes the existing telecom literature by highlighting the sample size used, churn variables employed and the findings of different DM techniques. Finally, we list the most popular techniques for churn prediction in telecom as decision trees, regression analysis and clustering, thereby providing a roadmap to new researchers to build upon novel churn management models.", "title": "" }, { "docid": "94a1b63b1adcffd548601c43190e3caf", "text": "The convergence of back-propagation learning is analyzed so as to explain common phenomenon observed by practitioners. Many undesirable behaviors of backprop can be avoided with tricks that are rarely exposed in serious technical publications. This paper gives some of those tricks, and offers explanations of why they work. Many authors have suggested that second-order optimization methods are advantageous for neural net training. It is shown that most “classical” second-order methods are impractical for large neural networks. A few methods are proposed that do not have these limitations.", "title": "" }, { "docid": "159e040b0e74ad1b6124907c28e53daf", "text": "People (pedestrians, drivers, passengers in public transport) use different services on small mobile gadgets on a daily basis. So far, mobile applications don't react to context changes. Running services should adapt to the changing environment and new services should be installed and deployed automatically. We propose a classification of context elements that influence the behavior of the mobile services, focusing on the challenges of the transportation domain. Malware Detection on Mobile Devices Asaf Shabtai*, Ben-Gurion University, Israel Abstract: We present various approaches for mitigating malware on mobile devices which we have implemented and evaluated on Google Android. Our work is divided into the following three segments: a host-based intrusion detection framework; an implementation of SELinux in Android; and static analysis of Android application files. We present various approaches for mitigating malware on mobile devices which we have implemented and evaluated on Google Android. Our work is divided into the following three segments: a host-based intrusion detection framework; an implementation of SELinux in Android; and static analysis of Android application files. Dynamic Approximative Data Caching in Wireless Sensor Networks Nils Hoeller*, IFIS, University of Luebeck Abstract: Communication in Wireless Sensor Networks generally is the most energy consuming task. Retrieving query results from deep within the sensor network therefore consumes a lot of energy and hence shortens the network's lifetime. 
In this work optimizations Communication in Wireless Sensor Networks generally is the most energy consuming task. Retrieving query results from deep within the sensor network therefore consumes a lot of energy and hence shortens the network's lifetime. In this work optimizations for processing queries by using adaptive caching structures are discussed. Results can be retrieved from caches that are placed nearer to the query source. As a result the communication demand is reduced and hence energy is saved by using the cached results. To verify cache coherence in networks with non-reliable communication channels, an approximate update policy is presented. A degree of result quality can be defined for a query to find the adequate cache adaptively. Gossip-based Data Fusion Framework for Radio Resource Map Jin Yang*, Ilmenau University of Technology Abstract: In disaster scenarios, sensor networks are used to detect changes and estimate resource availability to further support the system recovery and rescue process. In this PhD thesis, sensor networks are used to detect available radio resources in order to form a global view of the radio resource map, based on locally sensed and measured data. Data fusion and harvesting techniques are employed for the generation and maintenance of this “radio resource map.” In order to guarantee the flexibility and fault tolerance goals of disaster scenarios, a gossip protocol is used to exchange information. The radio propagation field knowledge is closely coupled to harvesting and fusion protocols in order to achieve efficient fusing of radio measurement data. For the evaluation, simulations will be used to measure the efficiency and robustness in relation to time critical applications and various deployment densities. Actual radio data measurements within the Ilmenau area are being collected for further analysis of the map quality and in order to verify simulation results. In disaster scenarios, sensor networks are used to detect changes and estimate resource availability to further support the system recovery and rescue process. In this PhD thesis, sensor networks are used to detect available radio resources in order to form a global view of the radio resource map, based on locally sensed and measured data. Data fusion and harvesting techniques are employed for the generation and maintenance of this “radio resource map.” In order to guarantee the flexibility and fault tolerance goals of disaster scenarios, a gossip protocol is used to exchange information. The radio propagation field knowledge is closely coupled to harvesting and fusion protocols in order to achieve efficient fusing of radio measurement data. For the evaluation, simulations will be used to measure the efficiency and robustness in relation to time critical applications and various deployment densities. Actual radio data measurements within the Ilmenau area are being collected for further analysis of the map quality and in order to verify simulation results. Dynamic Social Grouping Based Routing in a Mobile Ad-Hoc Network Roy Cabaniss*, Missouri S&T Abstract: Trotta, University of Missouri, Kansas City, Srinivasa Vulli, Missouri University S&T The patterns of movement used by Mobile Ad-Hoc networks are application specific, in the sense that networks use nodes which travel in different paths. When these nodes are used in experiments involving social patterns, such as wildlife tracking, algorithms which detect and use these patterns can be used to improve routing efficiency. 
The intent of this paper is to introduce a routing algorithm which forms a series of social groups which accurately indicate a node’s regular contact patterns while dynamically shifting to represent changes to the social environment. With the social groups formed, a probabilistic routing schema is used to effectively identify which social groups have consistent contact with the base station, and route accordingly. The algorithm can be implemented dynamically, in the sense that the nodes initially have no awareness of their environment, and works to reduce overhead and message traffic while maintaining high delivery ratio. MobileSOA framework for Context-Aware Mobile Applications Aaratee Shrestha*, University of Leipzig Abstract: Mobile application development is more challenging when context-awareness is taken into account. This research introduces the benefit of implementing a Mobile Service Oriented Architecture (SOA). A robust mobile SOA framework is designed for building and operating lightweight and flexible Context-Aware Mobile Application (CAMA). We develop a lightweight and flexible CAMA to show dynamic integration of the systems, where all operations run smoothly in response to the rapidly changing environment using local and remote services. Keywords-service oriented architecture (SOA); mobile service; context-awareness; contextaware mobile application (CAMA). Performance Analysis of Secure Hierarchical Data Aggregation in Wireless Sensor Networks Vimal Kumar*, Missouri S&T Abstract: Data aggregation is a technique used to conserve battery power in wireless sensor networks (WSN). While providing security in such a scenario it is also important that we minimize the number of security operations as they are computationally expensive, without compromising on the security. 
In this paper we evaluate the performance of such an end to end security algorithm. We provide our results from the implementation of the algorithm on mica2 motes and conclude how it is better than traditional hop by hop security. A Communication Efficient Framework for Finding Outliers in Wireless Sensor Networks Dylan McDonald*, MS&T Abstract: Outlier detection is a well studied problem in various fields. The unique challenges of wireless sensor networks make this problem especially challenging. Sensors can detect outliers for a plethora of reasons and these reasons need to be inferred in real time. Here, we present a new communication technique to find outliers in a wireless sensor network. Communication is minimized through controlling when sensors are allowed to communicate. At the same time, minimal assumptions are made about the nature of the data set as to ", "title": "" }, { "docid": "71cf11678093ac010abcfb45c25e60ab", "text": "Anti-phishing detection solutions employed in industry use blacklist-based approaches to achieve low false-positive rates, but blacklist approaches utilize website URLs only. This study analyses and combines phishing emails and phishing web-forms in a single framework, which allows feature extraction and feature model construction. The outcome should classify between phishing, suspicious and legitimate, and detect emerging phishing attacks accurately. The intelligent phishing security for online approach is based on machine learning techniques, using an Adaptive Neuro-Fuzzy Inference System and a combination of sources from which features are extracted. An experiment was performed using a two-fold cross validation method to measure the system's accuracy. The intelligent phishing security approach achieved a higher accuracy. The finding indicates that the feature model from combined sources can detect phishing websites with a higher accuracy. This paper contributes to the phishing field a combined feature model whose sources are integrated in a single framework. The implication is that phishing attacks evolve rapidly; therefore, regular updates and staying ahead of phishing strategy are the way forward. Keywords—Phishing websites; fuzzy models; feature model; intelligent detection; neuro fuzzy; fuzzy inference system", "title": "" }, { "docid": "3e8b47e43adde845613c78eb4094db25", "text": "Recently, the problem of opinion spam has become widespread and has attracted a lot of research attention. While the problem has been approached on a variety of dimensions, the temporal dynamics in which opinion spamming operates is unclear. Are there specific spamming policies that spammers employ? What kind of changes happen to the dynamics of the truthful ratings on entities? How does buffered spamming operate for entities that need spamming to retain threshold popularity, and reduced spamming for entities enjoying better success? We analyze these questions in the light of time-series analysis on Yelp. 
Our analyses discover various temporal patterns and their relationships with the rate at which fake reviews are posted. Building on our analyses, we employ vector autoregression to predict the rate of deception across different spamming policies. Next, we explore the effect of filtered reviews on (long-term and imminent) future rating and popularity prediction of entities. Our results discover novel temporal dynamics of spamming which are intuitive, arguable and also render confidence on Yelp’s filtering. Lastly, we leverage our discovered temporal patterns in deception detection. Experimental results on large-scale reviews show the effectiveness of our approach that significantly improves the existing approaches.", "title": "" }, { "docid": "444364c2ab97bef660ab322420fc5158", "text": "We present a telerobotics research platform that provides complete access to all levels of control via open-source electronics and software. The electronics employs an FPGA to enable a centralized computation and distributed I/O architecture in which all control computations are implemented in a familiar development environment (Linux PC) and low-latency I/O is performed over an IEEE-1394a (FireWire) bus at speeds up to 400 Mbits/sec. The mechanical components are obtained from retired first-generation da Vinci ® Surgical Systems. This system is currently installed at 11 research institutions, with additional installations underway, thereby creating a research community around a common open-source hardware and software platform.", "title": "" }, { "docid": "7bdbfd11a4aa723d3b5361f689d93698", "text": "We discuss the characteristics of constructive news comments, and present methods to identify them. First, we define the notion of constructiveness. Second, we annotate a corpus for constructiveness. Third, we explore whether available argumentation corpora can be useful to identify constructiveness in news comments. Our model trained on argumentation corpora achieves a top accuracy of 72.59% (baseline=49.44%) on our crowdannotated test data. Finally, we examine the relation between constructiveness and toxicity. In our crowd-annotated data, 21.42% of the non-constructive comments and 17.89% of the constructive comments are toxic, suggesting that non-constructive comments are not much more toxic than constructive comments.", "title": "" }, { "docid": "7b4140cb95fbaae6e272326ab59fb884", "text": "Network intrusion detection systems (NIDSs) play a crucial role in defending computer networks. However, there are concerns regarding the feasibility and sustainability of current approaches when faced with the demands of modern networks. More specifically, these concerns relate to the increasing levels of required human interaction and the decreasing levels of detection accuracy. This paper presents a novel deep learning technique for intrusion detection, which addresses these concerns. We detail our proposed nonsymmetric deep autoencoder (NDAE) for unsupervised feature learning. Furthermore, we also propose our novel deep learning classification model constructed using stacked NDAEs. Our proposed classifier has been implemented in graphics processing unit (GPU)-enabled TensorFlow and evaluated using the benchmark KDD Cup ’99 and NSL-KDD datasets. 
Promising results have been obtained from our model thus far, demonstrating improvements over existing approaches and the strong potential for use in modern NIDSs.", "title": "" }, { "docid": "26b46c13726dcf1a2f23ff41086b6392", "text": "From relatively unknown, just 5 years ago, High Dynamic Range (HDR) video is now having a major impact on most aspects of imaging. Although one of the five components of the specification for UHDTV, ITU-R Recommendation BT.2020 in 2012, it is only when it became apparent that HDR could help accelerate the slow penetration of 4K into the TV and home-cinema market, that HDR suddenly started to gain significant attention. But what exactly is HDR? Dynamic range is defined as the difference between the largest and smallest useable signal. In photography this has meant the luminance range of the scene being photographed. However, as HDR grows as a “marketing tool” this definition is becoming less “black & white”. This paper considers the different ways in which the term HDR is now being exploited; the challenges of achieving a complete efficient HDR pipeline from capture to display for a variety of applications; and, what could be done to help ensure HDR algorithms are future proof as HDR technology rapidly improves.", "title": "" }, { "docid": "91dd4e52f1ab0752499b9026ff6cc8d7", "text": "Augmented reality has recently achieved a rapid growth through its applications in various industries, including education and entertainment. Despite the growing attraction of augmented reality, trend analyses in this emerging technology have relied on qualitative literature review, failing to provide comprehensive competitive intelligence analysis using objective data. Therefore, tracing industrial competition trends in augmented reality will provide technology experts with a better understanding of evolving competition trends and insights for further technology and sustainable business planning. In this paper, we apply a topic modeling approach to 3595 patents related to augmented reality technology to identify technology subjects and their knowledge stocks, thereby analyzing industrial competitive intelligence in light of technology subject and firm levels. As a result, we were able to obtain some findings from an inventional viewpoint: technological development of augmented reality will soon enter a mature stage, technologies of infrastructural requirements have been a focal subject since 2001, and several software firms and camera manufacturing firms have dominated the recent development of augmented reality.", "title": "" }, { "docid": "9ce5377315e50c70337aa4b7d6512de0", "text": "This paper discusses two main software engineering methodologies to system development, the waterfall model and the objectoriented approach. A review of literature reveals that waterfall model uses linear approach and is only suitable for sequential or procedural design. In waterfall, errors can only be detected at the end of the whole process and it may be difficult going back to repeat the entire process because the processes are sequential. Also, software based on waterfall approach is difficult to maintain and upgrade due to lack of integration between software components. On the other hand, the Object Oriented approach enables software systems to be developed as integration of software objects that work together to make a holistic and functional system. The software objects are independent of each other, allowing easy upgrading and maintenance of software codes. 
The paper also highlights the merits and demerits of each of the approaches. This work concludes with a discussion of the appropriateness of each approach in relation to the complexity of the problem domain.", "title": "" } ]
scidocsrr
542aa0ade7bc4fdd5d7c7ad79b5c3c04
Online Context-Aware Recommendation with Time Varying Multi-Armed Bandit
[ { "docid": "341b0588f323d199275e89d8c33d6b47", "text": "We propose novel multi-armed bandit (explore/exploit) schemes to maximize total clicks on a content module published regularly on Yahoo! Intuitively, one can ``explore'' each candidate item by displaying it to a small fraction of user visits to estimate the item's click-through rate (CTR), and then ``exploit'' high CTR items in order to maximize clicks. While bandit methods that seek to find the optimal trade-off between explore and exploit have been studied for decades, existing solutions are not satisfactory for web content publishing applications where dynamic set of items with short lifetimes, delayed feedback and non-stationary reward (CTR) distributions are typical. In this paper, we develop a Bayesian solution and extend several existing schemes to our setting. Through extensive evaluation with nine bandit schemes, we show that our Bayesian solution is uniformly better in several scenarios. We also study the empirical characteristics of our schemes and provide useful insights on the strengths and weaknesses of each. Finally, we validate our results with a ``side-by-side'' comparison of schemes through live experiments conducted on a random sample of real user visits to Yahoo!", "title": "" }, { "docid": "04d06629a3683536fb94228f6295a7d3", "text": "User profiling is an important step for solving the problem of personalized news recommendation. Traditional user profiling techniques often construct profiles of users based on static historical data accessed by users. However, due to the frequent updating of news repository, it is possible that a user’s finegrained reading preference would evolve over time while his/her long-term interest remains stable. Therefore, it is imperative to reason on such preference evaluation for user profiling in news recommenders. Besides, in content-based news recommenders, a user’s preference tends to be stable due to the mechanism of selecting similar content-wise news articles with respect to the user’s profile. To activate users’ reading motivations, a successful recommender needs to introduce ‘‘somewhat novel’’ articles to", "title": "" } ]
[ { "docid": "2191ed336872593e0abcbfea60b0502b", "text": "The modern mobile communication systems requires high gain, large bandwidth and minimal size antenna's that are capable of providing better performance over a wide range of frequency spectrum. This requirement leads to the design of Microstrip patch antenna. This paper proposes the design of 4-Element microstrip patch antenna array which uses the corporate feed technique for excitation. Low dielectric constant substrates are generally preferred for maximum radiation. Thus it prefers Taconic as a dielectric substrate. Desired patch antenna design is initially simulated by using high frequency simulation software SONNET and FEKO and patch antenna is designed as per requirements. Antenna dimensions such as Length (L), Width (W) and substrate Dielectric Constant (εr) and parameters like Return Loss, Gain and Impedance are calculated using high frequency simulation software. The antenna has been designed for the range 9-11 GHz. Hence this antenna is highly suitable for X-band applications.", "title": "" }, { "docid": "2af524d484b7bb82db2dd92727a49fff", "text": "Computer-based multimedia learning environments — consisting of pictures (such as animation) and words (such as narration) — offer a potentially powerful venue for improving student understanding. How can we use words and pictures to help people understand how scientific systems work, such as how a lightning storm develops, how the human respiratory system operates, or how a bicycle tire pump works? This paper presents a cognitive theory of multimedia learning which draws on dual coding theory, cognitive load theory, and constructivist learning theory. Based on the theory, principles of instructional design for fostering multimedia learning are derived and tested. The multiple representation principle states that it is better to present an explanation in words and pictures than solely in words. The contiguity principle is that it is better to present corresponding words and pictures simultaneously rather than separately when giving a multimedia explanation. The coherence principle is that multimedia explanations are better understood when they include few rather than many extraneous words and sounds. The modality principle is that it is better to present words as auditory narration than as visual on-screen text. The redundancy principle is that it is better to present animation and narration than to present animation, narration, and on-screen text. By beginning with a cognitive theory of how learners process multimedia information, we have been able to conduct focused research that yields some preliminary principles of instructional design for multimedia messages.  2001 Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "05a76f64a6acbcf48b7ac36785009db3", "text": "Mixed methods research is an approach that combines quantitative and qualitative research methods in the same research inquiry. Such work can help develop rich insights into various phenomena of interest that cannot be fully understood using only a quantitative or a qualitative method. Notwithstanding the benefits and repeated calls for such work, there is a dearth of mixed methods research in information systems. Building on the literature on recent methodological advances in mixed methods research, we develop a set of guidelines for conducting mixed methods research in IS. 
We particularly elaborate on three important aspects of conducting mixed methods research: (1) appropriateness of a mixed methods approach; (2) development of meta-inferences (i.e., substantive theory) from mixed methods research; and (3) assessment of the quality of meta-inferences (i.e., validation of mixed methods research). The applicability of these guidelines is illustrated using two published IS papers that used mixed methods.", "title": "" }, { "docid": "0441fb016923cd0b7676d3219951c230", "text": "Globally modeling and reasoning over relations between regions can be beneficial for many computer vision tasks on both images and videos. Convolutional Neural Networks (CNNs) excel at modeling local relations by convolution operations, but they are typically inefficient at capturing global relations between distant regions and require stacking multiple convolution layers. In this work, we propose a new approach for reasoning globally in which a set of features are globally aggregated over the coordinate space and then projected to an interaction space where relational reasoning can be efficiently computed. After reasoning, relation-aware features are distributed back to the original coordinate space for down-stream tasks. We further present a highly efficient instantiation of the proposed approach and introduce the Global Reasoning unit (GloRe unit) that implements the coordinate-interaction space mapping by weighted global pooling and weighted broadcasting, and the relation reasoning via graph convolution on a small graph in interaction space. The proposed GloRe unit is lightweight, end-to-end trainable and can be easily plugged into existing CNNs for a wide range of tasks. Extensive experiments show our GloRe unit can consistently boost the performance of state-of-the-art backbone architectures, including ResNet [15, 16], ResNeXt [33], SE-Net [18] and DPN [9], for both 2D and 3D CNNs, on image classification, semantic segmentation and video action recognition task.", "title": "" }, { "docid": "4d5820e9e137c96d4d63e25772c577c6", "text": "facial topography clinical anatomy of the face upsky facial topography: clinical anatomy of the face by joel e facial topography clinical anatomy of the face [c796.ebook] free ebook facial topography: clinical the anatomy of the aging face: volume loss and changes in facial topographyclinical anatomy of the face ebook facial anatomy mccc dmca / copyrighted works removal title anatomy for plastic surgery thieme medical publishers the face sample quintessence publishing! 
facial anatomy 3aface academy facial topography clinical anatomy of the face ebook download the face der medizinverlag facial topography clinical anatomy of the face liive facial topography clinical anatomy of the face user clinical anatomy of the head univerzita karlova pdf download the face: pictorial atlas of clinical anatomy clinical anatomy anatomic landmarks for localisation of j m perry co v commissioner internal bouga international journal of anatomy and research, case report anatomy and physiology of the aging neck the clinics topographical anatomy of the head eng nikolaizarovo crc title list: change of ownership a guide to childrens books about asian americans fractography: observing, measuring and interpreting nystce students with disabilities study guide tibca army ranger survival guide compax sharp grill 2 convection manual iwsun nursing diagnosis handbook 9th edition apa citation the surgical management of facial nerve injury lipteh the outermost house alongz cosmetic voted best plastic surgeon in dallas texas c tait a dachau 1933 1945 teleip select your ebook amazon s3 quotation of books all india institute of medical latest ten anatomy acquisitions british dental association lindens complete auto repair reviews mires department of topographic anatomy and operative surgery", "title": "" }, { "docid": "34a7d306a788ab925db8d0afe4c21c5a", "text": "The Sandbox is a flexible and expressive thinking environment that supports both ad-hoc and more formal analytical tasks. It is the evidence marshalling and sensemaking component for the analytical software environment called nSpace. This paper presents innovative Sandbox human information interaction capabilities and the rationale underlying them including direct observations of analysis work as well as structured interviews. Key capabilities for the Sandbox include “put-this-there” cognition, automatic process model templates, gestures for the fluid expression of thought, assertions with evidence and scalability mechanisms to support larger analysis tasks. The Sandbox integrates advanced computational linguistic functions using a Web Services interface and protocol. An independent third party evaluation experiment with the Sandbox has been completed. The experiment showed that analyst subjects using the Sandbox did higher quality analysis in less time than with standard tools. Usability test results indicated the analysts became proficient in using the Sandbox with three hours of training.", "title": "" }, { "docid": "43bf765a516109b885db5b6d1b873c33", "text": "The attention economy motivates participation in peer-produced sites on the Web like YouTube and Wikipedia. However, this economy appears to break down at work. We studied a large internal corporate blogging community using log files and interviews and found that employees expected to receive attention when they contributed to blogs, but these expectations often went unmet. Like in the external blogosphere, a few people received most of the attention, and many people received little or none. Employees expressed frustration if they invested time and received little or no perceived return on investment. While many corporations are looking to adopt Web-based communication tools like blogs, wikis, and forums, these efforts will fail unless employees are motivated to participate and contribute content. 
We identify where the attention economy breaks down in a corporate blog community and suggest mechanisms for improvement.", "title": "" }, { "docid": "315fe02072069d3fe7f2a03f251dde31", "text": "We introduce a novel architecture for dependency parsing: stack-pointer networks (STACKPTR). Combining pointer networks (Vinyals et al., 2015) with an internal stack, the proposed model first reads and encodes the whole sentence, then builds the dependency tree top-down (from root-to-leaf) in a depth-first fashion. The stack tracks the status of the depthfirst search and the pointer networks select one child for the word at the top of the stack at each step. The STACKPTR parser benefits from the information of the whole sentence and all previously derived subtree structures, and removes the leftto-right restriction in classical transitionbased parsers. Yet, the number of steps for building any (including non-projective) parse tree is linear in the length of the sentence just as other transition-based parsers, yielding an efficient decoding algorithm with O(n2) time complexity. We evaluate our model on 29 treebanks spanning 20 languages and different dependency annotation schemas, and achieve state-of-theart performance on 21 of them.", "title": "" }, { "docid": "1ddfafbef17009b003471aee85fecda9", "text": "Mutational witches’ broom clones show a growth redistribution compared with normal clones. The main factor affecting the variations in mutant clone morphology is the strength of the mutation. Mutational witches’ broom is a fragment of the tree crown with abnormally dense branching and slower shoot growth, compared with those of a normal crown. Thousands of dwarf ornamental cultivars widely used in landscape design have been developed from mutational witches’ broom. In this study, crown morphology was compared between grafted clones of witches’ broom and normal clones from the same trees. The results quantify variations in crown structure between the mutants and normal clones. The sample materials were 10 pairs of grafted witches’ broom and normal crown clones of Pinus sibirica. The mutant and normal clones were discrete sets. Many morphological traits were affected in the mutants. Compared with the normal-crown clones, the mutants showed male sterility, decreased apical dominance, reduced shoot and needle length, and increased branching and seed cone bearing. In terms of morphogenic changes induced by the mutation, the shoots of the witches’ broom clones were bicyclic, generated seed cones that were much shorter than those of normal clones, and had acquired the ability to form lateral buds. The extent of interclonal variation was significantly greater among witches’ broom clones than among normal clones. Compared with the morphological traits of normal clones, those of the mutants were shifted in the same direction but to different extents. Therefore, mutational witches’ broom is the expression of a mutation that can be weak, medium, or strong. These results will be useful for unraveling the genetic basis of witches’ broom in conifers and for breeding new dwarf cultivars.", "title": "" }, { "docid": "63fef6099108f7990da0a7687e422e14", "text": "The IWSLT 2017 evaluation campaign has organised three tasks. The Multilingual task, which is about training machine translation systems handling many-to-many language directions, including so-called zero-shot directions. 
The Dialogue task, which calls for the integration of context information in machine translation, in order to resolve anaphoric references that typically occur in human-human dialogue turns. And, finally, the Lecture task, which offers the challenge of automatically transcribing and translating real-life university lectures. Following the tradition of these reports, we will described all tasks in detail and present the results of all runs submitted by their participants.", "title": "" }, { "docid": "17c7f83f622e3132b0385a6343992fb4", "text": "This chapter presents a brief introduction to the developmental and educational literature linking children's moral emotions to cognitive moral development. A central premise of the chapter is that an integrative developmental perspective on moral emotions and moral cognition provides an important conceptual framework for understanding children's emerging morality and designing developmentally sensitive moral intervention strategies. The subsequent chapters present promising conceptual approaches and empirical evidence linking children's moral emotions to moral cognition. Examples of integrated educational interventions intended to enhance children's moral development are presented and discussed.", "title": "" }, { "docid": "2c599927a97e09bbff3cb6f85953d36d", "text": "Information systems technology, computer-supported cooperative work practice, and organizational modeling and planning theories have evolved with only accidental contact to each other. Cooperative information systems is a relatively young research area which tries to systematically investigate the synergies between these research fields, driven by the observation that change management is the central issue facing all three areas today and that all three fields have indeed developed rather similar strategies to cope with change. In this paper, we therefore propose a framework which views cooperative information systems as composed from three interrelated facets, viz. the system facet, the group collaboration facet, and the organizational facet. We present an overview of these facets, emphasizing strategies they have developed over the past few years to accommodate change. We also discuss the propagation of change across the facets, and sketch a basic software architecture intended to support the rapid construction and evolution of cooperative information systems on top of existing organizational and technical legacy.", "title": "" }, { "docid": "b2c05f820195154dbbb76ee68740b5d9", "text": "DeConvNet, Guided BackProp, LRP, were invented to better understand deep neural networks. We show that these methods do not produce the theoretically correct explanation for a linear model. Yet they are used on multi-layer networks with millions of parameters. This is a cause for concern since linear models are simple neural networks. We argue that explanation methods for neural nets should work reliably in the limit of simplicity, the linear models. Based on our analysis of linear models we propose a generalization that yields two explanation techniques (PatternNet and PatternAttribution) that are theoretically sound for linear models and produce improved explanations for deep networks.", "title": "" }, { "docid": "0c1f01d9861783498c44c7c3d0acd57e", "text": "We understand a sociotechnical system as a multistakeholder cyber-physical system. We introduce governance as the administration of such a system by the stakeholders themselves. 
In this regard, governance is a peer-to-peer notion and contrasts with traditional management, which is a top-down hierarchical notion. Traditionally, there is no computational support for governance and it is achieved through out-of-band interactions among system administrators. Not surprisingly, traditional approaches simply do not scale up to large sociotechnical systems.\n We develop an approach for governance based on a computational representation of norms in organizations. Our approach is motivated by the Ocean Observatory Initiative, a thirty-year $400 million project, which supports a variety of resources dealing with monitoring and studying the world's oceans. These resources include autonomous underwater vehicles, ocean gliders, buoys, and other instrumentation as well as more traditional computational resources. Our approach has the benefit of directly reflecting stakeholder needs and assuring stakeholders of the correctness of the resulting governance decisions while yielding adaptive resource allocation in the face of changes in both stakeholder needs and physical circumstances.", "title": "" }, { "docid": "32e1b7734ba1b26a6a27e0504db07643", "text": "Due to its high popularity and rich functionalities, the Portable Document Format (PDF) has become a major vector for malware propagation. To detect malicious PDF files, the first step is to extract and de-obfuscate Java Script codes from the document, for which an effective technique is yet to be created. However, existing static methods cannot de-obfuscate Java Script codes, existing dynamic methods bring high overhead, and existing hybrid methods introduce high false negatives. Therefore, in this paper, we present MPScan, a scanner that combines dynamic Java Script de-obfuscation and static malware detection. By hooking the Adobe Reader's native Java Script engine, Java Script source code and op-code can be extracted on the fly after the source code is parsed and then executed. We also perform a multilevel analysis on the resulting Java Script strings and op-code to detect malware. Our evaluation shows that regardless of obfuscation techniques, MPScan can effectively de-obfuscate and detect 98% malicious PDF samples.", "title": "" }, { "docid": "d6ed9594536cada2d857a876fd9e21ae", "text": "As the increasing growth of the computing technology and network technology, it also increases data storage demands. Data Security has become a crucial issue in electronic communication. Secret writing has come up as a solution, and plays a vital role in data security system. It uses some algorithms to scramble data into unreadable text which might be only being decrypted by party those having the associated key. These algorithms consume a major amount of computing resources such as memory and battery power and computation time. This paper accomplishes comparative analysis of encryption standards DES, AES and RSA considering various parameters such as computation time, memory usages. A cryptographic tool is used for performing experiments. Experiments results are given to analyses the effectiveness of symmetric and asymmetric algorithms. Keywords— Encryption, secret key encryption, public key encryption, DES, AES, RSA encryption, Symmetric", "title": "" }, { "docid": "c432a44e48e777a7a3316c1474f0aa12", "text": "In this paper, we present an algorithm that generates high dynamic range (HDR) images from multi-exposed low dynamic range (LDR) stereo images. The vast majority of cameras in the market only capture a limited dynamic range of a scene. 
Our algorithm first computes the disparity map between the stereo images. The disparity map is used to compute the camera response function which in turn results in the scene radiance maps. A refinement step for the disparity map is then applied to eliminate edge artifacts in the final HDR image. Existing methods generate HDR images of good quality for still or slow motion scenes, but give defects when the motion is fast. Our algorithm can deal with images taken during fast motion scenes and tolerate saturation and radiometric changes better than other stereo matching algorithms.", "title": "" }, { "docid": "e31b5b120d485d77e8743132f028d8b3", "text": "In this paper, we consider the problem of linking users across multiple online communities. Specifically, we focus on the alias-disambiguation step of this user linking task, which is meant to differentiate users with the same usernames. We start quantitatively analyzing the importance of the alias-disambiguation step by conducting a survey on 153 volunteers and an experimental analysis on a large dataset of About.me (75,472 users). The analysis shows that the alias-disambiguation solution can address a major part of the user linking problem in terms of the coverage of true pairwise decisions (46.8%). To the best of our knowledge, this is the first study on human behaviors with regards to the usages of online usernames. We then cast the alias-disambiguation step as a pairwise classification problem and propose a novel unsupervised approach. The key idea of our approach is to automatically label training instances based on two observations: (a) rare usernames are likely owned by a single natural person, e.g. pennystar88 as a positive instance; (b) common usernames are likely owned by different natural persons, e.g. tank as a negative instance. We propose using the n-gram probabilities of usernames to estimate the rareness or commonness of usernames. Moreover, these two observations are verified by using the dataset of Yahoo! Answers. The empirical evaluations on 53 forums verify: (a) the effectiveness of the classifiers with the automatically generated training data and (b) that the rareness and commonness of usernames can help user linking. We also analyze the cases where the classifiers fail.", "title": "" }, { "docid": "c79a3f831a7bcbcd164397a499cece29", "text": "A new MOS-C bandpass-low-pass filter using the current feedback operational amplifier (CFOA) is presented. The filter employs two CFOA’s, eight MOS transistors operating in the nonsaturation region, and two grounded capacitors. The proposed MOS-C filter has the advantage of independent control ofQ and !o. PSpice simulation results for the proposed filter are given.", "title": "" }, { "docid": "c175910d1809ad6dc073f79e4ca15c0c", "text": "The Global Positioning System (GPS) double-difference carrier-phase data are biased by an integer number of cycles. In this contribution a new method is introduced that enables very fast integer least-squares estimation of the ambiguities. The method makes use of an ambiguity transformation that allows one to reformulate the original ambiguity estimation problem as a new problem that is much easier to solve. The transformation aims at decorrelating the least-squares ambiguities and is based on an integer approximation of the conditional least-squares transformation. 
And through a flattening of the typical discontinuity in the GPS-spectrum of conditional variances of the ambiguities, the transformation returns new ambiguities that show a dramatic improvement in precision in comparison with the original double-difference ambiguities.", "title": "" } ]
scidocsrr
c771b5e6de457ce893060e7b297d5764
Design Automation for Binarized Neural Networks: A Quantum Leap Opportunity?
[ { "docid": "ab7a69accb17ff99642ab225facec95d", "text": "It is challenging to adopt computing-intensive and parameter-rich Convolutional Neural Networks (CNNs) in mobile devices due to limited hardware resources and low power budgets. To support multiple concurrently running applications, one mobile device needs to perform multiple CNN tests simultaneously in real-time. Previous solutions cannot guarantee a high enough frame rate when serving multiple applications with reasonable hardware and power cost. In this paper, we present a novel process-in-memory architecture to process emerging binary CNN tests in Wide-IO2 DRAMs. Compared to state-of-the-art accelerators, our design improves CNN test performance by 4× ∼ 11× with small hardware and power overhead.", "title": "" }, { "docid": "0d4b9fe319c7ca3ffcd6974ccf9b2fbd", "text": "Research has shown that convolutional neural networks contain significant redundancy, and high classification accuracy can be obtained even when weights and activations are reduced from floating point to binary values. In this paper, we present FINN, a framework for building fast and flexible FPGA accelerators using a flexible heterogeneous streaming architecture. By utilizing a novel set of optimizations that enable efficient mapping of binarized neural networks to hardware, we implement fully connected, convolutional and pooling layers, with per-layer compute resources being tailored to user-provided throughput requirements. On a ZC706 embedded FPGA platform drawing less than 25 W total system power, we demonstrate up to 12.3 million image classifications per second with 0.31 μs latency on the MNIST dataset with 95.8% accuracy, and 21906 image classifications per second with 283 μs latency on the CIFAR-10 and SVHN datasets with respectively 80.1% and 94.9% accuracy. To the best of our knowledge, ours are the fastest classification rates reported to date on these benchmarks.", "title": "" }, { "docid": "5495aeaa072a1f8f696298ebc7432045", "text": "Deep neural networks (DNNs) are widely used in data analytics, since they deliver state-of-the-art accuracies. Binarized neural networks (BNNs) are recently proposed optimized variant of DNNs. BNNs constraint network weight and/or neuron value to either +1 or −1, which is representable in 1 bit. This leads to dramatic algorithm efficiency improvement, due to reduction in the memory and computational demands. This paper evaluates the opportunity to further improve the execution efficiency of BNNs through hardware acceleration. We first proposed a BNN hardware accelerator design. Then, we implemented the proposed accelerator on Aria 10 FPGA as well as 14-nm ASIC, and compared them against optimized software on Xeon server CPU, Nvidia Titan X server GPU, and Nvidia TX1 mobile GPU. Our evaluation shows that FPGA provides superior efficiency over CPU and GPU. Even though CPU and GPU offer high peak theoretical performance, they are not as efficiently utilized since BNNs rely on binarized bit-level operations that are better suited for custom hardware. Finally, even though ASIC is still more efficient, FPGA can provide orders of magnitudes in efficiency improvements over software, without having to lock into a fixed ASIC solution.", "title": "" }, { "docid": "395afccf9891cfcc8e14d82a6e968918", "text": "In this paper, we present an ultra-low-power smart visual sensor architecture. 
A 10.6-μW low-resolution contrast-based imager featuring internal analog preprocessing is coupled with an energy-efficient quad-core cluster processor that exploits near-threshold computing within a few-milliwatt power envelope. We demonstrate the capability of the smart camera on a moving object detection framework. The computational load is distributed among mixed-signal pixel and digital parallel processing. Such local processing reduces the amount of digital data to be sent out of the node by 91%. Exploiting context-aware analog circuits, the imager only dispatches meaningful postprocessed data to the processing unit, lowering the sensor-to-processor bandwidth by 31× with respect to transmitting a full pixel frame. To extract high-level features, an event-driven approach is applied to the sensor data and optimized for parallel runtime execution. A 57.7× system energy saving is reached through the event-driven approach with respect to frame-based processing, on a low-power MCU node. The near-threshold parallel processor further reduces the processing energy cost by 6.64×, achieving an overall system energy cost of 1.79 μJ per frame, which is 21.8× and up to 383× lower than, respectively, an event-based imaging system based on an asynchronous visual sensor and a traditional frame-based smart visual sensor.", "title": "" } ]
[ { "docid": "5b6daefbefd44eea4e317e673ad91da3", "text": "A three-dimensional (3-D) thermogram can provide spatial information; however, it is rarely applied because it lacks an accurate method in obtaining the intrinsic and extrinsic parameters of an infrared (IR) camera. Conventional methods cannot be used for such calibration because an IR camera cannot capture visible calibration patterns. Therefore, in the current study, a trinocular vision system composed of two visible cameras and an IR camera is constructed and a calibration board with miniature bulbs is designed. The two visible cameras compose a binocular vision system that obtains 3-D information from the miniature bulbs while the IR camera captures the calibration board to obtain the two dimensional subpixel coordinates of miniature bulbs. The corresponding algorithm is proposed to calibrate the IR camera based on the gathered information. Experimental results show that the proposed calibration can accurately obtain the intrinsic and extrinsic parameters of the IR camera, and meet the requirements of its application.", "title": "" }, { "docid": "0b58503e8b2ccc606cb1b45f542ba97a", "text": "Fingerprint images generally either contain only a single fingerprint or a set of non-overlapped fingerprints (e.g., slap fingerprints). However, there are situations where more than one fingerprint overlap on each other. Such situations are frequently encountered when latent fingerprints are lifted from crime scenes or residue fingerprints are left on fingerprint sensors. Overlapped fingerprints constitute a serious challenge to existing fingerprint recognition techniques, since these techniques are designed under the assumption that fingerprints have been properly segmented. In this paper, a novel algorithm is proposed to separate overlapped fingerprints into component or individual fingerprints. We first use local Fourier transform to estimate an initial overlapped orientation field, which contains at most two candidate orientations at each location. Then relaxation labeling technique is employed to label each candidate orientation as one of two classes. Based on the labeling result, we separate the initial overlapped orientation field into two orientation fields. Finally, the two fingerprints are obtained by enhancing the overlapped fingerprint using Gabor filters tuned to these two component separated orientation fields, respectively. Experimental results indicate that the algorithm leads to a good separation of overlapped fingerprints.", "title": "" }, { "docid": "6021388395ddd784422a22d30dac8797", "text": "Introduction: The European Directive 2013/59/EURATOM requires patient radiation dose information to be included in the medical report of radiological procedures. To provide effective communication to the patient, it is necessary to first assess the patient's level of knowledge regarding medical exposure. The goal of this work is to survey patients’ current knowledge level of both medical exposure to ionizing radiation and professional disciplines and communication means used by patients to garner information. Material and Methods: A questionnaire was designed comprised of thirteen questions: 737 patients participated in the survey. The data were analysed based on population age, education, and number of radiological procedures received in the three years prior to survey. Results: A majority of respondents (56.4%) did not know which modality uses ionizing radiation. 
74.7% had never discussed with healthcare professionals the risk concerning their medical radiological procedures. 70.1% were not aware of the professionals that have expertise to discuss the use of ionizing radiation for medical purposes, and 84.7% believe it is important to have the radiation dose information stated in the medical report. Conclusion: Patients agree with new regulations that it is important to know the radiation level related to the medical exposure, but there is little awareness in terms of which modalities use X-Rays and the professionals and channels that can help them to better understand the exposure information. To plan effective communication, it is essential to devise methods and adequate resources for key professionals (medical physicists, radiologists, referring physicians) to convey correct and effective information.", "title": "" }, { "docid": "76547fb01f5d9ede8731a3c22a69ec87", "text": "This paper explores the use of monads to structure functional programs. No prior knowledge of monads or category theory is required.\nMonads increase the ease with which programs may be modified. They can mimic the effect of impure features such as exceptions, state, and continuations; and also provide effects not easily achieved with such features. The types of a program reflect which effects occur.\nThe first section is an extended example of the use of monads. A simple interpreter is modified to support various extra features: error messages, state, output, and non-deterministic choice. The second section describes the relation between monads and the continuation-passing style. The third section sketches how monads are used in a compiler for Haskell that is written in Haskell.", "title": "" }, { "docid": "60161ef0c46b4477f0cf35356bc3602c", "text": "Differential privacy is a formal mathematical framework for quantifying and managing privacy risks. It provides provable privacy protection against a wide range of potential attacks, including those
currently unforeseen. Differential privacy is primarily studied in the context of the collection, analysis, and release of aggregate statistics. These range from simple statistical estimations, such as averages, to machine learning. Tools for differentially private analysis are now in early stages of implementation and use across a variety of academic, industry, and government settings. Interest in the concept is growing among potential users of the tools, as well as within legal and policy communities, as it holds promise as a potential approach to satisfying legal requirements for privacy protection when handling personal information. In particular, differential privacy may be seen as a technical solution for analyzing and sharing data while protecting the privacy of individuals in accordance with existing legal or policy requirements for de-identification or disclosure limitation. This primer seeks to introduce the concept of differential privacy and its privacy implications to non-technical audiences. It provides a simplified and informal, but mathematically accurate, description of differential privacy. Using intuitive illustrations and limited mathematical formalism, it discusses the definition of differential privacy, how differential privacy addresses privacy risks, how differentially private analyses are constructed, and how such analyses can be used in practice. A series of illustrations is used to show how practitioners and policymakers can conceptualize the guarantees provided by differential privacy. These illustrations are also used to explain related concepts, such as composition (the accumulation of risk across multiple analyses), privacy loss parameters, and privacy budgets. This primer aims to provide a foundation that can guide future decisions when analyzing and sharing statistical data about individuals, informing individuals about the privacy protection they will be afforded, and designing policies and regulations for robust privacy protection.", "title": "" }, { "docid": "7981acb4e72343a960803761929f4179", "text": "DIBCO 2017 is the international Competition on Document Image Binarization organized in conjunction with the ICDAR 2017 conference. 
The general objective of the contest is to identify current advances in document image binarization of machine-printed and handwritten document images using performance evaluation measures that are motivated by document image analysis and recognition requirements. This paper describes the competition details including the evaluation measures used as well as the performance of the 26 submitted methods along with a brief description of each method.", "title": "" }, { "docid": "f8daf84baa19c438d22a5274f0393f08", "text": "We describe and evaluate methods for learning to forecast forthcoming events of interest from a corpus containing 22 years of news stories. We consider the examples of identifying significant increases in the likelihood of disease outbreaks, deaths, and riots in advance of the occurrence of these events in the world. We provide details of methods and studies, including the automated extraction and generalization of sequences of events from news corpora and multiple web resources. We evaluate the predictive power of the approach on real-world events withheld from the system.", "title": "" }, { "docid": "799c839fad857c1ba90a9905f1b1d544", "text": "Much of the research published in the property discipline consists of work utilising quantitative methods. While research gained using quantitative methods, if appropriately designed and rigorous, leads to results which are typically generalisable and quantifiable, it does not allow for a rich and in-depth understanding of a phenomenon. This is especially so if a researcher’s aim is to uncover the issues or factors underlying that phenomenon. Such an aim would require using a qualitative research methodology, and possibly an interpretive as opposed to a positivist theoretical perspective. The purpose of this paper is to provide a general overview of qualitative methodologies with the aim of encouraging a broadening of methodological approaches to overcome the positivist methodological bias which has the potential of inhibiting property behavioural research.", "title": "" }, { "docid": "e0d040efd131db568d875b80c6adc111", "text": "Familism is a cultural value that emphasizes interdependent family relationships that are warm, close, and supportive. We theorized that familism values can be beneficial for romantic relationships and tested whether (a) familism would be positively associated with romantic relationship quality and (b) this association would be mediated by less attachment avoidance. Evidence indicates that familism is particularly relevant for U.S. Latinos but is also relevant for non-Latinos. Thus, we expected to observe the hypothesized pattern in Latinos and explored whether the pattern extended to non-Latinos of European and East Asian cultural background. A sample of U.S. participants of Latino (n 1⁄4 140), European (n 1⁄4 176), and East Asian (n 1⁄4 199) cultural background currently in a romantic relationship completed measures of familism, attachment, and two indices of romantic relationship quality, namely, partner support and partner closeness. As predicted, higher familism was associated with higher partner support and partner closeness, and these associations were mediated by lower attachment avoidance in the Latino sample. This pattern was not observed in the European or East Asian background samples. The implications of familism for relationships and psychological processes relevant to relationships in Latinos and non-Latinos are discussed. 
", "title": "" }, { "docid": "4ea8351c57e4581bfdab4c7cd357c90a", "text": "Hierarchies have long been used for organization, summarization, and access to information. In this paper we define summarization in terms of a probabilistic language model and use the definition to explore a new technique for automatically generating topic hierarchies by applying a graph-theoretic algorithm, which is an approximation of the Dominating Set Problem. The algorithm efficiently chooses terms according to a language model. We compare the new technique to previous methods proposed for constructing topic hierarchies including subsumption and lexical hierarchies, as well as the top TF.IDF terms. Our results show that the new technique consistently performs as well as or better than these other techniques. They also show the usefulness of hierarchies compared with a list of terms.", "title": "" }, { "docid": "919f42363fed69dc38eba0c46be23612", "text": "Large amounts of heterogeneous medical data have become available in various healthcare organizations (payers, providers, pharmaceuticals). Those data could be an enabling resource for deriving insights for improving care delivery and reducing waste. The enormity and complexity of these datasets present great challenges in analyses and subsequent applications to a practical clinical environment. In this tutorial, we introduce the characteristics and related mining challenges on dealing with big medical data. Many of those insights come from the medical informatics community, which is highly related to data mining but focuses on biomedical specifics. We survey various related papers from data mining venues as well as medical informatics venues to share with the audiences key problems and trends in healthcare analytics research, with different applications ranging from clinical text mining, predictive modeling, survival analysis, patient similarity, genetic data analysis, and public health. The tutorial will include several case studies dealing with some of the important healthcare applications.", "title": "" }, { "docid": "9694bc859dd5295c40d36230cf6fd1b9", "text": "In the past two decades, the synthetic style and fashion drug \"crystal meth\" (\"crystal\", \"meth\"), chemically representing the crystalline form of methamphetamine hydrochloride, has become more and more popular in the United States, in Eastern Europe, and just recently in Central and Western Europe. \"Meth\" is cheap, easy to synthesize and to market, and has an extremely high potential for abuse and dependence. As a strong sympathomimetic, \"meth\" has the potency to switch off hunger, fatigue and pain while simultaneously increasing physical and mental performance. The most relevant side effects are heart and circulatory complaints, severe psychotic attacks, personality changes, and progressive neurodegeneration. 
Another effect is \"meth mouth\", defined as serious tooth and oral health damage after long-standing \"meth\" abuse; this condition may become increasingly relevant in dentistry and oral- and maxillofacial surgery. There might be an association between general methamphetamine abuse and the development of osteonecrosis, similar to the medication-related osteonecrosis of the jaws (MRONJ). Several case reports concerning \"meth\" patients after tooth extractions or oral surgery have presented clinical pictures similar to MRONJ. This overview summarizes the most relevant aspect concerning \"crystal meth\" abuse and \"meth mouth\".", "title": "" }, { "docid": "d2ccb98fab55a9870a7018df3817337c", "text": "This paper focuses on the design, modelling and hovering control of a tail-sitter with single thrust-vectored propeller which possesses the inherent advantages of both fixed wing and rotary wing unmanned aerial vehicles (UAVs). The developed tail-sitter requires only the same number of actuators as a normal fixed wing aircraft and achieves attitude control through deflections of the thrust-vectored propeller and ailerons. Thrust vectoring is realized by mounting a simple gimbal mechanism beneath the propeller motor. Both the thrust vector model and aerodynamics model are established, which leads to a complete nonlinear model of the tail-sitter in hovering state. Quaternion is applied for attitude description to avoid the singularity problem and improve computation efficiency. Through reasonable assumptions, a simplified model of the tail-sitter is obtained, based on which a backstepping controller is designed using the Lyapunov stability theory. Experimental results are presented to demonstrate the effectiveness of the proposed control scheme.", "title": "" }, { "docid": "faf4eeaaf3e8516ac65543c0bc5e50d6", "text": "Service Oriented Architecture facilitates more feature as compared to legacy architecture which makes this architecture widely accepted by the industry. Service oriented architecture provides feature like reusability, composability, distributed deployment. Service of SOA is governed by SOA governance board in which they provide approval to create the services and also provide space to expose the particular services. Sometime many services are kept in a repository which creates service identification issue. Service identification is one of the most critical aspects in service oriented architecture. The services must be defined or identified keeping reuse and usage in different business contexts in mind. Rigorous review of Identified service should be done prior to development of the services. Identification of the authenticated service is challenging to development teams due to several reasons such as lack of business process documentation, lack of expert analyst, and lack of business executive involvement, lack of reuse of services, lack of right decision to choose the appropriate service. In some of the cases we have replica of same service exist, which creates difficulties in service identification. Existing design approaches of SOA doesn't take full advantage whereas proposed model is compatible more advantageous and increase the performance of the services. This paper proposes a model which will help in clustering the service repository based on service functionality. Service identification will be easy if we follow distributed repository based on functionality for our services. 
Generally in case of web services where service response time should be minimal, searching in whole repository delays response time. The proposed model will reduce the response time of the services and will also helpful in identifying the correct services within the specified time.", "title": "" }, { "docid": "49df721b5115ad7d3f91b6212dbb585e", "text": "We first present a minimal feature set for transition-based dependency parsing, continuing a recent trend started by Kiperwasser and Goldberg (2016a) and Cross and Huang (2016a) of using bi-directional LSTM features. We plug our minimal feature set into the dynamic-programming framework of Huang and Sagae (2010) and Kuhlmann et al. (2011) to produce the first implementation of worst-case Opn3q exact decoders for arc-hybrid and arceager transition systems. With our minimal features, we also present Opn3q global training methods. Finally, using ensembles including our new parsers, we achieve the best unlabeled attachment score reported (to our knowledge) on the Chinese Treebank and the “second-best-in-class” result on the English Penn Treebank. Publication venue: EMNLP 2017", "title": "" }, { "docid": "39828596907746de12a31885c6ce7643", "text": "Hypervelocity (~1000-km/s) impact of a macroscopic particle (macron) has profound influences in high energy density physics and inertial fusion energy researches. As the charge-mass ratio of macrons is too low, the length of an electrostatic accelerator can reach hundreds to thousands of kilometers, rendering macron acceleration impractical. To reduce the accelerator length, a much higher electric field than what the most powerful klystrons can provide is desired. One practical choice may be the high-intensity charged particle beam ldquoblowing-piperdquo approach. In this approach, a high-intensity (~10-kA) medium-energy (0.5-2-MeV) long-pulse (10-1000-mus) positively charged ion beam shots to a heavily charged millimeter-size macron to create a local high-strength electric field (~1010 V/m), accelerating the macron efficiently. We will discuss the physics and challenges involved in this concept and give an illustrative simulation.", "title": "" }, { "docid": "67fdad898361edd4cf63b525b8af8b48", "text": "Traffic data is a fundamental component for applications and researches in transportation systems. However, real traffic data collected from loop detectors or other channels often include missing data which affects the relative applications and researches. This paper proposes an approach based on deep learning to impute the missing traffic data. The proposed approach treats the traffic data including observed data and missing data as a whole data item and restores the complete data with the deep structural network. The deep learning approach can discover the correlations contained in the data structure by a layer-wise pre-training and improve the imputation accuracy by conducting a fine-tuning afterwards. We analyze the imputation patterns that can be realized with the proposed approach and conduct a series of experiments. The results show that the proposed approach can keep a stable error under different traffic data missing rate. Deep learning is promising in the field of traffic data imputation.", "title": "" }, { "docid": "e82681b5140f3a9b283bbd02870f18d5", "text": "Employee turnover has been identified as a key issue for organizations because of its adverse impact on work place productivity and long term growth strategies. 
To solve this problem, organizations use machine learning techniques to predict employee turnover. Accurate predictions enable organizations to take action for retention or succession planning of employees. However, the data for this modeling problem comes from HR Information Systems (HRIS); these are typically under-funded compared to the Information Systems of other domains in the organization which are directly related to its priorities. This leads to the prevalence of noise in the data that renders predictive models prone to over-fitting and hence inaccurate. This is the key challenge that is the focus of this paper, and one that has not been addressed historically. The novel contribution of this paper is to explore the application of Extreme Gradient Boosting (XGBoost) technique which is more robust because of its regularization formulation. Data from the HRIS of a global retailer is used to compare XGBoost against six historically used supervised classifiers and demonstrate its significantly higher accuracy for predicting employee turnover. Keywords—turnover prediction; machine learning; extreme gradient boosting; supervised classification; regularization", "title": "" }, { "docid": "c9ff6e6c47b6362aaba5f827dd1b48f2", "text": "IEC 62056 for upper-layer protocols and IEEE 802.15.4g for communication infrastructure are promising means of advanced metering infrastructure (AMI) in Japan. However, since the characteristics of a communication system based on these combined technologies have yet to be identified, this paper gives the communication failure rates and latency acquired by calculations. In addition, the calculation results suggest some adequate AMI configurations, and show its extensibility in consideration of the usage environment.", "title": "" }, { "docid": "9e10e151b9e032e79296b35d09d45bbf", "text": "PURPOSE\nAutomated segmentation of breast and fibroglandular tissue (FGT) is required for various computer-aided applications of breast MRI. Traditional image analysis and computer vision techniques, such atlas, template matching, or, edge and surface detection, have been applied to solve this task. However, applicability of these methods is usually limited by the characteristics of the images used in the study datasets, while breast MRI varies with respect to the different MRI protocols used, in addition to the variability in breast shapes. All this variability, in addition to various MRI artifacts, makes it a challenging task to develop a robust breast and FGT segmentation method using traditional approaches. Therefore, in this study, we investigated the use of a deep-learning approach known as \"U-net.\"\n\n\nMATERIALS AND METHODS\nWe used a dataset of 66 breast MRI's randomly selected from our scientific archive, which includes five different MRI acquisition protocols and breasts from four breast density categories in a balanced distribution. To prepare reference segmentations, we manually segmented breast and FGT for all images using an in-house developed workstation. We experimented with the application of U-net in two different ways for breast and FGT segmentation. In the first method, following the same pipeline used in traditional approaches, we trained two consecutive (2C) U-nets: first for segmenting the breast in the whole MRI volume and the second for segmenting FGT inside the segmented breast. 
In the second method, we used a single 3-class (3C) U-net, which performs both tasks simultaneously by segmenting the volume into three regions: nonbreast, fat inside the breast, and FGT inside the breast. For comparison, we applied two existing and published methods to our dataset: an atlas-based method and a sheetness-based method. We used Dice Similarity Coefficient (DSC) to measure the performances of the automated methods, with respect to the manual segmentations. Additionally, we computed Pearson's correlation between the breast density values computed based on manual and automated segmentations.\n\n\nRESULTS\nThe average DSC values for breast segmentation were 0.933, 0.944, 0.863, and 0.848 obtained from 3C U-net, 2C U-nets, atlas-based method, and sheetness-based method, respectively. The average DSC values for FGT segmentation obtained from 3C U-net, 2C U-nets, and atlas-based methods were 0.850, 0.811, and 0.671, respectively. The correlation between breast density values based on 3C U-net and manual segmentations was 0.974. This value was significantly higher than 0.957 as obtained from 2C U-nets (P < 0.0001, Steiger's Z-test with Bonferoni correction) and 0.938 as obtained from atlas-based method (P = 0.0016).\n\n\nCONCLUSIONS\nIn conclusion, we applied a deep-learning method, U-net, for segmenting breast and FGT in MRI in a dataset that includes a variety of MRI protocols and breast densities. Our results showed that U-net-based methods significantly outperformed the existing algorithms and resulted in significantly more accurate breast density computation.", "title": "" } ]
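The breast-MRI passage above scores segmentations with the Dice Similarity Coefficient (DSC). As an illustrative sketch only (not code from the cited study; the NumPy dependency, function name and toy masks are assumptions), the overlap between two binary masks can be computed as:

```python
import numpy as np

def dice_coefficient(pred_mask: np.ndarray, ref_mask: np.ndarray, eps: float = 1e-8) -> float:
    """Dice Similarity Coefficient between a predicted and a reference binary mask."""
    pred = pred_mask.astype(bool)
    ref = ref_mask.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    # DSC = 2*|A and B| / (|A| + |B|); eps guards against two empty masks.
    return float(2.0 * intersection / (pred.sum() + ref.sum() + eps))

# Toy example.
pred = np.array([[1, 1, 0], [0, 1, 0]])
ref = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_coefficient(pred, ref), 3))  # 0.667
```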
scidocsrr
b7200ac2ba1ec8aee7baf9428f1837ce
Poster Abstract: A 6LoWPAN Model for OMNeT++
[ { "docid": "64dc61e853f41654dba602c7362546b5", "text": "This paper introduces our work on the communication stack of wireless sensor networks. We present the IPv6 approach for wireless sensor networks called 6LoWPAN in its IETF charter. We then compare the different implementations of 6LoWPAN subsets for several sensor nodes platforms. We present our approach for the 6LoWPAN implementation which aims to preserve the advantages of modularity while keeping a small memory footprint and a good efficiency.", "title": "" }, { "docid": "a231d6254a136a40625728d7e14d7844", "text": "This document specifies an Internet standards track protocol for the Internet community, and requests discussion and suggestions for improvements. Please refer to the current edition of the \"Internet Official Protocol Standards\" (STD 1) for the standardization state and status of this protocol. Distribution of this memo is unlimited. Abstract This document describes the frame format for transmission of IPv6 packets and the method of forming IPv6 link-local addresses and statelessly autoconfigured addresses on IEEE 802.15.4 networks. Additional specifications include a simple header compression scheme using shared context and provisions for packet delivery in IEEE 802.15.4 meshes.", "title": "" } ]
[ { "docid": "9af22f6a1bbb4cbb13508b654e5fd7a5", "text": "We present a 3-D correspondence method to match the geometric extremities of two shapes which are partially isometric. We consider the most general setting of the isometric partial shape correspondence problem, in which shapes to be matched may have multiple common parts at arbitrary scales as well as parts that are not similar. Our rank-and-vote-and-combine algorithm identifies and ranks potentially correct matches by exploring the space of all possible partial maps between coarsely sampled extremities. The qualified top-ranked matchings are then subjected to a more detailed analysis at a denser resolution and assigned with confidence values that accumulate into a vote matrix. A minimum weight perfect matching algorithm is finally iterated to combine the accumulated votes into an optimal (partial) mapping between shape extremities, which can further be extended to a denser map. We test the performance of our method on several data sets and benchmarks in comparison with state of the art.", "title": "" }, { "docid": "c252cca4122984aac411a01ce28777f7", "text": "An image-based visual servo control is presented for an unmanned aerial vehicle (UAV) capable of stationary or quasi-stationary flight with the camera mounted onboard the vehicle. The target considered consists of a finite set of stationary and disjoint points lying in a plane. Control of the position and orientation dynamics is decoupled using a visual error based on spherical centroid data, along with estimations of the linear velocity and the gravitational inertial direction extracted from image features and an embedded inertial measurement unit. The visual error used compensates for poor conditioning of the image Jacobian matrix by introducing a nonhomogeneous gain term adapted to the visual sensitivity of the error measurements. A nonlinear controller, that ensures exponential convergence of the system considered, is derived for the full dynamics of the system using control Lyapunov function design techniques. Experimental results on a quadrotor UAV, developed by the French Atomic Energy Commission, demonstrate the robustness and performance of the proposed control strategy.", "title": "" }, { "docid": "2f5c25f08f360381ea3d46c8d66694f7", "text": "Router syslogs are messages that a router logs to describe a wide range of events observed by it. They are considered one of the most valuable data sources for monitoring network health and for trou- bleshooting network faults and performance anomalies. However, router syslog messages are essentially free-form text with only a minimal structure, and their formats vary among different vendors and router OSes. Furthermore, since router syslogs are aimed for tracking and debugging router software/hardware problems, they are often too low-level from network service management perspectives. Due to their sheer volume (e.g., millions per day in a large ISP network), detailed router syslog messages are typically examined only when required by an on-going troubleshooting investigation or when given a narrow time range and a specific router under suspicion. Automated systems based on router syslogs on the other hand tend to focus on a subset of the mission critical messages (e.g., relating to network fault) to avoid dealing with the full diversity and complexity of syslog messages. 
In this project, we design a Sys-logDigest system that can automatically transform and compress such low-level minimally-structured syslog messages into meaningful and prioritized high-level network events, using powerful data mining techniques tailored to our problem domain. These events are three orders of magnitude fewer in number and have much better usability than raw syslog messages. We demonstrate that they provide critical input to network troubleshooting, and net- work health monitoring and visualization.", "title": "" }, { "docid": "7866c0cdaa038f08112e629580c445cb", "text": "Cumulative exposure to repetitive and forceful activities may lead to musculoskeletal injuries which not only reduce workers’ efficiency and productivity, but also affect their quality of life. Thus, widely accessible techniques for reliable detection of unsafe muscle force exertion levels for human activity is necessary for their well-being. However, measurement of force exertion levels is challenging and the existing techniques pose a great challenge as they are either intrusive, interfere with humanmachine interface, and/or subjective in the nature, thus are not scalable for all workers. In this work, we use face videos and the photoplethysmography (PPG) signals to classify force exertion levels of 0%, 50%, and 100% (representing rest, moderate effort, and high effort), thus providing a non-intrusive and scalable approach. Efficient feature extraction approaches have been investigated, including standard deviation of the movement of different landmarks of the face, distances between peaks and troughs in the PPG signals. We note that the PPG signals can be obtained from the face videos, thus giving an efficient classification algorithm for the force exertion levels using face videos. Based on the data collected from 20 subjects, features extracted from the face videos give 90% accuracy in classification among the 100% and the combination of 0% and 50% datasets. Further combining the PPG signals provide 81.7% accuracy. The approach is also shown to be robust to the correctly identify force level when the person is talking, even though such datasets are not included in the training.", "title": "" }, { "docid": "105fe384f9dfb13aef82f4ff16f87821", "text": "Dengue hemorrhagic fever (DHF), a severe manifestation of dengue viral infection that can cause severe bleeding, organ impairment, and even death, affects between 15,000 and 105,000 people each year in Thailand. While all Thai provinces experience at least one DHF case most years, the distribution of cases shifts regionally from year to year. Accurately forecasting where DHF outbreaks occur before the dengue season could help public health officials prioritize public health activities. We develop statistical models that use biologically plausible covariates, observed by April each year, to forecast the cumulative DHF incidence for the remainder of the year. We perform cross-validation during the training phase (2000-2009) to select the covariates for these models. A parsimonious model based on preseason incidence outperforms the 10-y median for 65% of province-level annual forecasts, reduces the mean absolute error by 19%, and successfully forecasts outbreaks (area under the receiver operating characteristic curve = 0.84) over the testing period (2010-2014). We find that functions of past incidence contribute most strongly to model performance, whereas the importance of environmental covariates varies regionally. 
This work illustrates that accurate forecasts of dengue risk are possible in a policy-relevant timeframe.", "title": "" }, { "docid": "791cc656afc2d36e1f491c5a80b77b97", "text": "With the wide diffusion of smartphones and their usage in a plethora of processes and activities, these devices have been handling an increasing variety of sensitive resources. Attackers are hence producing a large number of malware applications for Android (the most spread mobile platform), often by slightly modifying existing applications, which results in malware being organized in families. Some works in the literature showed that opcodes are informative for detecting malware, not only in the Android platform. In this paper, we investigate if frequencies of ngrams of opcodes are effective in detecting Android malware and if there is some significant malware family for which they are more or less effective. To this end, we designed a method based on state-of-the-art classifiers applied to frequencies of opcodes ngrams. Then, we experimentally evaluated it on a recent dataset composed of 11120 applications, 5560 of which are malware belonging to several different families. Results show that an accuracy of 97% can be obtained on the average, whereas perfect detection rate is achieved for more than one malware family.", "title": "" }, { "docid": "4b68d3c94ef785f80eac9c4c6ca28cfe", "text": "We address the problem of recovering a common set of covariates that are relevant simultaneously to several classification problems. By penalizing the sum of l2-norms of the blocks of coefficients associated with each covariate across different classification problems, similar sparsity patterns in all models are encouraged. To take computational advantage of the sparsity of solutions at high regularization levels, we propose a blockwise path-following scheme that approximately traces the regularization path. As the regularization coefficient decreases, the algorithm maintains and updates concurrently a growing set of covariates that are simultaneously active for all problems. We also show how to use random projections to extend this approach to the problem of joint subspace selection, where multiple predictors are found in a common low-dimensional subspace. We present theoretical results showing that this random projection approach converges to the solution yielded by trace-norm regularization. Finally, we present a variety of experimental results exploring joint covariate selection and joint subspace selection, comparing the path-following approach to competing algorithms in terms of prediction accuracy and running time.", "title": "" }, { "docid": "242a2f64fc103af641320c1efe338412", "text": "The availability of data on digital traces is growing to unprecedented sizes, but inferring actionable knowledge from large-scale data is far from being trivial. This is especially important for computational finance, where digital traces of human behaviour offer a great potential to drive trading strategies. We contribute to this by providing a consistent approach that integrates various datasources in the design of algorithmic traders. This allows us to derive insights into the principles behind the profitability of our trading strategies. We illustrate our approach through the analysis of Bitcoin, a cryptocurrency known for its large price fluctuations. In our analysis, we include economic signals of volume and price of exchange for USD, adoption of the Bitcoin technology and transaction volume of Bitcoin. 
We add social signals related to information search, word of mouth volume, emotional valence and opinion polarization as expressed in tweets related to Bitcoin for more than 3 years. Our analysis reveals that increases in opinion polarization and exchange volume precede rising Bitcoin prices, and that emotional valence precedes opinion polarization and rising exchange volumes. We apply these insights to design algorithmic trading strategies for Bitcoin, reaching very high profits in less than a year. We verify this high profitability with robust statistical methods that take into account risk and trading costs, confirming the long-standing hypothesis that trading-based social media sentiment has the potential to yield positive returns on investment.", "title": "" }, { "docid": "7e08a713a97f153cdd3a7728b7e0a37c", "text": "The availability of long circulating, multifunctional polymers is critical to the development of drug delivery systems and bioconjugates. The ease of synthesis and functionalization make linear polymers attractive but their rapid clearance from circulation compared to their branched or cyclic counterparts, and their high solution viscosities restrict their applications in certain settings. Herein, we report the unusual compact nature of high molecular weight (HMW) linear polyglycerols (LPGs) (LPG - 100; M(n) - 104 kg mol(-1), M(w)/M(n) - 1.15) in aqueous solutions and its impact on its solution properties, blood compatibility, cell compatibility, in vivo circulation, biodistribution and renal clearance. The properties of LPG have been compared with hyperbranched polyglycerol (HPG) (HPG-100), linear polyethylene glycol (PEG) with similar MWs. The hydrodynamic size and the intrinsic viscosity of LPG-100 in water were considerably lower compared to PEG. The Mark-Houwink parameter of LPG was almost 10-fold lower than that of PEG. LPG and HPG demonstrated excellent blood and cell compatibilities. Unlike LPG and HPG, HMW PEG showed dose dependent activation of blood coagulation, platelets and complement system, severe red blood cell aggregation and hemolysis, and cell toxicity. The long blood circulation of LPG-100 (t(1/2β,) 31.8 ± 4 h) was demonstrated in mice; however, it was shorter compared to HPG-100 (t(1/2β,) 39.2 ± 8 h). The shorter circulation half life of LPG-100 was correlated with its higher renal clearance and deformability. Relatively lower organ accumulation was observed for LPG-100 and HPG-100 with some influence of on the architecture of the polymers. Since LPG showed better biocompatibility profiles, longer in vivo circulation time compared to PEG and other linear drug carrier polymers, and has multiple functionalities for conjugation, makes it a potential candidate for developing long circulating multifunctional drug delivery systems similar to HPG.", "title": "" }, { "docid": "cbb5856d08a9f8a99b2b6a48ad6fc573", "text": "Programmable Logic Controller (PLC) technology plays an important role in the automation architectures of several critical infrastructures such as Industrial Control Systems (ICS), controlling equipment in contexts such as chemical processes, factory lines, power production plants or power distribution grids, just to mention a few examples. Despite their importance, PLCs constitute one of the weakest links in ICS security, frequently due to reasons such as the absence of secure communication mechanisms, authenticated access or system integrity checks. 
While events such as the Stuxnet worm have raised awareness of this problem, industry has reacted slowly, either due to reliability or cost concerns. This paper introduces the Shadow Security Unit, a low-cost device deployed in parallel with a PLC or Remote Terminal Unit (RTU), capable of transparently intercepting its communication control channels and physical process I/O lines to continuously assess its security and operational status. The proposed device does not require significant changes to the existing control network, being able to work standalone or integrated within an ICS protection framework.", "title": "" }, { "docid": "2b1048b3bdb52c006437b18d7b458871", "text": "A road interpretation module is presented, which is part of a real-time vehicle guidance system for autonomous driving. Based on bifocal computer vision, the complete system is able to drive a vehicle on marked or unmarked roads, to detect obstacles, and to react appropriately. The hardware is a network of 23 transputers, organized in modular clusters. Parallel modules performing image analysis, feature extraction, object modelling, sensor data integration and vehicle control are organized in hierarchical levels. The road interpretation module is based on the principle of recursive state estimation by Kalman filter techniques. Internal 4-D models of the road, vehicle position, and orientation are updated using data produced by the image-processing module. The system has been implemented on two vehicles (VITA and VaMoRs) and demonstrated in the framework of PROMETHEUS, where autonomous driving through narrow curves and lane changing were demonstrated. Meanwhile, the system has been tested on public roads in real traffic situations, including travel on a German Autobahn autonomously at speeds up to 85 km/h.", "title": "" }, { "docid": "a33f862d0b7dfde7b9f18aa193db9acf", "text": "Phytoremediation is an important process in the removal of heavy metals and contaminants from the soil and the environment. Plants can help clean many types of pollution, including metals, pesticides, explosives, and oil. Phytoextraction is a major phytoremediation technique: plants or algae take up contaminants from soil, sediment or water, and the contaminants are then removed by harvesting the plant biomass. Heavy metals are generally understood as elements with a density greater than 5 g cm-3, in particular metals such as cadmium, lead and mercury. Among the different pollutants, cadmium (Cd) is the most toxic heavy metal for plants and animals. Mustard (Brassica juncea L.) and sunflower (Helianthus annuus L.) produce high biomass and grow rapidly, which makes them suitable species for phytoextraction, because their high biomass yield can compensate for a comparatively low accumulation of cadmium. Chelators, such as acetic acid and ethylenediaminetetraacetic acid (EDTA), can be used to increase the solubility of metals in the soil and thereby facilitate their availability to plants and their uptake and transport from the roots into the shoots of vascular plants.
Introduction. The term phytoremediation combines the Greek phyto (plant) and the Latin remedium (restoring balance); it describes the treatment of environmental problems through the use of plants that mitigate the problem without the need to excavate the contaminated material and dispose of it elsewhere, relying on controlled interactions of plants with groundwater and with organic and inorganic contaminants at specific locations to achieve the intended remediation targets (Landmeyer, 2011). Phytoremediation is the use of green plants to remove contaminants from the environment or render them harmless. The technology uses plants to draw heavy metals out of the soil through the roots; acting much like vacuum cleaners, these unique plants must be able to withstand and survive high levels of heavy metals in the soil (Baker, 2000). Population growth and increasing industrialization have caused water and soil contamination that is harmful to the environment as well as to human health. Worldwide, contamination of soil by heavy metals has become a very serious issue, so the removal of these heavy metals from the soil is necessary to protect soil and human health. Both inorganic and organic contaminants, such as petroleum, heavy metals, agricultural waste, pesticides and fertilizers, are the main sources that deteriorate soil health (Chirakkara et al., 2016). With respect to their role in biological systems, heavy metals can be divided into two groups, essential and non-essential. Heavy metals that play a vital role in the biochemical and physiological functions of some living organisms are called essential, for example zinc (Zn), nickel (Ni) and copper (Cu) (Cempel and Nikel, 2006). Heavy metals that play no role in biochemical or physiological functions are called non-essential, such as mercury (Hg), lead (Pb), arsenic (As) and cadmium (Cd) (Dabonne et al., 2010). Cadmium (Cd) is a non-essential heavy metal that is more toxic at very low concentrations than the other non-essential heavy metals; it is toxic to plants, humans and animals, and it causes serious human diseases through the food chain (Rafiq et al., 2014). Removal of Cd from the soil is therefore an important problem (Neilson and Rajakaruna, 2015). Several methods are used to remove Cd from the soil, including physical, chemical and physicochemical approaches that increase the soil pH (Liu et al., 2015). The main sources of Cd contamination in the soil and environment are automobile emissions, batteries and commercial fertilizers (Liu et al., 2015). Phytoremediation is a promising technique for removing heavy metals from the soil (Ma et al., 2011). Plants take up heavy metals through their roots and change soil properties in ways that help increase soil fertility (Mench et al., 2009). Plants can help clean many types of pollution, including metals, pesticides, explosives and oil, and they also help prevent wind, rain and groundwater from carrying pollution off site to other areas. Phytoremediation works best in locations with low to moderate amounts of pollution. Plants absorb harmful chemicals from the soil when their roots take in water and nutrients from contaminated soils, streams and groundwater. 
Once inside the plant, chemicals can be stored in the roots, stems or leaves, changed into less harmful chemicals within the plant, or changed into gases that are released into the air (US Environmental Protection Agency, 2001). Phytoremediation is the direct use of living green plants to stabilize or reduce pollution in soil, sludge, sediment, surface water or groundwater; sites with low concentrations of pollutants over large areas and at shallow depths offer the most favorable circumstances for this kind of treatment (US Environmental Protection Agency, 2011). Phytoremediation is the use of plants for the treatment of contaminated soil, sediment and water, and it is best applied at sites with shallow contamination by persistent organic, nutrient or metal pollutants. As an emerging technology for contaminated sites, it is attractive because of its low cost and versatility (Schnoor, 1997). It makes use of plants that can accumulate excessive amounts of metals while growing in contaminated soils (National Research Council, 1997). Phytoremediation addresses the concentration of pollutants in contaminated soil, water or air through plants that are able to contain, degrade or eliminate metals, pesticides, solvents, explosives, crude oil and its derivatives, and other contaminants in the media that contain them. Phytoremediation comprises several techniques, and the choice among them depends on factors such as soil type, contaminant type, soil depth and groundwater level, as well as the particular operating conditions and technology applied at the contaminated site (Hyman and Dupont, 2001). Techniques of phytoremediation. The techniques involved in phytoremediation include phytoextraction, phytostabilisation, phytotransformation, phytostimulation, phytovolatilization and rhizofiltration. Phytoextraction, also called phytoabsorption or phytoaccumulation, is the technique in which heavy metals are removed from the water and soil environment by uptake through the roots and are accumulated in the shoots (Rafati et al., 2011). Phytostabilisation, also known as phytoimmobilization, uses different types of plants to stabilize contaminants in the soil environment (Ali et al., 2013). This technique reduces the bioavailability and mobility of contaminants and thus helps prevent their movement into the food chain as well as into the groundwater (Erakhrumen, 2007). Phytostabilisation can stop the movement of heavy metals, but it is not a permanent solution for removing contamination from the soil; it is essentially a management approach for inactivating potentially toxic heavy metal contaminants in the soil environment (Vangronsveld et al., 2009).", "title": "" }, { "docid": "6cd301f1b6ffe64f95b7d63eb0356a87", "text": "The purpose of this study is to analyze factors affecting the online shopping behavior of consumers, which might be one of the most important issues in the e-commerce and marketing field. However, there is very limited knowledge about online consumer behavior because it is a complicated socio-technical phenomenon and involves too many factors. 
One of the objectives of this study is covering the shortcomings of previous studies that didn't examine main factors that influence on online shopping behavior. This goal has been followed by using a model examining the impact of perceived risks, infrastructural variables and return policy on attitude toward online shopping behavior and subjective norms, perceived behavioral control, domain specific innovativeness and attitude on online shopping behavior as the hypotheses of study. To investigate these hypotheses 200 questionnaires dispersed among online stores of Iran. Respondents to the questionnaire were consumers of online stores in Iran which randomly selected. Finally regression analysis was used on data in order to test hypothesizes of study. This study can be considered as an applied research from purpose perspective and descriptive-survey with regard to the nature and method (type of correlation). The study identified that financial risks and non-delivery risk negatively affected attitude toward online shopping. Results also indicated that domain specific innovativeness and subjective norms positively affect online shopping behavior. Furthermore, attitude toward online shopping positively affected online shopping behavior of consumers.", "title": "" }, { "docid": "00f31f21742a843ce6c4a00f3f6e6259", "text": "Recent developments in digital technologies bring about considerable business opportunities but also impose significant challenges on firms in all industries. While some industries, e.g., newspapers, have already profoundly reorganized the mechanisms of value creation, delivery, and capture during the course of digitalization (Karimi & Walter, 2015, 2016), many process-oriented and asset intensive industries have not yet fully evaluated and exploited the potential applications (Rigby, 2014). Although the process industries have successfully used advancements in technologies to optimize processes in the past (Kim et al., 2011), digitalization poses an unprecedented shift in technology that exceeds conventional technological evolution (Svahn et al., 2017). Driven by augmented processing power, connectivity of devices (IoT), advanced data analytics, and sensor technology, innovation activities in the process industries now break away from established innovation paths (Svahn et al., 2017; Tripsas, 2009). In contrast to prior innovations that were primarily bound to physical devices, new products are increasingly embedded into systems of value creation that span the physical and digital world (Parmar et al., 2014; Rigby, 2014; Yoo et al., 2010a). On this new playing field, firms and researchers are jointly interested in the organizational characteristics and capabilities that are required to gain a competitive advantage (e.g. Fink, 2011). Whereas prior studies cover the effect of digital transformation on innovation in various industries like newspaper (Karimi and Walter, 2015, 2016), automotive (Henfridsson and Yoo, 2014; Svahn et al., 2017), photography (Tripsas, 2009), and manufacturing (Jonsson et al., 2008), there is a relative dearth of studies that cover the impact of digital transformation in the process industries (Westergren and Holmström, 2012). 
The process industries are characterized by asset and research intensity, strong integration into physical locations, and often include value chains that are complex and feature aspects of rigidity (Lager Research Paper Digitalization in the process industries – Evidence from the German water industry", "title": "" }, { "docid": "6e76496dbe78bd7ffa9359a41dc91e69", "text": "US Supreme Court rulings concerning sanctions for juvenile offenders have drawn on the science of brain development and concluded that adolescents are inherently less mature than adults in ways that render them less culpable. This conclusion departs from arguments made in cases involving the mature minor doctrine, in which teenagers have been portrayed as comparable to adults in their capacity to make medical decisions. I attempt to reconcile these apparently incompatible views of adolescents' decision-making competence. Adolescents are indeed less mature than adults when making decisions under conditions that are characterized by emotional arousal and peer pressure, but adolescents aged 15 and older are just as mature as adults when emotional arousal is minimized and when they are not under the influence of peers, conditions that typically characterize medical decision-making. The mature minor doctrine, as applied to individuals 15 and older, is thus consistent with recent research on adolescent development.", "title": "" }, { "docid": "6cbcd5288423895c4aeff8524ca5ac6c", "text": "We report a quantitative analysis of the cross-utterance coordination observed in child-directed language, where successive utterances often overlap in a manner that makes their constituent structure more prominent, and describe the application of a recently published unsupervised algorithm for grammar induction to the largest available corpus of such language, producing a grammar capable of accepting and generating novel wellformed sentences. We also introduce a new corpus-based method for assessing the precision and recall of an automatically acquired generative grammar without recourse to human judgment. The present work sets the stage for the eventual development of more powerful unsupervised algorithms for language acquisition, which would make use of the coordination structures present in natural child-directed speech.", "title": "" }, { "docid": "12e2d86add1918393291ea55f99a44a0", "text": "Supervised classification algorithms aim at producing a learning model from a labeled training set. Various successful techniques have been proposed to solve the problem in the binary classification case. The multiclass classification case is more delicate, as many of the algorithms were introduced basically to solve binary classification problems. In this short survey we investigate the various techniques for solving the multiclass classification problem.", "title": "" }, { "docid": "910678cdd552fe5d0d2c288784ca550f", "text": "Livestock production today has become a very complex process since several requirements have to be combined such as: food safety, animal welfare, animal health, environmental impact and sustainability in a wider sense. The consequence is a growing need to balance many of these variables during the production process. In the past farmers were monitoring their animals in their daily work by normal audio-visual observation like ethologists still do in their research. Today however the number of animals per farm has increased so much that this has become impossible. 
Another problem is that visual observation never can be done continuously during 24 hours a day. One of the objectives of Precision Livestock Farming (PLF) is to develop the technology and the tools for the on-line monitoring of farm animals and this continuously during their life and in a fully automatic way. This technology will never replace the farmer but can support him as a tool that automatically and continuously delivers him quantitative information about the status of his animals. Like other living organisms farm animals are responding to their environment with several behavioural and physiological variables. Many sensors and sensing techniques are under development to measure such behavioural and biological responses of farm animals. This can be done by new sensors or by sound analysis, image analysis etc. A major problem to monitor animals is the fact that animals themselves are complex systems that are individually different and that are so called time varying dynamic systems since their behaviour and health status can change at any time. Another challenge for PLF is to develop reliable monitoring tools for such Complex Individual Time varying Dynamic systems (“CITD” systems). In this paper we will talk about what is PLF and what is the importance of PLF. Next we will explain the basic principles. Further we will show examples of monitoring tools by PLF such as on-line monitor for health status by analysing continuously the sound produced by pigs. Another example shows the on-line automatic identification of the behaviour of individual laying hens by continuous analysis of 2D images from a top view camera. Next we will demonstrate the potential of PLF for more efficient controlling of biological processes. Finally we will discuss how implementation might be realised and what risk and problems are. The technology that is already available and that is under development today can be used for efficient and continuous monitoring if an engineering approach is combined with the expertise of ethologists, physiologist, veterinarians who are familiar with the animal as a living organism.", "title": "" }, { "docid": "c7a8cd22ef67abcdeed13b86825e4d7e", "text": "Recent advancements in computer vision, multimedia and Internet of Things (IoT) have shown that human detection methods are useful for applications of intelligent transportation system in smart environment. However, detection of a human in real world remains a challenging problem. Histogram of oriented gradients (HOG) based human detection gives an emphasis towards finding an effective solution to the problems of significant changes in view point, fixed resolution human but are expensive to compute. The proposed algorithm aims to reduce the computations using approximation methods and adapts for varying scale. The features are modeled at different scales for training the classifier. Experiments have been conducted on human datasets to demonstrate the superior performance of the proposed approach in human detection and discussions are made to integrate and increase personalization for building smart environment using IoT.", "title": "" } ]
scidocsrr
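The last passage in the record above describes HOG-based human detection. A hedged sketch of the conventional HOG-plus-linear-SVM pedestrian detector available in OpenCV is shown below; the image path and detector parameters are assumptions, and this is the stock detector rather than the scale-adaptive variant the passage proposes.

```python
import cv2
import numpy as np

# Standard HOG descriptor with OpenCV's pre-trained linear SVM people detector.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

image = cv2.imread("street_scene.jpg")  # assumed input image path
if image is None:
    raise FileNotFoundError("street_scene.jpg not found")

# Sliding-window detection over an image pyramid (multi-scale).
rects, weights = hog.detectMultiScale(image, winStride=(8, 8), padding=(8, 8), scale=1.05)

for (x, y, w, h), score in zip(np.asarray(rects).reshape(-1, 4), np.asarray(weights).ravel()):
    cv2.rectangle(image, (int(x), int(y)), (int(x + w), int(y + h)), (0, 255, 0), 2)
    print(f"person candidate at ({x}, {y}, {w}, {h}) score={score:.2f}")

cv2.imwrite("street_scene_detections.jpg", image)
```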